physics as study of ignorance

Contemporary physics is based on the following three main sets of principles:

  1. Variational Principles

  2. Statistical Principles

  3. Symmetry Principles

Various combinations of these principles led to the birth of the following fields:

  • Study of Classical Mechanics (1)

  • Study of Statistical Mechanics (2)

  • Study of Group and Representation Theory (3)

  • Study of Path Integrals (1 + 2)

  • Study of Gauge Theory (1 + 3)

  • Study of Critical Phenomena (2 + 3)

  • Study of Quantum Field Theory (1 + 2 + 3)

Notice that all three sets of principles are based on ignorances that arise from us being inside the structure we are trying to describe. 

  1. Variational Principles arise due to our inability to experience time as a continuum. (Path information is inaccessible.)

  2. Statistical Principles arise due to our inability to experience space as a continuum. (Coarse graining is inevitable.)

  3. Symmetry Principles arise due to our inability to experience spacetime as a whole.  (Transformations are undetectable.)

Since Quantum Field Theory is based on all three principles, it seems that most of the structure we see arises from these sets of ignorances themselves. From the hypothetical outside point of view of God, none of these ignorances are present, and therefore none of the entailed structures are present either.

The study of physics is not complete yet, but its historical progression suggests that its future depends on us discovering new aspects of our ignorances:

  1. Variational Principles were discovered in the 18th Century.

  2. Statistical Principles were discovered in the 19th Century.

  3. Symmetry Principles were discovered in the 20th Century.

The million-dollar question is which principle we will discover in the 21st Century. Will it help us merge General Relativity with Quantum Field Theory or simply lead to the birth of brand-new fields of study?

emergence of life

Cardiac rhythm is a good example of a network that includes DNA only as a source of protein templates, not as an integral part of the oscillation network. If proteins were not degraded and needing replenishment, the oscillation could continue indefinitely with no involvement of DNA...

Functional networks can therefore float free, as it were, of their DNA databases. Those databases are then used to replenish the set of proteins as they become degraded. That raises several more important questions. Which evolved first: the networks or the genomes? As we have seen, attractors, including oscillators, form naturally within networks of interacting components, even if these networks start off relatively uniform and unstructured. There is no DNA, or any equivalent, for a spiral galaxy or for a tornado. It is very likely, therefore, that networks of some kinds evolved first. They could have done so even before the evolution of DNA. Those networks could have existed by using RNA as the catalysts. Many people think there was an RNA world before the DNA-protein world. And before that? No one knows, but perhaps the first networks were without catalysts and so very slow. Catalysts speed-up reactions. They are not essential for the reaction to occur. Without catalysts, however, the processes would occur extremely slowly. It seems likely that the earliest forms of life did have very slow networks, and also likely that the earliest catalysts would have been in the rocks of the Earth. Some of the elements of those rocks are now to be found as metal atoms (trace elements) forming important parts of modern enzymes.

Noble - Dance to the Tune of Life (Pages 83, 86)

Darwin unlocked evolution by understanding its slow nature. (He was inspired by the then-recent geological discoveries indicating that water - given enough time - can carve out entire canyons.) Today we are still under the influence of a similar pre-Darwinian bias. Just as we were biased in favor of fast changes (and could not see the slow-moving waves of evolution), we are biased in favor of fast entities. (Of course, what is fast or slow is defined with respect to the rate of our own metabolisms.) For instance, we get surprised when we see a fast-forwarded video of growing plants, because we equate life with motion and regard slow-moving life forms as inferior.

Evolution favors the fast, and therefore life is becoming faster at an ever-increasing rate. Think of catalyzed reactions, myelinated neurons, etc. Replication is another such accelerator technology. Although we tend to view it as a must-have quality of life, what really matters for the definition of life is repeating “patterns”, and such patterns can emerge without any replication mechanisms. In other words, what matters is persistence. Replication mechanisms merely speed up the evolution of new forms of persistence. That is all. Let me reiterate: Evolution has only two ingredients, constant variation and constant selection. (See the Evolution as a Physical Theory post.) Replication is not fundamental.

Unfortunately, most people still think that replicators came first and that functional (metabolic) networks emerged later. This order is extremely unlikely, since replicators have an error-correction problem and need supportive taming mechanisms (e.g. metabolic networks) right from the start.

In our present state of ignorance, we have a choice between two contrasting images to represent our view of the possible structure of a creature newly emerged at the first threshold of life. One image is the replicator model of Eigen, a molecular structure tightly linked and centrally controlled, replicating itself with considerable precision, achieving homeostasis by strict adherence to a rigid pattern. The other image is the "tangled bank" of Darwin, an image which Darwin put at the end of his Origin of Species to make vivid his answer to the question, What is Life?, an image of grasses and flowers and bees and butterflies growing in tangled profusion without any discernible pattern, achieving homeostasis by means of a web of interdependences too complicated for us to unravel.

The tangled bank is the image which I have in mind when I try to imagine what a primeval cell would look like. I imagine a collection of molecular species, tangled and interlocking like the plants and insects in Darwin's microcosm. This was the image which led me to think of error tolerance as the primary requirement for a model of a molecular population taking its first faltering steps toward life. Error tolerance is the hallmark of natural ecological communities, of free market economies and of open societies. I believe it must have been a primary quality of life from the very beginning. But replication and error tolerance are naturally antagonistic principles. That is why I like to exclude replication from the beginnings of life, to imagine the first cells as error-tolerant tangles of non-replicating molecules, and to introduce replication as an alien parasitic intrusion at a later stage. Only after the alien intruder has been tamed, the reconciliation between replication and error tolerance is achieved in a higher synthesis, through the evolution of the genetic code and the modern genetic apparatus.

The modern synthesis reconciles replication with error tolerance by establishing the division of labor between hardware and software, between the genetic apparatus and the gene. In the modern cell, the hardware of the genetic apparatus is rigidly controlled and error-intolerant. The hardware must be error-intolerant in order to maintain the accuracy of replication. But the error tolerance which I like to believe inherent in life from its earliest beginnings has not been lost. The burden of error tolerance has merely been transferred to the software. In the modern cell, with the infrastructure of hardware firmly in place and subject to a strict regime of quality control, the software is free to wander, to make mistakes and occasionally to be creative. The transfer of architectural design from hardware to software allowed the molecular architects to work with a freedom and creativity which their ancestors before the transfer could never have approached.

Dyson - Infinite in All Directions (Pages 92-93)

Notice how Dyson frames replication mechanisms as stabilizers allowing metabolic networks to take even further risks. In other words, replication not only speeds up evolution but also enlarges the configuration space for it. So we see not only more variation per second but also more variation at any given time.

Going back to our original question…

Life was probably unimaginably slow at the beginning. In fact, such life forms are probably still out there. Are spiral galaxies alive for instance? What about the entire universe? We may be just too local and too fast to see the grand patterns.

As Noble points out in the excerpt above, our bodies contain catalyst metals, which are remnants of our deep past. Those metals were forged inside stars far away from us and shot across space via supernova explosions. (This is how the heavy atoms in the universe were formed and dispersed.) In other words, they used to be participants in vast-scale metabolic networks.

In some sense, life never emerged. It was always there to begin with. It is just speeding up over time, and thereby the life forms of today are becoming blind to the life forms of deep yesterday.

It is really hard not to be mystical about all this. Have you ever felt bad about disrupting repeating patterns, for instance, no matter how physical they are? You can literally hurt such patterns. They are the most embryonic forms of life, some of which are as old as those archaic animals that still hang around in the deep oceans. Perhaps we should all work a little on our artistic sensitivities, which would in turn probably give rise to a general increase in our moral sensitivities.


How Fast Will Things Get?

Life is a nested hierarchy of complexity layers, and the number of these layers increases over time. We are already forming many layers above ourselves, the most dramatic of which is the entirety of our technological creations, namely what Kevin Kelly calls the Technium.

Without doubt, we will look pathetically slow to the newly emerging electronic forms of life. Just as we have a certain degree of control over the slow-moving plants, they too will harvest us for their own good (even as they need us). (This is already happening as we become more and more glued to our screens.)

But how much faster will things eventually get?

According to the generally accepted theories, our universe started off with a big bang and went through a very fast evolution that resulted in a sudden expansion of space. While physics has since been slowing down, biology (including new electronic forms) is picking up speed at a phenomenal rate.

Of all the sustainable things in the universe, from a planet to a star, from a daisy to an automobile, from a brain to an eye, the thing that is able to conduct the highest density of power - the most energy flowing through a gram of matter each second - lies at the core of your laptop.

Kelly - What Technology Wants (Page 59)
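
Kelly's claim sounds extravagant, but the arithmetic behind it is easy to reproduce in rough form. The figures below are order-of-magnitude estimates plugged in purely for illustration (the solar luminosity and mass are standard textbook values; the chip numbers are a generic guess for a laptop processor), not Kelly's own data:

```python
# Rough power densities in watts per gram (order-of-magnitude estimates only).
sun_luminosity_w = 3.8e26   # solar luminosity, watts (standard value)
sun_mass_g = 2.0e33         # solar mass, grams (standard value)

chip_power_w = 50.0         # assumed: typical laptop CPU package power
chip_mass_g = 10.0          # assumed: rough mass of the processor die and package

sun_w_per_g = sun_luminosity_w / sun_mass_g
chip_w_per_g = chip_power_w / chip_mass_g

print(f"Sun : {sun_w_per_g:.1e} W/g")                      # ~1.9e-07 W/g
print(f"Chip: {chip_w_per_g:.1e} W/g")                     # ~5.0e+00 W/g
print(f"Ratio: {chip_w_per_g / sun_w_per_g:.1e}x denser")  # ~2.6e+07
```

Even with generous error bars on the chip figures, the processor comes out millions of times more power-dense than the star.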

Evolution seems to be taking us toward a very strange end, an end that seems to contain life forms exhibiting features very much like those of the beginning states of physics: extreme speed and density. (I had brought up this possibility at the end of the Evolution as a Physical Theory post as well.)

Of course, flipping this logic, the physical background upon which life is currently unfolding is probably alive as well. I personally believe that this indeed is the case. To understand what I mean, we will first need to make an important conceptual clarification and then dive into Quantum Mechanics.



Autonomy as the Flip-Side of Control

Autonomy and control are two sides of the same coin, just like one man's freedom fighter is always another man's terrorist. In particular, what we can not exert any control over looks completely autonomous to us.

But how do you measure autonomy?

Firstly, notice that autonomy is a relative concept. In other words, nothing can be autonomous in and of itself. Secondly, the degree of autonomy correlates with the degree of unanticipatability. For instance, something will look completely autonomous to you only if you can not model its behavior at all. But what would such behavior actually look like? Any guesses? Yes, that is right: it would look completely random.

Random often means inability to predict... A random series should show no discernible pattern, and if one is perceived then the random nature of the series is denied. However, the inability to discern a pattern is no guarantee of true randomness, but only a limitation of the ability to see a pattern... A series of ones and noughts may appear quite random for use as a sequence against which to compare the tossing of a coin, head equals one, tails nought, but it also might be the binary code version of a well known song and therefore perfectly predictable and full of pattern to someone familiar with binary notation.

Shallis - On Time (Pages 122-124)

The fact that randomness is in the eye of the beholder (and that absolute randomness is an ill-defined notion) is the central tenet of the Bayesian school of probability. The spirit is also similar to how randomness is defined in algorithmic complexity theory, which I do not find surprising at all since computer scientists are empiricists at heart.

Kolmogorov randomness defines a string (usually of bits) as being random if and only if it is shorter than any computer program that can produce that string. To make this precise, a universal computer (or universal Turing machine) must be specified, so that "program" means a program for this universal machine. A random string in this sense is "incompressible" in that it is impossible to "compress" the string into a program whose length is shorter than the length of the string itself. A counting argument is used to show that, for any universal computer, there is at least one algorithmically random string of each length. Whether any particular string is random, however, depends on the specific universal computer that is chosen.

Wikipedia - Kolmogorov Complexity

Here a completely different terminology is used to say basically the same thing:

  • “compressibility” = “explainability” = “anticipatability”

  • “randomness can only be defined relative to a specific choice of a universal computer” = “randomness is in the eye of the beholder”
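
The “compressibility = anticipatability” equation can be made tangible with an off-the-shelf compressor. The sketch below uses zlib as a stand-in for the universal machine, which is only a heuristic (a fixed compressor is far weaker than a universal Turing machine), but the asymmetry it exposes is the one Kolmogorov complexity formalizes:

```python
import os
import zlib

n = 100_000

# A highly patterned "binary" string: a short motif of ASCII 0s and 1s repeated over and over
# (echoing Shallis's example of a song encoded in binary).
patterned = b"0110" * (n // 4)

# Bytes from the operating system's entropy source: pattern-free as far as zlib is concerned.
random_like = os.urandom(n)

def compression_ratio(data: bytes) -> float:
    """Compressed length relative to original length (smaller = more pattern found)."""
    return len(zlib.compress(data, 9)) / len(data)

print("patterned  :", round(compression_ratio(patterned), 4))   # tiny: a short description suffices
print("random-like:", round(compression_ratio(random_like), 4)) # ~1 or slightly above: incompressible
```

Whether the second string is “truly” random is beside the point: to an observer (or compressor) that can not model its source, it is unanticipatable, and that is all the relative definition requires.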



Quantum Autonomy

Quantum Mechanics has randomness built into its very foundations. Whether this randomness is absolute or the theory itself is currently incomplete is not relevant. There is a maximal degree of unanticipatability (i.e. autonomy) in Quantum Mechanics, and it is practically uncircumventable. (Even the most deterministic interpretations of Quantum Mechanics fall back on artificially introduced stochastic background fields.)

Individually, quantum collapses are completely unpredictable, but collectively they exhibit a pattern over time. (For more on such structured forms of randomness, read this older blog post.) This is actually what allows us to tame the autonomy of quantum states in practice: Although we can not exert any control over them at any point in time, we can control their behavior over a period of time. Of course, as life evolves and gets faster (as pointed out at the beginning of this post), it will be able to probe time periods at ever higher frequencies and thereby tighten its grip on quantum phenomena.
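
Here is a minimal sketch of the “individually unpredictable, collectively patterned” point, using nothing quantum at all: just Born-rule probabilities fed into a classical random number generator. The probability value is made up for illustration and is not tied to any particular experiment:

```python
import random
from collections import Counter

# Hypothetical qubit measured in the computational basis:
# the Born rule assigns P(0) = |alpha|^2 and P(1) = |beta|^2.
p_zero = 0.8  # illustrative value for |alpha|^2

def measure() -> int:
    """One 'collapse': individually unpredictable, returns 0 or 1."""
    return 0 if random.random() < p_zero else 1

# No single outcome can be predicted...
print([measure() for _ in range(10)])

# ...but the ensemble settles onto the Born-rule pattern.
counts = Counter(measure() for _ in range(100_000))
print(counts[0] / 100_000)  # approaches 0.8 as the sample grows
```

The faster an observer can accumulate such ensembles, the shorter the time window over which the collective pattern, and hence practical control, becomes available; that is the sense in which faster life tightens its grip on quantum phenomena.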

Another way to view maximal unanticipatability is to frame it as maximal complexity. Remember that every new complexity layer emerges through a complexification process. Once a functional network with a boundary becomes complex enough, it starts to behave more like an “actor” with teleological tendencies. Once it becomes ubiquitous enough, it starts to display an ensemble-behavior of its own, forming a higher layer of complexity and hiding away its own internal complexities. All fundamentally unanticipatable phenomena in nature are instances of such actors who seem to have a sense of unity (a form of consciousness?) that they “want” to preserve.

Why should quantum phenomena be an exception? Perhaps Einstein was right that God does not play dice, and there are experimentally inaccessible deeper levels of reality from which quantum phenomena emerge? (Bohm was also thinking along these lines.) Perhaps it is turtles all the way down (and up)?

Universe as a Collection of Nested Autonomies

Fighting for power is the same thing as fighting for control, and gaining control of something necessitates outgrowing the complexity of that thing. That is essentially why life is becoming more complex and autonomous over time.

Although each complexity layer can accommodate a similar level of maximal complexity within itself before starting to spontaneously form a new layer above itself, due to the nested nature of these layers, total complexity rises as new layers emerge. (e.g. We are more complex than our cells since we contain their complexity as well.)

It is not surprising that social sciences are much less successful than natural sciences. Humans are not that great at modeling other humans. This is expected. You need to out-compete in complexity what you desire to anticipate. Each layer can hope to anticipate only the layers below it. Brains are not complex enough to understand themselves. (It is amazing how we equate smartness with the ability to reason about lower layers like physics, chemistry etc. Social reasoning is actually much more sophisticated, but we look down on it since we are naturally endowed with it.)

Side Note: Generally speaking, each layer can have generative effects only upwards and restrictive effects only downwards. Generative effects can be bad for you as in having cancer cells and restrictive effects can be good for you as in having a great boss. Generative effects may falsely look restrictive in the sense that what generates you locks you in form, but it is actually these effects themselves which enable the exploration of the form space in the first place. Think at a population level, not at an individual level. Truth resides there.

Notice that as you move up to higher levels, autonomy becomes harder to describe. Quantum Mechanics, which currently seems to be the lowest level of autonomy, is open to mathematical scrutiny, but higher levels can only be simulated via computational methods and are not analytically accessible.

I know, you want to ask “What about General Relativity? It describes higher level phenomena.” My answer to that would be “No, it does not.”

General Relativity does not model a higher level complexity. It may be very useful today but it will become increasingly irrelevant as life dominates the universe. As autonomy levels increase all over, trying to predict galactic dynamics with General Relativity will be as funny and futile as using Fluid Dynamics to predict the future carbon dioxide levels in the atmosphere without taking into consideration the role of human beings. General Relativity models the aggregate dynamics of quantum “decisions” made at the lowest autonomy level. (We refer to this level-zero as “physics”.) It is predictive as long as higher autonomy levels do not interfere.

God as the Highest Level of Autonomy

The universe shows evidence of the operations of mind on three levels. The first level is elementary physical processes, as we see them when we study atoms in the laboratory. The second level is our direct human experience of our own consciousness. The third level is the universe as a whole. Atoms in the laboratory are weird stuff, behaving like active agents rather than inert substances. They make unpredictable choices between alternative possibilities according to the laws of quantum mechanics. It appears that mind, as manifested by the capacity to make choices, is to some extent inherent in every atom. The universe as a whole is also weird, with laws of nature that make it hospitable to the growth of mind. I do not make any clear distinction between mind and God. God is what mind becomes when it has passed beyond the scale of our comprehension. God may be either a world-soul or a collection of world-souls. So I am thinking that atoms and humans and God may have minds that differ in degree but not in kind. We stand, in a manner of speaking, midway between the unpredictability of atoms and the unpredictability of God. Atoms are small pieces of our mental apparatus, and we are small pieces of God's mental apparatus. Our minds may receive inputs equally from atoms and from God.

Freeman Dyson - Progress in Religion

I remember the moment when I ran into this exhilarating paragraph of Dyson. It was such a relief to find a high-caliber thinker who also interprets quantum randomness as choice-making. Nevertheless, with all due respect, I would like to clarify two points that I hope will help you understand Dyson’s own personal theology from the point of view of the philosophy outlined in this post.

  • There are many, many levels of autonomy. Dyson points out only the most obvious three. (He calls them “minds” rather than autonomies.)

    • Atomic. Quantum autonomy is extremely pure and in your face.

    • Human. A belief in our own autonomy comes almost by default.

    • Cosmic. Universe as a whole feels beyond our understanding.

  • Dyson defines God as “what mind becomes when it has passed beyond the scale of our comprehension” and then he refers to the entirety of the universe as God as well. I on the other hand would have defined God as the top level autonomy and not referred to human beings or the universe at all, for the following two reasons:

    • God should not be human-centric. Each level should be able to talk about its own God. (There are many things out there that would count you as part of their God.)

      • Remember that the levels below you can exert only generative effects on you. It is only the levels above that can restrict you. In other words, God is what constrains you. Hence, striving for freedom is equivalent to striving for Godlessness. (It is no surprise that people turn more religious when they are physically weak or mentally susceptible.) Of course, complete freedom is an unachievable fantasy. What makes humans human is the nurturing (i.e. controlling) cultural texture they are born into. In fact, human babies can not even survive without a minimal degree of parental and cultural intervention. (Next time you look into your parents’ eyes, remember that part of your God resides in there.) Of course, we also have a certain degree of freedom in choosing what to be governed by. (Some let money govern them for instance.) At the end of the day, God is a social phenomenon. Every single higher level structure we create (e.g. governments selected by our votes, algorithms trained on our data) governs us back. Even the ideas and feelings we restrict ourselves by arise via our interactions with others and do not exist in a vacuum.

    • Most of the universe currently seems to exhibit only the lowest level of autonomy. Not everywhere is equally alive.

      • However, as autonomy reaches higher levels, it will expand in size as well, due to the nested and expansionary nature of complexity generation. (Atomic autonomy lacks extensiveness in the most extreme sense.) So eventually the top level autonomy should grow in size and seize the whole of reality. What happens then? How can such an unfathomable entity exercise control over the entire universe, including itself? Is not self-control paradoxical in the sense that one can not out-compete oneself in complexity? We should not expect to be able to answer such tough questions, just like we do not expect a stomach cell to understand human consciousness. Higher forms of life will be wildly different and smarter than us. (For instance, I bet that they will be able to manipulate the spacetime fabric, which seems to be an emergent phenomenon.) In some sense, it is not surprising that there is such a proliferation of religions. God is meant to be beyond our comprehension.

Four men, who had been blind from birth, wanted to know what an elephant was like; so they asked an elephant-driver for information. He led them to an elephant, and invited them to examine it; so one man felt the elephant's leg, another its trunk, another its tail and the fourth its ear. Then they attempted to describe the elephant to one another. The first man said “The elephant is like a tree”. “No,” said the second, “the elephant is like a snake”. “Nonsense!” said the third, “the elephant is like a broom”. “You are all wrong,” said the fourth, “the elephant is like a fan”. And so they went on arguing amongst themselves, while the elephant stood watching them quietly.

- The Indian folklore story of the blind men and the elephant, as adapted from E. J. Robinson’s Tales and Poems of South India by P. T. Johnstone in the Preface of Sketches of an Elephant

reality and analytical inquiry

What is real and out there? This question is surprisingly hard to answer.

The only way we seem to be able to define ontology is as shared epistemology. (Every other definition suffers from an immediate deficiency.) In other words, what is real is what every possible point of view agrees upon, and vice versa. There is no such thing as your reality. (Note that this definition breaks the duality between ontology and epistemology. The moment you make inferences about the former, it gets subsumed by the latter. Is this surprising? Epistemology is all about making inferences. In other words, the scientific method itself is what is breaking the duality.)

Now we have a big problem: Ontological changes can not be communicated to all points of view instantaneously. This is outlawed by the finiteness of the speed of the fastest causation propagator, which is usually taken to be light. In fact, according to our current understanding of physics, there seems to be nothing invariant across all points of view. (e.g. Firewall paradox, twin paradox etc.) Whenever we get our hands on some property, it slips away with the next advance in our theories.

This is a weird situation, an absolute mind-fuck to be honest. If we endorse all points of view, we can define ontology, but then nothing seems to be real. If we endorse only our own point of view, we can not define ontology at all and get trapped in a solipsistic world where every other point of view becomes unreal and other people turn into zombies.

Could all the different points of view be part of a single - for lack of a better term, “God” - point of view? In this case, our own individual point of view becomes unreal. This is a bad sacrifice indeed, but could it help us salvage reality? Nope… Can the universe observe itself? The question does not even make any sense!

It seems like physics can not start off without assuming a solipsistic worldview, adopting a single coordinate system which can not be sure about the reality of other coordinate systems.

In an older blog post, I had explained how dualities emerge as byproducts of analytical inquiry and thereby artificially split the unity of reality. Here we have a similar situation. The scientific method (i.e. analytical inquiry) is automatically giving rise to solipsism and thereby artificially splitting the unity of reality into considerations from different points of view.

In fact, the notions of duality and solipsism are very related. To see why, let us assume that we have a duality between P and not-P. Then

  • Within a single point of view, nothing can satisfy both P and not-P.

  • No property P stays invariant across all points of view.

Here, the first statement is a logical necessity and the second statement is enforced upon us by our own theories. We will take the second statement as the definition of solipsism.

Equivalently, we could have said

  • If property P holds from the point of view of A and not-P holds from the point of view of B, then A can not be equal to B.

  • For every property P, there exists at least one pair (A,B) such that A is not equal to B and P holds from the point of view of A while not-P holds from the point of view of B.

Now let X be the set of pairs (A,B) such that P holds from the point of view of A and not-P holds from the point of view of B. Also let △ stand for the diagonal set consisting of pairs (A,A). Then the above statements become

  • X can not hit △.

  • X can not miss the complement of △.

Using just mathematical notation we have

  • X ∩ △ = ∅

  • X ∩ △’ ≠ ∅
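
Unpacking X back into quantifiers gives an equivalent formulation (a sketch in standard logical notation, with X understood as depending on the property P in question):

```latex
% Duality (a logical necessity): no single point of view A satisfies both P and not-P.
\forall P\, \forall A:\ \neg\bigl(P(A) \wedge \neg P(A)\bigr)
\qquad\text{i.e.}\qquad X \cap \triangle = \emptyset

% Solipsism (imposed on us by our theories): every property is disputed
% by at least one pair of distinct points of view.
\forall P\, \exists A \neq B:\ P(A) \wedge \neg P(B)
\qquad\text{i.e.}\qquad X \cap \triangle' \neq \emptyset
```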

In other words, dualities and solipsism are defined using the same ingredients! Analytical inquiry gives rise to both at the same time. It supplies you with labels to attach to reality (via the above equality) but simultaneously takes the reality away from you (via the above inequality). Good deal, right? After all (only) nothing comes for free!

Phenomena are the things which are empty of inherent existence, and inherent existence is that of which phenomena are empty.

Jeffrey Hopkins - Meditations on Emptiness (Page 9)


Recall that at the beginning of this post we defined ontology as shared epistemology. One can also go the other way around and define epistemology as shared ontology. What does this mean?

  • To say that some thing exists we need every mind to agree to it.

  • To say that some statement is true we need every body to obey it.

This is actually how truth is defined in model theory. A statement is deemed true if and only if it holds in every possible embodiment.
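
In standard notation, this is the model-theoretic definition of semantic consequence: a statement is true relative to a theory precisely when every “body” (model) realizing the theory also realizes the statement.

```latex
T \models \varphi
\quad\Longleftrightarrow\quad
\text{for every structure } M:\ \bigl(M \models T \;\Rightarrow\; M \models \varphi\bigr)
```

The special case with an empty T, truth in every structure whatsoever, is what logicians call validity.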

In this sense, epistemology-ontology duality mirrors mind-body duality. (For a mind, the reality consists of bodies and what is fundamental is existence. For a body, the reality consists of minds and what is fundamental is truth.) For thousands of years, Western philosophy has been trying to break this duality, which has popped up in various forms. Today, for instance, physicists are still debating whether “it” arose from “bit” or “bit” arose from “it”.

Let us now do another exercise. What is the epistemological counterpart of the ontological statement that there are no invariances in physics?

  • Ontology. There is no single thing that every mind agrees to.

  • Epistemology. There is no single statement that every body obeys.

Sounds outrageous, right? How can there be no statement that is universally true? Truth is absolute in logic, but relative in physics. We are not allowed to make any universal statements in physics, no matter how trivial.

necessity of dualities

All truths lie between two opposite positions. All dramas unfold between two opposing forces. Dualities are both ubiquitous and fundamental. They shape both our mental and physical worlds.

Here are some examples:

Mental

objective | subjective
rational | emotional
conscious | unconscious
reductive | inductive
absolute | relative
positive | negative
good | evil
beautiful | ugly
masculine | feminine


Physical

deterministic | indeterministic
continuous | discrete
actual | potential
necessary | contingent
inside | outside
infinite | finite
global | local
stable | unstable
reversible | irreversible

Notice that even the above split between the two groups itself is an example of duality.

These dualities arise as an epistemological byproduct of the method of analytical inquiry. That is why they are so thoroughly infused into the languages we use to describe the world around us.

Each relatum constitutive of dipolar conceptual pairs is always contextualized by both the other relatum and the relation as a whole, such that neither the relata (the parts) nor the relation (the whole) can be adequately or meaningfully defined apart from their mutual reference. It is impossible, therefore, to conceptualize one principle in a dipolar pair in abstraction from its counterpart principle. Neither principle can be conceived as "more fundamental than," or "wholly derivative of" the other.

Mutually implicative fundamental principles always find their exemplification in both the conceptual and physical features of experience. One cannot, for example, define either positive or negative numbers apart from their mutual implication; nor can one characterize either pole of a magnet without necessary reference to both its counterpart and the two poles in relation - i.e. the magnet itself. Without this double reference, neither the definiendum nor the definiens relative to the definition of either pole can adequately signify its meaning; neither pole can be understood in complete abstraction from the other.

- Epperson & Zafiris - Foundations of Relational Realism (Page 4)


Various lines of Eastern religious and philosophical thinkers intuited how languages can hide underlying unity by artificially superimposing conceptual dualities (chief among them the almighty object-subject duality), and posited the nondual wholeness of nature several thousand years before the advent of quantum mechanics. (The analytical route to enlightenment is always longer than the intuitive route.)

Western philosophy on the other hand

  • ignored the mutually implicative nature of all dualities and denied the inaccessibility of the wholeness of nature to analytical inquiry.

  • got fooled by the precision of mathematics which is after all just another language invented by human beings.

  • confused partial control with understanding and engineering success with ontological precision. (Understanding is a binary parameter, meaning that either you understand something or you do not. Control on the other hand is a continuous parameter, meaning that you can have partial control over something.)

As a result, Western philosophers mistook representation for reality and tried to confine truth to one end of each dualism in order to create a unity of representation matching the unity of reality.

Side Note: Hegel was an exception. Like Buddha, he too saw dualities as artificial byproducts of analysis, but unlike him, he suggested that one should transcend them via synthesis. In other words, for Buddha unity resided below and for Hegel unity resided above. (Buddha wanted to peel away complexity to its simplest core, while Hegel wanted to embrace complexity in its entirety.) While Buddha stopped theorizing and started meditating instead, Hegel saw the salvation through higher levels of abstraction via alternating chains of analyses and syntheses. (Buddha wanted to turn off cognition altogether, while Hegel wanted to turn up cognition full-blast.) Perhaps at the end of the day they were both preaching the same thing. After all, at the highest level of abstraction, thinking probably halts and emptiness reigns.

It was first the social thinkers who woke up and revolted against the grand narratives built on such discriminative pursuits of unity. There was just way too much politically and ethically at stake for them. The result was an overreaction, replacing unity with multiplicity and considering all points of views as valid. In other words, the pendulum swung the other way and Western philosophy jumped from one state of deep confusion into another. In fact, this time around the situation was even worse since there was an accompanying deep sense of insecurity as well.

The cacophony spread into hard sciences like physics too. Grand narrations got abandoned in favor of instrumental pragmatism. Generations of new physicists got raised as technicians who basically had no clue about the foundations of their disciplines. The most prominent of them could even publicly make an incredibly naive claim such as “something can spontaneously arise from nothing through a quantum fluctuation” and position it as a non-philosophical and non-religious alternative to existing creation myths.

Just to be clear, I am not trying to argue here in favor of Eastern holistic philosophies over Western analytic philosophies. I am just saying that the analytic approach necessitates us to embrace dualities as two-sided entities, including the duality between holistic and analytic approaches.


Politics experienced a similar swing from conservatism (which hailed unity) towards liberalism (which hailed multiplicity). During this transition, all dualities and boundaries got dissolved in the name of more inclusion and equality. The everlasting dynamism (and the subsequent wisdom) of dipolar conceptual pairs (think of magnetic poles) got killed off in favor of an unsustainable burst in the number of ontologies.

Ironically, liberalism resulted in more sameness in the long run. For instance, the traditional assignment of roles and division of tasks between father and mother got replaced by equal parenting principles applied by genderless parents. Of course, upon the dissolution of the gender dipolarity, the number of parents one can have became flexible as well. Having one parent became as natural as having two, three or four. In other words, parenting became a community affair in its truest sense.

 
[Figure: Duality]
 

The even greater irony was that liberalism itself forgot that it represented one extreme end of another duality. It was in a sense a self-defeating doctrine that aimed to destroy all discriminative pursuits of unity except for that of itself. (The only way to “resolve” this paradox is to introduce a conceptual hierarchy among dualities where the higher ones can be used to destroy the lower ones, in a fashion that is similar to how mathematicians deal with Russell’s paradox in set theory.)


Of course, at some point the pendulum will swing back to the pursuit of unity again. But while we swing back and forth between unity and multiplicity, we keep skipping the only sources of representational truths, namely the dualities themselves. For some reason we are extremely uncomfortable with the fact that the world can only be represented via mutually implicative principles. We find “one” and “infinity” tolerable but “two” arbitrary and therefore abhorrent. (The prevalence of “two” in mathematics and “three” in physics was mentioned in a previous blog post.)

I am personally obsessed with “two”. I look out for dualities everywhere and share the interesting finds here on my blog. In fact, I go even further and try to build my entire life on dualities whose two ends mutually enhance each other every time I visit them.

We should not collapse dualities into unities for the sake of satisfying our sense of belonging. We need to counteract this dangerous sociological tendency using our common sense at the individual level. Choosing one side and joining the groupthink is the easy way out. We should instead strive to carve out our identities by consciously sampling from both sides. In other words, when it comes to complex matters, we should embrace the dualities as a whole and not let them split us apart. (Remember, if something works very well, its dual should also work very well. However, if something is true, its dual has to be wrong. This is exactly what separates theory from reality.)

Of course, it is easy to talk about these matters, but who said that pursuit of truth would be easy?

Perhaps there is no pursuit to speak of unless one is pre-committed to choose a side, and swinging back and forth between the two ends of a dualism is the only way nature can maintain its neutrality without sacrificing its dynamicity? (After all, there is no current without a polarity in the first place.)

Perhaps we should just model our logic after reality (like Hegel wanted to) rather than expect reality to conform to our logic? (In this way we can have our cake and eat it too!)

classical vs innovative businesses

As you move away from zero-to-one processes, economic activities become more and more sensitive to macroeconomic dynamics.

Think of the economy as a universe. Innovative startups correspond to quantum mechanical phenomena rendering something from nothing. The rest of the economy works classically within the general relativity framework where everything is tightly bound to everything else. To predict your future you need to predict the evolution of everything else as well. This of course is an extremely stressful thing to do. It is much easier to exist outside the tightly bound system and create something from scratch. For instance, you can build a productivity software that will help companies increase their profit margins. In some sense such a software will exist outside time. It will sell whether there is an economic downturn or an upturn.


In classical businesses, forecasting the near future is extremely hard. Noise clears out when you look a little further into the future. But the far future is again quite hard to talk about, since you start feeling the long-term effects of the innovation being made today. So the difficulty hierarchy looks as follows:

near future > far future > mid future

In innovative businesses, forecasting the near future is quite easy. In the long run, everyone agrees that transformation is inevitable, so forecasting the far future is hard but still possible. However, what is going to happen in the mid term is extremely hard to predict. In other words, the above hierarchy gets flipped:

mid future > far future > near future

Notice that what is mid future is actually quite hard to define. It can move around with the wind, so to speak, just as intended by the goddesses of fate in Greek mythology.

In Greek mythology the Moirae were the three Fates, usually depicted as dour spinsters. One Moira spun the thread of a newborn's life. The other Moira counted out the thread’s length. And the third Moira cut the thread at death. A person’s beginning and end were predetermined. But what happened in between was not inevitable. Humans and gods could work within the confines of one's ultimate destiny.

Kevin Kelly - What Technology Wants

I personally find it much more natural to just hold onto near future and far future, and let the middle inflection point dangle around. In other words I prefer working with innovative businesses.

Middle zones are generally speaking always ill-defined, presenting another high level justification for the barbell strategy popularized by Nassim Nicholas Taleb. Mid-term behavior of complex systems is tough to crack. For instance, short-term weather forecasts are highly accurate and long-term climate changes are also quite foreseeable, but what is going to happen in mid-term is anybody’s guess.

The far future always involves "structural" change. Things will definitely change, but the change is not of a statistical nature. As mentioned earlier, innovative businesses are not affected by the short-term statistical (environmental / macroeconomic) noise. Instead they suffer from mid-term statistical noise of the type that phase-transition states exhibit in physics. (Think of the turbulence phenomenon.) So the above two difficulty hierarchies can be seen as particular manifestations of the following master hierarchy:

statistical unpredictability > structural unpredictability > predictability


Potential entrepreneurs jumping straight into tech without building any experience in traditional domains are akin to physics students jumping straight into quantum mechanics without learning classical mechanics first. This jump is possible, but also pedagogically problematic. It is much more natural to learn things in the historical order that they were discovered. (Venture capital is a very recent phenomenon.) Understanding the idiosyncrasies and complexities of innovative businesses requires knowledge of how the usual, classical businesses operate.

Moreover, just like quantum states decohere into classical states, innovative businesses behave more and more like classical businesses as they get older and bigger. The word “classical” just means the “new” that has passed the test of time. Similarly, decoherence happens via entanglements, which is basically how time progresses at quantum level.

By the way, this transition is very interesting from an intellectual point of view. For instance, innovative businesses are valued using a revenue multiple, while classical businesses are valued using a profit multiple. When exactly do we start to value a mature innovative business using a profit multiple? How can we tell that it has matured? When exactly does a blue ocean become a red one? With the first blood spilled by the death of competitors? Is that an objective measure? After all, it is the investor’s expectations themselves which sustain innovative businesses that burn tons of cash all the time.
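
To see why the moment of switching matters so much, here is a toy comparison with made-up numbers and multiples (real multiples vary wildly by sector and cycle): at thin margins, a revenue multiple and a profit multiple can price the very same company an order of magnitude apart.

```python
# Hypothetical company: $50M revenue at an 8% net margin (all figures invented).
revenue = 50_000_000
profit = revenue * 0.08

revenue_multiple = 10   # illustrative "innovative" style multiple
profit_multiple = 15    # illustrative "classical" style multiple

print(f"Valued on revenue: ${revenue * revenue_multiple:,.0f}")  # $500,000,000
print(f"Valued on profit:  ${profit * profit_multiple:,.0f}")    # $60,000,000
```

The cliff between those two numbers is exactly what makes the maturity question so contentious.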

Also, notice that, just as all classical businesses were once innovative businesses, all innovative businesses are built upon the stable foundations provided by classical businesses. So we should not think of the relationship as one way. Quantum may become classical, but quantum states are always prepared by classical actors in the first place.


What happens to classical businesses as they get older and bigger? They either evolve or die. Combining this observation with the conclusions of the previous two sections, we deduce that the combined predictability-type timeline of an innovative business becoming a classical one looks as follows:

  1. (Innovative) Near Future: Predictability

  2. (Innovative) Mid Future: Statistical Unpredictability (Buckle up. You are about to go through some serious turbulence!)

  3. (Innovative) Far Future: Structural Unpredictability (Congratulations! You successfully landed. Older guys need to evolve or die.)

  4. (Classical) Near Future: Statistical Unpredictability (Wear your suit. There seems to be radiation everywhere on this planet!)

  5. (Classical) Mid Future: Predictability

  6. (Classical) Far Future: Structural Unpredictability (New forms of competition landed. You are outdated. Will you evolve or die?)

Notice the alternation between structural and statistical forms of unpredictability over time. Is it coincidental?


Industrial firms thrive on reducing variation (manufacturing errors); creative firms thrive on increasing variation (innovation).
- Patty McCord - How Netflix Reinvented HR

Here Patty’s observation is in line with our analogy. She is basically restating the disparity between the deterministic nature of classical mechanics and the statistical nature of quantum mechanics.

Employees in classical businesses feel like cogs in the wheel, because what needs to be done is already known with great precision and there is nothing preventing the operations from being run with utmost efficiency and predictability. They are (again just like cogs in the wheel) utterly dispensable and replaceable. (Operating in red oceans, these businesses primarily focus on cost minimization rather than revenue maximization.)

Employees in innovative businesses, on the other hand, are given a lot more space to maneuver because they are the driving force behind an evolutionary product-market fit process that is not yet complete (and in some cases will never be complete).


Investment pitches too have quite opposite dynamics for innovative and classical businesses.

  • Innovative businesses raise money from venture capital investors, while classical businesses raise money from private equity investors who belong to a completely different culture.

  • If an entrepreneur prepares a 10-megabyte Excel document for a venture capital firm, then he will be perceived as delusional and naive. If he does not do the same for a private equity firm, then he will be perceived as entitled and preposterous.

  • Private equity investors look at data about the past and run statistical, blackbox models. Venture capital investors listen to stories about the future and think in causal, structural models. Remember, classical businesses are at the mercy of the macroeconomy, and a healthy macroeconomy displays maximum unpredictability. (All predictabilities are arbitraged away.) Whatever remnants of causal thinking are left in private equity are mostly about fixing internal operational inefficiencies.

  • The number of reasons for rejecting a private equity investment is more or less equal to the number of reasons for accepting one. In the venture capital world, rejection reasons far outnumber the acceptance reasons.

  • Experienced venture capital investors do not prepare before a pitch. The reason is not that they have a mastery over the subject matter of the entrepreneur’s work, but that there are far too many subject-matter-independent reasons for not making an investment. Private equity investors on the other hand do not have this luxury. They need to be prepared before a pitch because the devil is in the details.

  • For the venture capital investors, it is very hard to tell which company will achieve phenomenal success, but very easy to spot which one will fail miserably. Private equity investors have the opposite problem. They look at companies that have survived for a long time. Hence future-miserable-failures are statistically rare and hard to tell apart.

  • In innovative businesses, founders are (and should be) irreplaceable. In classical businesses, founders are (and should be) replaceable. (Similarly, professionals can successfully turn around failing classical companies, but can never pivot failing innovative companies.)

  • Private equity investors with balls do not shy away from turn-around situations. Venture capital investors with balls do not shy away from pivot situations.

evolution as a physical theory

Evolution has two ingredients, constant variation and constant selection.

Two important observations:

  1. Variation in biology exhibits itself in myriad forms, but they all can be traced back to the second law of thermodynamics, which says that entropy (on average) always increases over time. (It is not a coincidence that Darwin formulated the theory of natural selection in the 1850s, around the same time Clausius formulated the second law.)

  2. If you decrease selection pressures, the fitness landscape expands. You see fewer people dying around you, but you also see more variety at any given time. As we learn to cure and cope with (physical and mental) disorders using advances in (hard and soft) sciences / extend our societal safety nets further / improve our parenting and teaching techniques, more and more people stay alive and functional to go on to mate and reproduce. Progress creates more elbow room for evolution so that it can try out even wilder combinations than before.

    Conversely, if you increase selection pressures, the fitness landscape contracts, but in return the shortened life cycles enable evolution to shuffle through the contracted landscape of possibilities at a higher speed.

    Hence, selection pressure acts like a lever between spatial variation and temporal variation. Decreasing it increases spatial variation and decreases temporal variation; increasing it decreases spatial variation and increases temporal variation. (A toy simulation of this lever is sketched right after this list.)
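
Below is a toy haploid simulation of the lever, in the spirit of a Wright-Fisher model. Everything in it (population size, mutation rate, the Gaussian fitness effects) is invented for illustration; the point is only the qualitative tendency: cranking up the selection strength tends to shrink the standing (spatial) diversity while churning through types faster (temporal variation), and relaxing it does the opposite.

```python
import random
import statistics

def simulate(selection_strength: float,
             pop_size: int = 200,
             generations: int = 400,
             mutation_rate: float = 0.02,
             lag: int = 50,
             seed: int = 0):
    """Toy haploid model: each individual carries a 'type' with an associated fitness.

    Returns two crude proxies:
      spatial variation  = mean number of distinct types present per generation
      temporal variation = mean fraction of the population whose type did not
                           exist `lag` generations earlier
    """
    rng = random.Random(seed)
    fitness = {0: 1.0}           # type -> fitness value
    population = [0] * pop_size  # everyone starts as type 0
    next_type = 1

    history, diversity, turnover = [], [], []
    for _ in range(generations):
        # Selection: parents are sampled in proportion to their fitness.
        weights = [fitness[t] for t in population]
        population = rng.choices(population, weights=weights, k=pop_size)

        # Variation: mutations create brand-new types with perturbed fitness.
        for i in range(pop_size):
            if rng.random() < mutation_rate:
                parent_fitness = fitness[population[i]]
                fitness[next_type] = max(0.01, parent_fitness + selection_strength * rng.gauss(0, 1))
                population[i] = next_type
                next_type += 1

        history.append(set(population))
        diversity.append(len(history[-1]))
        if len(history) > lag:
            old_types = history[-1 - lag]
            turnover.append(sum(t not in old_types for t in population) / pop_size)

    return statistics.mean(diversity), statistics.mean(turnover)

for s in (0.0, 0.5):  # weak vs strong selection pressure
    spatial, temporal = simulate(selection_strength=s)
    print(f"selection_strength={s}: spatial variation ≈ {spatial:.1f} types, "
          f"temporal variation ≈ {temporal:.2f}")
```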

These observations imply respectively the following:

  1. Evolution never stops since the second law of thermodynamics is always valid.

  2. Remember, Einstein discovered that space and time by themselves are not invariant, only spacetime as a whole is. Similarly, evolution may slow down or speed up in space or time dimensions, but is always a constant at spacetime level. In other words, the natural setting for evolution is spacetime.

It is not surprising that thermodynamics has so far stood out as the oddball that can not be unified with the rest of physics. The principle of entropy seems to be only half the picture. It needs to be combined with the principle of selection to give rise to a spacetime-invariant theory at the level of biological variations. In other words, evolution (i.e. the principles of entropy and selection combined together) is more fundamental than thermodynamics from the point of view of physics.

Side Note: The trouble is that the principle of selection is a generative, computational notion and does not lend itself to a structural, mathematical definition. However the same can also be said for the principle of entropy, which looks quite awkward in its current mathematical forms. (Recall from the older post Biology as Computation that biology is primarily driven by computational notions.)

All of our theories in physics, except for thermodynamics, are time-symmetric. (i.e. They can not distinguish the past from the future.) The second law of thermodynamics, on the other hand, states that entropy (on average) always increases over time and can therefore (obviously) detect the direction of time. This strange asymmetry actually disappears in the theory of evolution, where something emerges to counterbalance the increasing entropy, namely increasing control.

Side Note: Entropy is said to increase globally but control can only be exercised locally. In other words, control decreases entropy locally by dumping it elsewhere, just like a leaf blower. Of course, you may be wondering how, as finite localized beings, we can formulate any global laws at all. I share the same sentiment because, empirically speaking, we can not distinguish a sufficiently large local counterbalance from a global one. Whenever I talk about the entropy of the whole universe, please take it with a grain of salt. (Formally speaking, thermodynamics is not even defined for open systems. In other words, it can not be globally applied to universes with no peripheries.) We will dig deeper into the global vs local dichotomy in Section 3. (Strictly speaking, thermodynamics can not be applied locally either, since every system is bound to be somewhat open due to our inability to completely control its environment.)
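
The leaf blower point can be written as a one-line budget (stated here in the usual textbook form for a system plus its surroundings, with the side note's caveats about what "total" can even mean still applying): the second law constrains only the sum, so a living system may push its own term negative as long as it dumps at least as much entropy into its environment.

```latex
\Delta S_{\text{total}}
  \;=\;
  \underbrace{\Delta S_{\text{system}}}_{\text{may be negative (local control)}}
  \;+\;
  \underbrace{\Delta S_{\text{surroundings}}}_{\text{must compensate}}
  \;\geq\; 0
```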


1. Increasing Control

All living beings exploit untapped energy sources to exhibit control and influence the future course of their own evolution.

Any state that is not lowest-energy can be considered semi-stable at best. Eventually, by the second law of thermodynamics, every such state evolves towards the lowest-energy configuration and emits energy as a by-product. By “untapped energy sources” I mean such extractable pockets of energy.

So, put more succinctly, all living beings harness entropy to reduce entropy.

The accumulative effect of their efforts over long periods of time has so far been quite dramatic indeed: What basically started out as simple RNA-based structures floating uncontrollably in oceans eventually turned into human beings proposing geo-engineering solutions to the global climate problems they themselves have created.

Let us now look at two interesting internal examples.


1.1. Cognitive Example

Our brains continuously make predictions and proactively interpolate from sensory data flow. In fact, when the higher (more abstract) layers of our neural networks lose the ability to project information downwards and become solely information-receivers, we slip into a comatose state.

Our predictive mental models slowly decay due to entropy (that is why people who go blind gradually lose their ability to dream in images) and are also at constant risk of becoming irrelevant. To address these problems, our brains continuously reconstruct the models in the light of new triggers and revise them in the light of new evidence. If they did not exercise such self-control, we would be stuck in an echo chamber of slowly decaying mental creations of our own. (That is why schizophrenic people gradually lose touch with reality.)

Autism and schizophrenia can be interpreted as imbalances in this controlled hallucination mechanism and be thought of as inverses of each other, causing respectively too much control and too much hallucination:

Aspects of autism, for instance, might be characterized by an inability to ignore prediction errors relating to sensory signals at the lowest levels of the brain’s processing hierarchy. That could lead to a preoccupation with sensations, a need for repetition and predictability, sensitivity to certain illusions, and other effects. The reverse might be true in conditions that are associated with hallucinations, like schizophrenia: The brain may pay too much attention to its own predictions about what is going on and not enough to sensory information that contradicts those predictions.

Jordana Cepelewicz - To Make Sense of the Present, Brains May Predict the Future
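
The "controlled hallucination" picture in the quote has a standard minimal form: the percept the brain settles on is a precision-weighted blend of its top-down prediction and the bottom-up sensory evidence. The sketch below is a generic Kalman-style update with an invented weighting parameter, not a model of any specific neural circuit, and the mapping of its two extremes onto the conditions in the quote is the quoted article's (contested) reading rather than established fact:

```python
def perceive(prediction: float, sensation: float, prior_weight: float) -> float:
    """Precision-weighted blend of a top-down prediction and bottom-up evidence.

    prior_weight near 1: the model trusts its own predictions and discounts the
                         evidence (the hallucination-prone end described in the quote).
    prior_weight near 0: prediction errors dominate every update (the end the
                         quote tentatively associates with autism).
    """
    prediction_error = sensation - prediction
    return prediction + (1.0 - prior_weight) * prediction_error

# Illustrative single update: the sensory input says 10, the internal model predicts 4.
for w in (0.05, 0.5, 0.95):
    estimate = perceive(prediction=4.0, sensation=10.0, prior_weight=w)
    print(f"prior_weight={w}: new estimate = {estimate:.2f}")
# -> 9.70 (evidence-dominated), 7.00 (balanced), 4.30 (prediction-dominated)
```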


1.2. Genomic Example

Since only 2 percent of our DNA actually codes for proteins, the remaining 98 percent was initially called "junk DNA", a label which later proved to be a wild misnomer. Today we know that this junk part performs a myriad of interesting functions.

For instance, one thing it does for sure is to insulate the precious 2 percent from genetic drift by decreasing the probability that any given mutation event causes critical damage.

Side Note: It is amazing how evolution has managed to shrink the coding region down to 2 percent (without sacrificing any functionality) by getting more and more dexterous at exposing the right coding regions (for gene expression) at the right time. This has resulted in greater variability of gene expression rates across different cellular contexts.

Remember (from our previous remarks) that if you decrease selection pressure, spatial variation increases and temporal variation decreases. Nature achieves this feat via an important intermediary mechanism. To understand this mechanism, first observe the following:

  1. The ability to decrease selection pressure requires greater control over the environment, and decreased selection pressure entails a longer life span.

  2. Exerting greater control over the environment requires more complex beings.

  3. More complexity and a longer life span entail, respectively, greater fragility towards random mutation events and longer exposure time to them.

  4. This increased susceptibility to randomness in turn necessitates more protective control over genomes.

Since an expansion of the fitness landscape is worthless unless you can roam around on it, greater control exerted at the phenotypic level is useless without greater control exerted at the genotypic level. In other words, as we channel the speed of evolution from the temporal to the spatial dimension, we need to drive more carefully to make it safely home. From this point of view, it is not surprising at all that the percentage of non-coding DNA of a species is generally correlated with its “complexity”.

I used quotation marks here since there is no generally-agreed-upon, well-defined notion of complexity in biology. But one thing we know for sure is that evolution generates more and more of it over time.


2. Increasing Complexity

Evolution is good at finding efficient solutions but bad at simplification. As time passes, both ecosystems and their participants become more complex.

Currently we (as human beings) are by far the greatest complexity generators in the known universe. This sounds wildly anthropocentric of course, but when it comes to complexity, we really are the king of the universe.


2.1 Positive Feedback between Control and Complexity

Control and complexity are more or less two sides of the same coin. They always coexist because of the following strong positive feedback mechanism between them:

  • Greater control for you implies more selection pressure for everyone else. In other words, at the aggregate level, greater control increases selection pressure and thereby generates more complexity. (This observation is similar to saying that greater competition makes everyone stronger.)

  • How can you assert more control in an environment that has just become more complex? You need to increase your own complexity so that you can get a handle on things again. (This observation is similar to saying that the human brain will never be intelligent enough to understand itself.)


2.2. Positive Feedback between Higher and Lower Complexity Levels

All ecological networks are stratified into several levels:

  • Internally speaking, each human being is an ecology unto itself, consisting of tens of trillions of cells coexisting with roughly as many cells in the human bacterial flora. This internal ecology is stratified into levels like tissues, organs and organ systems.

  • Externally speaking, each human being is part of a complex ecology that is stratified into many layers that cut across our relationships to each other and to the rest of the biosphere.

Greater complexity generated at higher levels like economics, sociology and psychology propagates all the way down to the cellular level. Conversely, greater complexity generated at a very low level affects all the levels sitting above it. This positive feedback loop accelerates total complexity generation.

Two concrete examples:

  • The notion of an ideal marriage has evolved drastically over time, along with the increasing complexity of our lives. Family as a unit is evolving for survival.

  • Successful people at the frontiers of science, technology, business and art all tend to be quirky and abnormal. (Read the older blog post Success as Abnormality for more details.) Through such people, an expansion of the fitness landscape at the cognitive level propagates up to an expansion at the societal level.


2.3. Positive Correlation between Fragility and Complexity Level

Overall fragility increases as complexity levels are piled up on top of each other. In order to ensure stability, it is necessary for each level to be more robust than the level above it. (Think of the stability of pyramid structures.)

The invention of the nucleus by biological evolution is an illustrative example. Prokaryotes (cells without a nucleus) are much more open to information (DNA) sharing than the eukaryotes (cells with a nucleus) which depend on them. This makes prokaryotes simpler but also more robust.

It could take eukaryotic organisms a million years to adjust to a change on a worldwide scale that bacteria [prokaryotes] can accommodate in a few years. By constantly and rapidly adapting to environmental conditions, the organisms of the microcosm support the entire biota, their global exchange network ultimately affecting every living plant and animal.

Microcosmos - Lynn Margulis & Dorion Sagan (Page 30)

Whenever you see a long-lasting fragility, look for a source of robustness one level below. Just as our mechanical machines and factories are maintained by us, we ourselves are maintained by even more robust networks. Each level should be grateful to the level below.

Side Note: AI singularity people are funny. They seem to be completely ignorant about the basics of ecology. A supreme AI would be the single most fragile form of life. It cannot take over the world; it can merely suffer from an illusion of control, just like we do. You cannot destroy or control what is below you in the ecosystem. The survival of each level depends on the freedom of the level below. Just as we depend on the stability provided by freely evolving and information-exchanging prokaryotes, a supreme AI would depend on the stability provided by us.


2.4. Positive Correlation between Fragility and Firmness of Identity

How limited and rigid life becomes, in a fundamental sense, as it extends down the eukaryotic path. For the macrocosmic size, energy, and complex bodies we enjoy, we trade genetic flexibility. With genetic exchange possible only during reproduction, we are locked into our species, our bodies, and our generation. As it is sometimes expressed in technical terms, we trade genes "vertically" - through the generations - whereas prokaryotes trade them "horizontally" - directly to their neighbors in the same generation. The result is that while genetically fluid bacteria are functionally immortal, in eukaryotes sex becomes linked with death.

Microcosmos - Lynn Margulis & Dorion Sagan (Page 93)

Biological entities that are more protective of their DNA (e.g. eukaryotes, whose genes are packed into chromosomes residing inside nuclei) exhibit greater structural permanence. (We reached a similar conclusion while discussing the junk DNA example in Section 1.2.) Eukaryotes are more precisely defined than prokaryotes, so to speak. The degree of flexibility correlates inversely with the firmness of identity.

The firmer the identity gets, the more necessary death becomes. In other words, death is not a destroyer of identity; it is the reason why we can have identity in the first place. I suggest you meditate on this fact for a while. (It literally changed my view on life.)

  • The reason why we are not at peace with the notion of death is that we are still not aware of how challenging it was for nature to invent the technologies necessary for maintaining identity through time.

  • Fear of death is based on the ego illusion, which Buddha rightly framed as the mother of all misrepresentations about nature. This is the story of a war between life and non-life, between biology and physics, not you against the rest of the universe or your genes against other genes.


3. Physics vs Biology

 
[Figure: Physics vs Biology]
 

Physics and biology (with chemistry as the degenerate middle ground) can be thought of as duals of each other, as forces pulling the universe in two opposite directions.

Side Note: Simple design is best done over a short period of time, in a single stroke, with the spirit of a master. Complex design is best done over a long period of time, in small steps, with the spirit of an amateur. That is essentially why physics progresses in a discontinuous manner via single-author papers by non-cooperative genius minds, while biology progresses in a continuous manner via many-author papers by cooperative social minds.


3.1. Entropy, Time and Scale

Note that entropy and time are two sides of the same coin:

  • Time is nothing but motion. Time without any motion is not something that mortals like us can fathom.

  • All motion happens basically due to the initial low-entropy state of the universe and the statistical thermodynamic evolution towards higher-entropy states. (The universe somehow began in a very improbable state, and now we are paying the “price” for it.) In other words, entropy is the force behind all motion. It is what makes time flow. The rest of physics just defines the degrees of freedom inside which entropy can work its magic (i.e. increase the disorder of the configuration space defined by those degrees of freedom), and specifies how time flow takes place via least action principles, which allow one to infer the unique time evolution of a particle or a field from the knowledge of its beginning and ending states (see the sketch below).
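
For concreteness, here is the standard textbook form of such a principle (nothing beyond the usual stationary-action statement is assumed): the realized path between two fixed endpoints is the one that makes the action stationary, which is exactly what lets us reconstruct the whole path from its two ends.

```latex
% Stationary-action principle (standard form): among all paths q(t) with
% fixed endpoints q(t_1) and q(t_2), the realized path makes S stationary.
S[q] = \int_{t_1}^{t_2} L\bigl(q(t), \dot{q}(t), t\bigr)\, dt ,
\qquad
\delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0 .
```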

Side Note: It is not a coincidence that, among all physics theories, only thermodynamics could not be formulated in terms of a least action principle. Least action principles give you one-dimensional (path) information that is inaccessible by experimentation. Basically, each experiment we do allows us to peek at different time slices of the universe, and each least action principle we have allows us to view each pair of time slices as the beginning and ending states of a single complete causal story. (We cannot probe nature continuously.) Entropy, on the other hand, does not work on a causal basis. (If it did, then it could not be responsible for time flow.) It operates in a primordially acausal fashion.

When we flip the direction of time, thermodynamics starts working backwards and the energy landscape turns upside down. Time-flipped biological entities start harnessing order to create disorder, which is exactly what physics does.

The difference between physics and time-flipped biology is that the former operates globally and harnesses the background order that originates from the initial low-entropy state of the universe, while the latter harnesses local patches of order created by itself. (This is why watching time-flipped physics videos is a lot more fun than watching time-flipped biology videos.)

Side Note: There are nanoscale examples of biology harnessing order to create disorder. This is allowed by the statistical nature of the second law of thermodynamics, which says that entropy increases only on average. Small divergences may occur over short intervals of time. Large divergences may occur too, but they require much longer intervals of time.

The heart of the duality between physics and biology lies in this “global vs local” dichotomy, which we will dig deeper into in the next section.

It is worth reiterating here that entropy breaks symmetries in the configuration space, not in the geometric one. (It may even increase local order in geometric space by creating symmetric arrangements, as in spontaneous crystallisation, which disorders the momentum component of the configuration space via energy release.) Hence, strictly speaking, the “global vs local” dichotomy should not be interpreted purely in spatial terms. What time-flipped biology does is harness local patches of configurational order (i.e. degrees of freedom associated with those locations), not spatial order.

Side Note: Entropy also triggers the breaking of some structural symmetries along the way. According to the standard hot Big Bang picture, as the universe cooled and expanded from its initial hot and dense state, the primordial force split into the four forces (Gravitational, Electromagnetic, Weak Nuclear and Strong Nuclear) that we have today. (Again, as mentioned before, entropy is an oddball among physics theories and is not regarded as a force since it does not have an associated field.) This de-unification happened through a series of three spontaneous symmetry breakings, each of which took place at a different temperature threshold.

3.2. Entropy and Dynamical Scale Invariance

Imagine a very low-entropy universe that consists of an equal number of zeros and ones, neatly separated into two groups. (This is a fantasy world with no forces. In other words, the only thing you can randomize is position, so the configuration space just consists of the real space since there are no other degrees of freedom.) The global uniformity of such a universe would be low, since there is only a fifty percent probability that any two randomly chosen local patches will look like each other. Local uniformity, on the other hand, would be high, since all local patches (except for those centered on the borderline separating the two groups) will contain either a homogeneous set of zeros or a homogeneous set of ones.

Entropy can be seen as a local operator breaking local uniformities in the configuration space. Over time, the total configuration space starts to look the same no matter how much you zoom in or out. In other words, the universe becomes more and more dynamically scale invariant.

Note that entropy does not increase uniformity. It actually does the opposite and decreases uniformity across the board so that the discrepancy between local and global uniformity disappears. Close to heat death (maximum theoretical entropy), no two local patches in the configuration space will look like each other. (They will be random in different ways.)
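
Here is a minimal simulation of this toy universe (my own sketch, not something from the original post): each step blindly swaps the contents of two randomly chosen sites, which is all a zero-knowledge local operator can do. Local purity (how homogeneous each patch is) decays, and so does the probability that two randomly chosen patches are identical microstates; the patches end up statistically alike yet microscopically all different, i.e. random in different ways.

```python
import random

PATCH = 8

def patches(universe):
    """Cut the universe into non-overlapping patches."""
    return [tuple(universe[i:i + PATCH]) for i in range(0, len(universe), PATCH)]

def local_purity(universe):
    """Average majority fraction inside a patch: 1.0 means every patch is homogeneous."""
    ps = patches(universe)
    return sum(max(p.count(0), p.count(1)) / PATCH for p in ps) / len(ps)

def identical_patch_rate(universe, trials=5000):
    """Probability that two randomly chosen patches are identical microstates."""
    ps = patches(universe)
    return sum(random.choice(ps) == random.choice(ps) for _ in range(trials)) / trials

universe = [0] * 512 + [1] * 512            # neatly separated, very low entropy
for step in range(50001):
    if step % 10000 == 0:
        print(f"step {step:5d}  local purity {local_purity(universe):.3f}  "
              f"identical patches {identical_patch_rate(universe):.3f}")
    i, j = random.randrange(len(universe)), random.randrange(len(universe))
    universe[i], universe[j] = universe[j], universe[i]   # blind, zero-knowledge swap
```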

Side Note: Due to the statistical nature of the second law of thermodynamics, the universe will keep experiencing fluctuations to the very end. It can get arbitrarily close to heat death but will never actually reach it. Complete heat death would mean the end of physics altogether.

Now, a natural question to ask is whether there could have been other ways of achieving scale invariance. The answer is no, and the blocker is an information problem. You cannot have complete knowledge of the global picture without infinite energy at your disposal, and without this knowledge you cannot define a local operator that can achieve scale invariance. For instance, going back to our initial example, if your region of the universe happens to have no zeros, you would not even be able to define an operator that takes zeros into consideration. All you can really do is ask every local patch to scatter everything so that (hopefully) whatever is out there will end up proportionally in every single patch. Of course, this is exactly what entropy itself does. (It is this random, zero-knowledge mechanism which gives thermodynamics its acausal nature.)

Biology, on the other hand, creates low-entropy islands by dumping entropy elsewhere and thereby works against the trend towards dynamical scale invariance. It is exactly in this sense that biology is anti-entropic. Entropy is not neutralized or cancelled; instead, it is deflected through a series of brilliant jiu-jitsu strokes so that it defeats its own goal.

Physics fights for dynamical scale invariance by breaking local uniformities in the configuration space, and biology fights against dynamical scale invariance by creating local uniformities in the configuration space. This is the essence of the duality between physics and biology, but there is a slight caveat: Physics works on a global scale and rains down on all local uniformities in an indiscriminate manner, while biology begins in some local patches in a discriminate manner and slowly makes its way up to the global scale, conquering physics from the inside out, pushing entropy to the peripheries. (Biology needs to be discriminative since only certain locations are convenient for jump-starting life, and it needs to learn since - unlike physics - it does not have the privilege of starting globally.)

Let us now scroll all the way to the end of time to see what this duality means for the fate of our universe.


3.3. Ultimate Fate of the Universe

There is no current scientific consensus about the ultimate fate of the universe. Some cosmologists believe in inexhaustible expansion and an eventual heat death; others believe in an unavoidable collapse and a subsequent bounce. Since nobody has any idea how dark energy, dark matter and quantum gravity actually work, everything is basically up for grabs.

Side Note: Dark energy is uniformly distributed and non-interacting. It is posited to be the driving factor behind the acceleration of the uniform expansion of space. Dark matter, on the other hand, is non-uniformly distributed and gravitationally attractive. Together, dark energy and dark matter make up around 95 percent of the total energy content of the universe. Hence the reason why some people call junk DNA, which makes up 98 percent of the human genome, the dark sector of DNA. Funnily enough, in a similar fashion, the great majority of cells in the more evolved (white matter) part of the human brain are non-neuronal (glial) cells. (Axons in the white matter, as opposed to those in the gray matter, are myelinated and therefore conduct signals at much higher speed.) It seems like the degree of complexity of an evolving system is directly correlated with the degree of dominance of the modulator (e.g. non-neuronal cells, non-coding DNA) over the modulated (e.g. neurons, coding DNA). Could the prevalence of the dark sector be interpreted as evidence that physics itself is undergoing evolution? (Note that, in all cases, the scientific discovery of the modulator occurred quite late and with a great deal of astonishment. Whenever we see a variation exhibiting substructure, we should immediately suspect that it is modulated by its complement.)

One thing that is conspicuously left out of these discussions is life itself. Everyone basically assumes that entropy will eventually win. After all, even supermassive black holes will inevitably evaporate due to Hawking radiation. Who would give a chance to a phenomenon (like life) that is close to non-existent at grand cosmological scales?

Well, I am actually super optimistic about the future of life. It is hard not to be so after one studies (in complete awe) how far evolution has progressed in just a few billion years. Life is learning at a phenomenal speed and will figure out (before it gets too late) how to do cosmic-scale engineering.

Since no one really knows anything about the dynamics of a cosmic bounce (and how it interacts with thermodynamics), let us finish this long blog post with some fun speculations:

  • The never-ending war between physics and biology may be the reason why time still exists and why the universe keeps managing to collapse on itself while also averting a heat death. Life could have learned how to engineer an early collapse before a heat death, or how to stave off a heat death long enough for a collapse. Life could even have learned how to leave a local fine-tuned low-entropy quantum imprint so that it is guaranteed to reemerge after the big bounce.

  • What if life always reaches total control in the sense of Section 1 in each one of the cosmic cycles and becomes indistinguishable from its environment? Could the beginning state of this universe’s physics be the ending state of the previous universe’s biology? In other words, could our entire universe be an extremely advanced life form? Could this be the god described by Pantheists? Was Schopenhauer right in the sense that the most fundamental aspect of reality is its primordial will to live? Is the acausal nature of thermodynamics a form of pure volition?

states vs processes

We think of all dynamical situations as consisting of a space of states and a set of laws codifying how these states are woven across time, and we refer to the actual manifestation of these laws as processes.

Of course, one can argue whether it is sensible to split reality into states and processes, but so far it has been very fruitful to do so.


1. Interchangeability

1.1. Simplicity as Interchangeability of States and Processes

In mathematics, structures (i.e. persisting states) tend to be exactly whatever is preserved by transformations (i.e. processes). That is why Category Theory works, and why you can study processes in lieu of states without losing information. (Think of continuous maps vs topological spaces.) State-centric and process-centric perspectives each have their own practical benefits, but they are completely interchangeable in the sense that both Set Theory (the state-centric perspective) and Category Theory (the process-centric perspective) can be taken as the foundation of all of mathematics.

Physics is similar to mathematics. Studying laws is basically the same thing as studying properties. Properties are whatever is preserved by laws, and can also be seen as whatever gives rise to laws. (Think of electric charge vs electrodynamics.) This observation may sound deep, but (as with any deep observation) it is actually tautologous, since we can study only what does not change through time, and only what does not change through time allows us to study time itself. (The study of time is equivalent to the study of laws.)

A couple of side notes:

  • There are no intrinsic (as opposed to extrinsic) properties in physics, since physics is an experimental subject and all experiments involve an interaction. (Even mass is an extrinsic property, manifesting itself only dynamically.) Now here is the question that gets to the heart of the above discussion: If there exist only extrinsic properties and nothing else, then what holds these properties? Nothing! This is basically the essence of Radical Ontic Structural Realism and exactly why states and processes are interchangeable in physics. There is no scaffolding.

  • You have probably heard about the vast efforts and resources being poured into the validation of certain conjectural particles. Gauge theory tells us that the search for new particles is basically the same thing as the search for new symmetries, which are of course nothing but processes.

  • The Choi–Jamiołkowski isomorphism helps us translate between quantum states and quantum processes. (A standard formulation is sketched below.)
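
For reference, the standard statement (textbook form, not original to this post) is that a quantum channel Φ on a d-dimensional system is completely determined by the state obtained when Φ acts on half of a maximally entangled pair:

```latex
% Choi–Jamiolkowski correspondence (standard form): the channel \Phi
% and its Choi state J(\Phi) determine each other completely.
J(\Phi) \;=\; (\mathrm{id} \otimes \Phi)\bigl(|\Omega\rangle\langle\Omega|\bigr),
\qquad
|\Omega\rangle \;=\; \frac{1}{\sqrt{d}} \sum_{i=1}^{d} |i\rangle \otimes |i\rangle .
```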

Long story short, at the foundational level, states and processes are two sides of the same coin.


1.2. Complexity as Non-Interchangeability of States and Processes

You understand that you are facing complexity exactly when you end up having to study the states themselves along with the processes. In other words, in complex subjects, the interchangeability of the state-centric and process-centric perspectives no longer makes any practical sense. (That is why stating a problem in the right manner matters a lot in complex subjects. The right statement is half the solution.)

For instance, in biology, bioinformatics studies states and computational biology studies processes. (Beware that the nomenclature in the biology literature has not stabilized yet.) Similarly, in computer science, the study of databases (i.e. states) and the study of programs (i.e. processes) are completely different subjects. (You can view programs themselves as databases and study how to generate new programs out of old programs. But then you are simply operating one dimension higher. The philosophy does not change.)

There is actually a deep relation between biology and computer science (similar to the one between physics and mathematics) which was discussed in an older blog post.


2. Persistence

The search for signs of persistence can be seen as the fundamental goal of science. There are two extreme views in metaphysics on this subject:

  • Heraclitus says that the only thing that persists is change. (i.e. Time is real, space is not.)

  • Parmenides says that change is illusory and that there is just one absolute static unity. (i.e. Space is real, time is not.)

The duality of these points of view was most eloquently captured by the physicist John Wheeler, who said: "Explain time? Not without explaining existence. Explain existence? Not without explaining time."

Persistences are very important because they generate other persistences. In other words, they are the building blocks of our reality. For instance, states in biology are complex simply because biology strives to resist change by building persistence upon persistence.


2.1. Invariances as State-Persistences

From a state perspective, the basic building blocks are invariances, namely whatever does not change across processes.

The study of change involves an initial stage where we give names to substates. Then we observe how these substates change with respect to time. If a substate changes to the point where it no longer fits the definition of being A, we say that substate (i.e. object) A failed to survive. In this sense, the study of survival is a subset of the study of change. The only reason they are not the same thing is that our definitions themselves are often imprecise. (From one moment to the next, we say that the river has survived although its constituents have changed, etc.)

Of course, the ambiguity here is on purpose; otherwise, with nothing left to define, you would not have an academic field to speak of. In physics, for instance, the definitions are extremely precise, and the study of survival and the study of change completely overlap. In a complex subject like biology, states are so rich that the definitions have to be ambiguous. (You can only simulate biological states in a formal language, not state a particular biological state. That is why computer science is a better fit for biology than mathematics.)


2.2. Cycles as Process-Persistences

Processes become state-like when they enter into cyclic behavior. That is why recurrence is so prevalent in science, especially in biology.

As an anticipatory affair, biology prefers regularities and predictabilities. Cycles are very reliable in this sense: They can be built on top of each other, and harnessed to record information about the past and to carry information to the future. (Even behaviorally we exploit this fact: It is easier to construct new habits by attaching them to old habits.) Life, in its essence, is just a perpetuation of a network of interacting ecological and chemical cycles, all of which can be traced back to the grand astronomical cycles.

Prior studies have reported that 15% of expressed genes show a circadian expression pattern in association with a specific function. A series of experimental and computational studies of gene expression in various murine tissues has led us to a different conclusion. By applying a new analysis strategy and a number of alternative algorithms, we identify baseline oscillation in almost 100% of all genes. While the phase and amplitude of oscillation vary between different tissues, circadian oscillation remains a fundamental property of every gene. Reanalysis of previously published data also reveals a greater number of oscillating genes than was previously reported. This suggests that circadian oscillation is a universal property of all mammalian genes, although phase and amplitude of oscillation are tissue-specific and remain associated with a gene’s function. (Source)

A cyclic process traces out what is called an orbital, which is like an invariance smeared across time. An invariance is a substate preserved by a process, namely a portion of a state that is mapped identically to itself. An orbital too is mapped to itself by the cyclic process, but not identically so. (Each orbital point moves forward in time to another orbital point and eventually ends up back at its initial position.) Hence orbitals and process-persistence can be viewed respectively as generalizations of invariances and state-persistence.
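
To make the distinction concrete, here is a minimal formalization (my own shorthand, not the author's notation): for a process f acting on states, an invariance is a fixed point, while an orbital is a finite set that f permutes cyclically, so each point returns to itself only after n steps.

```latex
% Invariance (state-persistence): a point left identically in place.
f(x) = x
% Orbital (process-persistence): a set mapped to itself, but not pointwise.
f(x_k) = x_{(k+1) \bmod n}, \qquad f^{\,n}(x_k) = x_k \quad (k = 0, \dots, n-1)
```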


3. Information

In practice, we have perfect knowledge of neither the states nor the processes. Since we cannot move both feet at the same time, in our quest to understand nature we assume that we have perfect knowledge of either the states or the processes.

  • Assumption: Perfect knowledge of all the actual processes but imperfect knowledge of the state
    Goal: Dissect the state into explainable and unexplainable parts
    Expectation: State is expected to be partially unexplainable due to experimental constraints on measuring states.

  • Assumption: Perfect knowledge of a state but no knowledge of the actual processes
    Goal: Find, within the library of all possible processes, the actual (minimal) process that generated the state.
    Expectation: State is expected to be completely explainable due to perfect knowledge about the state and the unbounded freedom in finding the generating process.

The reason I highlighted the expectations here is that it is quite interesting how our psychological stance toward the unexplainable (which is almost always - in our typically dismissive tone - referred to as noise) differs in each case.

  • In the presence of perfect knowledge about the processes, we interpret the noisy parts of states as absence of information.

  • In the absence of perfect knowledge about the processes, we interpret the noisy parts of states as presence of information.

The flip side of the above statements is that, in our quest to understand nature, we use the word information in two opposite senses.

  • Information is what is explainable.

  • Information is what is inexplainable.


3.1 Information as the Explainable

In this case, noise is the ideal left-over product after everything else is explained away, and is considered normal and expected. (We even gave the name “normal” to the most commonly encountered noise distribution.)

This point of view is statistical and is best exemplified by the field of statistical mechanics, where the vast number of microscopic degrees of freedom can be safely ignored due to their random nature and lumped into highly regular noise distributions.
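
A tiny illustration of this statistical stance (my own sketch, nothing more): aggregate a large number of microscopic coin flips and, whatever the individual flips do, the total settles into the familiar bell curve, which is why the left-over can be treated as well-behaved noise.

```python
import random
from collections import Counter

N_MICRO, N_SAMPLES = 400, 5000   # micro-degrees of freedom per sample, and samples

# Each sample sums many ignored micro-variables (fair +1/-1 flips).
sums = [sum(random.choice((-1, 1)) for _ in range(N_MICRO)) for _ in range(N_SAMPLES)]

# Crude text histogram of the aggregate: it approximates a normal distribution.
bins = Counter((s // 10) * 10 for s in sums)
for value in sorted(bins):
    print(f"{value:5d} {'#' * (bins[value] // 25)}")
```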


3.2. Information as the Inexplainable

In this case, noise is the only thing that cannot be compressed further or explained away. It is surprising and unnerving. In computer speak, one would say “It is not a bug, it is a feature.”

This point of view is algorithmic and is best exemplified by the field of algorithmic complexity, which looks at the notion of complexity from a process-centric perspective.
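
A quick way to feel the difference between the two senses of information (my own sketch, using an off-the-shelf compressor as a crude stand-in for algorithmic complexity): a highly patterned string compresses to almost nothing, because a short rule explains it away, while random noise barely compresses at all; in the algorithmic sense, the noise is the part carrying irreducible information.

```python
import os
import zlib

# A highly regular signal: a short "theory" (the repetition rule) explains it away.
patterned = b"0123456789" * 10_000

# Pure noise: there is no description much shorter than the data itself.
noisy = os.urandom(100_000)

for name, data in [("patterned", patterned), ("noisy", noisy)]:
    compressed = zlib.compress(data, level=9)
    print(f"{name:9s} {len(data):7d} bytes -> {len(compressed):7d} bytes "
          f"(ratio {len(compressed) / len(data):.3f})")
```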

thoughts on abstraction

Why is it always the case that the formulation of deeper physics requires more abstract mathematics? Why does understanding get better as it zooms out?

Side Note: Notice that there are two ways of zooming out. First, you can abstract by ignoring details. This is actually great for applications, but not good for understanding. It operates more like chunking, coarse-graining, forming equivalence classes etc. You end up sacrificing accuracy for the sake of practicality. Second, you can abstract in the sense of finding an underlying structure that allows you to see two phenomena as different manifestations of the same phenomenon. This is the meaning that we will be using throughout this blog post. While coarse-graining is easy, discovering an underlying structure is hard. You need to understand the specificity of a phenomenon which you normally consider to be general.

For instance, a lot of people are unsatisfied with the current formulation of quantum physics, blaming it for being too instrumental. Yes, the math is powerful. Yes, the predictions turn out to be correct. But the mathematical machinery (function spaces etc.) feels alien, even after one gets used to it over time. Or compare the down-to-earth Feynman diagrams with the amplituhedron theory... Again, you have a case where a stronger and more abstract beast is posited to dethrone a multitude of earthlings.

Is the alienness a price we have to pay for digging deeper? The answer is unfortunately yes. But this should not be surprising at all:

  • We should not expect to be able to explain deeper physics (which is so removed from our daily lives) using basic mathematics inspired by mundane physical phenomena. Abstraction gives us the necessary elbow room to explore realities that are far removed from our daily lives.

  • You can use the abstract to explain the specific, but you cannot proceed the other way around. Hence, as you understand more, you inevitably need to go higher up in abstraction. For instance, you may hope that a concept as simple as the notion of a division algebra will be powerful enough to explain all of physics, but you will sooner or later be gravely disappointed. There is probably a deeper truth lurking behind such a concrete pattern.



Abstraction as Compression

The simplicities of natural laws arise through the complexities of the languages we use for their expression.

- Eugene Wigner

That the simplest theory is best, means that we should pick the smallest program that explains a given set of data. Furthermore, if the theory is the same size as the data, then it is useless, because there is always a theory that is the same size as the data that it explains. In other words, a theory must be a compression of the data, and the greater the compression, the better the theory. Explanations are compressions, comprehension is compression!

Chaitin - Metaphysics, Metamathematics and Metabiology

We cannot encode more without going more abstract. This is a fundamental feature of the human brain. Either you have complex patterns based on basic math or you have simple patterns based on abstract math. In other words, complexity is either apparent or hidden, never gotten rid of. (i.e. There is no loss of information.) By replacing one source of cognitive strain (complexity) with another source of cognitive strain (abstraction), we can lift our analysis to higher-level complexities.

In this sense, progress in physics is destined to be of an unsatisfactory nature. Our theories will keep getting more abstract (and difficult) at each successive information compression. 

Don't think of this as a human tragedy though! Even machines will need abstract mathematics to understand deeper physics, because they too will be working under resource constraints. No matter how much more energy and resources you summon, the task of simulating a faithful copy of the universe will always require more.

As Bransford points out, people rarely remember written or spoken material word for word. When asked to reproduce it, they resort to paraphrase, which suggests that they were able to store the meaning of the material rather than making a verbatim copy of each sentence in the mind. We forget the surface structure, but retain the abstract relationships contained in the deep structure.

Jeremy Campbell - Grammatical Man (Page 219)

Depending on the context, category-theoretical techniques can yield shorter proofs than set-theoretical techniques, and vice versa. Hence, a machine that can sense when to switch between these two languages can probe the vast space of all true theories faster. Of course, you will need human aid (enhanced with machine learning algorithms) to discern which theories are interesting and which are not.

Abstraction is probably used by our minds as well, allowing them to decrease the number of neurons used without sacrificing explanatory power.

Rolnick and Max Tegmark of the Massachusetts Institute of Technology proved that by increasing depth and decreasing width, you can perform the same functions with exponentially fewer neurons. They showed that if the situation you’re modeling has 100 input variables, you can get the same reliability using either 2^100 neurons in one layer or just 2^10 neurons spread over two layers. They found that there is power in taking small pieces and combining them at greater levels of abstraction instead of attempting to capture all levels of abstraction at once.

“The notion of depth in a neural network is linked to the idea that you can express something complicated by doing many simple things in sequence,” Rolnick said. “It’s like an assembly line.”

- Foundations Built for a General Theory of Neural Networks (Kevin Hartnett)

In a way, the success of neural network models with increased depth reflects the hierarchical aspects of the phenomena themselves. We end up mirroring nature more closely as we try to economize our models.


Abstraction as Unlearning

Abstraction is not hard for technical reasons. (On the contrary, abstract things are easier to manipulate due to their greater simplicity.) It is hard because it involves unlearning. (That is why people who are better at forgetting are also better at abstracting.)

Side Note: Originality of the generalist is artistic in nature and lies in the intuition of the right definitions. Originality of the specialist is technical in nature and lies in the invention of the right proof techniques.

Globally, unlearning can be viewed as the Herculean struggle to go back to the tabula rasa state of a beginner's mind. (In some sense, what takes a baby a few months to learn takes humanity hundreds of years to unlearn.) We discard one by one what has been useful in manipulating the world in favor of getting closer to the truth.

Here are some beautiful observations of a physicist about the cognitive development of his own child:

My 2-year old’s insight into quantum gravity. If relative realism is right then ‘physical reality’ is what we experience as a consequence of looking at the world in a certain way, probing deeper and deeper into more and more general theories of physics as we have done historically (arriving by now at two great theories, quantum and gravity) should be a matter of letting go of more and more assumptions about the physical world until we arrive at the most general theory possible. If so then we should also be able to study a single baby, born surely with very little by way of assumptions about physics, and see where and why each assumption is taken on. Although Piaget has itemized many key steps in child development, his analysis is surely not about the fundamental steps at the foundation of theoretical physics. Instead, I can only offer my own anecdotal observations.

Age 11 months: loves to empty a container, as soon as empty fills it, as soon as full empties it. This is the basic mechanism of waves (two competing urges out of phase leading to oscillation).

Age 12-17 months: puts something in drawer, closes it, opens it to see if it is still there. Does not assume it would still be there. This is a quantum way of thinking. It’s only after repeatedly finding it there that she eventually grows to accept classical logic as a useful shortcut (as it is in this situation).

Age 19 months: comes home every day with mother, waves up to dad cooking in the kitchen from the yard. One day dad is carrying her. Still points up to kitchen saying ‘daddy up there in the kitchen’. Dad says no, daddy is here. She says ‘another daddy’ and is quite content with that. Another occasion, her aunt Sarah sits in front of her and talks to her on my mobile. When asked, Juliette declares the person speaking to her ‘another auntie Sarah’. This means that at this age Juliette’s logic is still quantum logic in which someone can happily be in two places at the same time.

Age 15 months (until the present): completely unwilling to shortcut a lego construction by reusing a group of blocks, insists on taking the bits fully apart and then building from scratch. Likewise always insists to read a book from its very first page (including all the front matter). I see this as part of her taking a creative control over her world.

Age 20-22 months: very able to express herself in the third person ‘Juliette is holding a spoon’ but finds it very hard to learn about pronouns especially ‘I’. Masters ‘my’ first and but overuses it ‘my do it’. Takes a long time to master ‘I’ and ‘you’ correctly. This shows that an absolute coordinate-invariant world view is much more natural than a relative one based on coordinate system in which ‘I’ and ‘you’ change meaning depending on who is speaking. This is the key insight of General Relativity that coordinates depend on a coordinate system and carry no meaning of themselves, but they nevertheless refer to an absolute geometry independent of the coordinate system. Actually, once you get used to the absolute reference ‘Juliette is doing this, dad wants to do that etc’ it’s actually much more natural than the confusing ‘I’ and ‘you’ and as a parent I carried on using it far past the time that I needed to. In the same way it’s actually much easier to do and teach differential geometry in absolute coordinate-free terms than the way taught in most physics books.

Age 24 months: until this age she did not understand the concept of time. At least it was impossible to do a bargain with her like ‘if you do this now, we will go to the playground tomorrow’ (but you could bargain with something immediate). She understood ‘later’ as ‘now’.

Age 29 months: quite able to draw a minor squiggle on a bit of paper and say ‘look a face’ and then run with that in her game-play. In other words, very capable of abstract substitutions and accepting definitions as per pure mathematics. At the same time pedantic, does not accept metaphor (‘you are a lion’ elicits ‘no, I’m me’) but is fine with similie, ‘is like’, ‘is pretending to be’.

Age 31 months: understands letters and the concept of a word as a line of letters but sometimes refuses to read them from left to right, insisting on the other way. Also, for a time after one such occasion insisted on having her books read from last page back, turning back as the ‘next page’. I interpret this as her natural awareness of parity and her right to demand to do it her own way.

Age 33 months (current): Still totally blank on ‘why’ questions, does not understand this concept. ‘How’ and ‘what’ are no problem. Presumably this is because in childhood the focus is on building up a strong perception of reality, taking on assumptions without question and as quickly as possible, as it were drinking in the world.

... and just in the last few days: remarked ‘oh, going up’ for the deceleration at the end of going down in an elevator, ‘down and a little bit up’ as she explained. And pulling out of my parking spot insisted that ‘the other cars are going away’. Neither observation was prompted in any way. This tells me that relativity can be taught at preschool.

- Algebraic Approach to Quantum Gravity I: Relative Realism (S. Majid)


Abstraction for Survival

The idea, according to research in Psychology of Aesthetics, Creativity, and the Arts, is that thinking about the future encourages people to think more abstractly—presumably becoming more receptive to non-representational art.

- How to Choose Wisely (Tom Vanderbilt)

Why do some people (like me) get deeply attracted to abstract subjects (like Category Theory)?

One of the reasons could be related to the point made above. Abstract things have higher chances of surviving and staying relevant because they are less likely to be affected by the changes unfolding through time. (Similarly, in the words of Morgan Housel, "the further back in history you look, the more general your takeaways should be.") Hence, if you have a hunger for timelessness or a worry about becoming outdated, then you will be naturally inclined to move up the abstraction chain. (No wonder I am also obsessed with the notion of time.)

Side Note: The more abstract the subject, the less the community around it is willing to let you attach your name to your new discoveries. Why? Because the half-life of discoveries at higher levels of abstraction is much longer, and therefore your name would live on for a much longer period of time. (i.e. It makes sense to be prudent.) After being trained in mathematics for so many years, I was shocked to see how easily researchers in other fields could “arrogantly” attach their names to basic findings. Later I realized that this behavior was not out of arrogance. These fields were so far away from truth (i.e. operating at very low levels of abstraction) that the half-life of discoveries was very short. If you wanted to attach your name to a discovery, mathematics had a high-risk-high-return pay-off structure while these other fields had a low-risk-low-return structure.

But the higher you move up the abstraction chain, the harder it becomes to innovate usefully. There is less room to play around, since the objects of study have far fewer properties. Most of the meaningful ideas have already been fleshed out by others who came before you.

In other words, in the realm of ideas, abstraction acts as a lever between the probability of longevity and the probability of success. If you aim for a higher probability of longevity, then you need to accept a lower probability of success.

That is why abstract subjects are unsuitable for university environments. The pressure of the "publish or perish" mentality pushes PhD students towards quick, riskless, incremental research. Abstract subjects, on the other hand, require risky, innovative research which may take a long time to unfold and may result in nothing publishable.

Now you may be wondering whether the discussion in the previous section is in conflict with the discussion here. How can abstraction be both a process of unlearning and a means for survival? Is not the evolutionary purpose of learning to increase the probability of survival? I would say that it all depends on your time horizon. To survive the immediate future, you need to learn how your local environment operates and truth is not your primary concern. But as your time horizon expands into infinity, what is useful and what is true become indistinguishable, as your environment shuffles through all allowed possibilities.

data as mass

The strange thing about data is that they are an inexhaustible resource: the more you have, the more you get. More information lets firms develop better services, which attracts more users, which in turn generate more data. Having a lot of data helps those firms expand into new areas, as Facebook is now trying to do with online dating. Online platforms can use their wealth of data to spot potential rivals early and take pre-emptive action or buy them up. So big piles of data can become a barrier to competitors entering the market, says Maurice Stucke of the University of Tennessee.

The Economist - A New School in Chicago

Greater centralization of the internet and the growing importance of data are basically two sides of the same coin. Data is like mass and is therefore subject to gravitation-like dynamics. Huge data-driven companies like Google and Facebook can be thought of as mature galaxy formations.

Light travels through fiber-optic cables to transfer data, just as it cruises across vast intergalactic voids to stitch together a causally integrated universe.

biology as computation

If the 20th century was the century of physics, the 21st century will be the century of biology. While combustion, electricity and nuclear power defined scientific advance in the last century, the new biology of genome research - which will provide the complete genetic blueprint of a species, including the human species - will define the next.

Craig Venter & Daniel Cohen - The Century of Biology

It took 15 years for technology to catch up with this audacious vision that was articulated in 2004. Investors who followed the pioneers got severely burned by the first hype cycle, just like those who got wiped out by the dot-com bubble.

But now the real cycle is kicking in. The cost of sequencing, storing and analyzing genomes has dropped dramatically. Nations are finally initiating population-wide genetics studies to jump-start their local genomic research programs. Regulatory bodies are embracing the new paradigm, changing their standards, approving new gene therapies, curating large public datasets and breaking down data silos. Pharmaceutical companies and new biotech startups are flocking in droves to grab a piece of the action. Terminal patients are finding new hope in precision medicine. Consumers are getting accustomed to clinical genomic diagnostics. Popular culture is picking up on it as well. Our imagination is being rekindled. Skepticism from the first bust is wearing off as more and more success stories pile up.

There is something much deeper going on too. It is difficult to articulate, but let me give it a try.

Mathematics did a tremendous job at explaining physical phenomena. It did so well that all other academic disciplines are still burning with physics envy. As the dust settled and our understanding of physics got increasingly more abstract, we realized something more, something that is downright crazy: Physics seems to be just mathematics and nothing else. (This merits further elaboration of course, but I will refrain from doing so.)

What about biology? Mathematics could not even scratch its surface. Computer science on the other hand proved to be wondrously useful, especially after our data storage and analytics capabilities passed a certain threshold.

Although currently a gigantic subject on its own, at its foundations computer science is nothing but constructive mathematics with space and time constraints. Note that one cannot even formulate a well-defined notion of complexity without such constraints. For physics, complexity is a bug, not a feature, but for biology it is the most fundamental feature. Hence it is not a surprise that mathematics is so useless at explaining biological phenomena.

The fact that analogies between computer science and biology are piling up gives me the feeling that we will soon (within this century) realize that biology and computer science are really just the same subject.

This may sound outrageous today, but that is primarily because computer science is still such a young subject. Just as physics converged to mathematics over time, computer science will converge to biology. (The younger subject converges to the older one. That is why you should always pay attention when a master of the older subject has something to say about the younger, converging subject.)

The breakthrough moment will happen when computer scientists become capable of exploiting the physicality of information itself, just like biology does. After all, hardware is just frozen software, and information itself is something physical that can change shape and exhibit structural functionalities. Today we freeze it because we do not have any other means of control. In the future, we will learn how to exert geometric control and thereby push evolution into a new phase that exhibits even more teleological tendencies.

[Figure: A visualization of the AlexNet deep neural network by Graphcore]


If physics is mathematics and biology is computer science, what is chemistry then?

Chemistry seems to be an ugly chimera. It can be thought of as the study of either complicated physical states or failed biological states. (Hat tip to Deniz Kural for the latter suggestion.) In other words, it is the collection of all the degenerate in-between phenomena. Perhaps this is the reason why it does not offer any deep insights, while physics and biology are philosophically so rich.