politics, economics and naturality

The combination of liberalism and capitalism forms a nice balance. The former fights against nature in the political domain and destroys outliers by eliminating actual differences in a discrete fashion. The latter fights for nature in the economic domain and creates outliers by amplifying potential differences in a continuous fashion. (Fighting against nature results in discrete, as opposed to continuous, change.)

 
[Figure: Politics Economics Naturality.png]
 

Similarly, the combination of conservatism (fighting for nature in the political domain) and communism (fighting against nature in the economic domain) forms a nice balance as well. But both combinations are hard to maintain due to their conflictual nature.

necessity of dualities

All truths lie between two opposite positions. All dramas unfold between two opposing forces. Dualities are both ubiquitous and fundamental. They shape both our mental and physical worlds.

Here are some examples:

Mental

objective | subjective
rational | emotional
conscious | unconscious
reductive | inductive
absolute | relative
positive | negative
good | evil
beautiful | ugly
masculine | feminine


Physical

deterministic | indeterministic
continuous | discrete
actual | potential
necessary | contingent
inside | outside
infinite | finite
global | local
stable | unstable
reversible | irreversible

Notice that even the split between the two groups above is itself an example of a duality.

These dualities arise as an epistemological byproduct of the method of analytical inquiry. That is why they are so thoroughly infused into the languages we use to describe the world around us.

Each relatum constitutive of dipolar conceptual pairs is always contextualized by both the other relatum and the relation as a whole, such that neither the relata (the parts) nor the relation (the whole) can be adequately or meaningfully defined apart from their mutual reference. It is impossible, therefore, to conceptualize one principle in a dipolar pair in abstraction from its counterpart principle. Neither principle can be conceived as "more fundamental than," or "wholly derivative of" the other.

Mutually implicative fundamental principles always find their exemplification in both the conceptual and physical features of experience. One cannot, for example, define either positive or negative numbers apart from their mutual implication; nor can one characterize either pole of a magnet without necessary reference to both its counterpart and the two poles in relation - i.e. the magnet itself. Without this double reference, neither the definiendum nor the definiens relative to the definition of either pole can adequately signify its meaning; neither pole can be understood in complete abstraction from the other.

- Epperson & Zafiris - Foundations of Relational Realism (Page 4)


Various lines of Eastern religious and philosophical thinkers intuited how languages can hide underlying unity by artificially superimposing conceptual dualities (the primary one being the almighty object-subject duality) and posited the nondual wholeness of nature several thousand years before the advent of quantum mechanics. (The analytical route to enlightenment is always longer than the intuitive route.)

Western philosophy, on the other hand,

  • ignored the mutually implicative nature of all dualities and denied the inaccessibility of the wholeness of nature to analytical inquiry.

  • got fooled by the precision of mathematics, which is after all just another language invented by human beings.

  • confused partial control with understanding and engineering success with ontological precision. (Understanding is a binary parameter, meaning that either you understand something or you do not. Control on the other hand is a continuous parameter, meaning that you can have partial control over something.)

As a result, Western philosophers mistook representation for reality and tried to confine truth to one end of each dualism in order to create a unity of representation matching the unity of reality.

Side Note: Hegel was an exception. Like Buddha, he too saw dualities as artificial byproducts of analysis, but unlike him, he suggested that one should transcend them via synthesis. In other words, for Buddha unity resided below and for Hegel unity resided above. (Buddha wanted to peel away complexity to its simplest core, while Hegel wanted to embrace complexity in its entirety.) While Buddha stopped theorizing and started meditating instead, Hegel saw salvation in higher levels of abstraction via alternating chains of analyses and syntheses. (Buddha wanted to turn off cognition altogether, while Hegel wanted to turn up cognition full-blast.) Perhaps at the end of the day they were both preaching the same thing. After all, at the highest level of abstraction, thinking probably halts and emptiness reigns.

It was first the social thinkers who woke up and revolted against the grand narratives built on such discriminative pursuits of unity. There was just way too much politically and ethically at stake for them. The result was an overreaction, replacing unity with multiplicity and considering all points of view as valid. In other words, the pendulum swung the other way and Western philosophy jumped from one state of deep confusion into another. In fact, this time around the situation was even worse since there was an accompanying deep sense of insecurity as well.

The cacophony spread into hard sciences like physics too. Grand narratives got abandoned in favor of instrumental pragmatism. Generations of new physicists got raised as technicians who basically had no clue about the foundations of their discipline. The most prominent of them could even publicly make an incredibly naive claim such as “something can spontaneously arise from nothing through a quantum fluctuation” and position it as a non-philosophical and non-religious alternative to existing creation myths.

Just to be clear, I am not trying to argue here in favor of Eastern holistic philosophies over Western analytic philosophies. I am just saying that the analytic approach requires us to embrace dualities as two-sided entities, including the duality between the holistic and analytic approaches.


Politics experienced a similar swing from conservatism (which hailed unity) towards liberalism (which hailed multiplicity). During this transition, all dualities and boundaries got dissolved in the name of more inclusion and equality. The everlasting dynamism (and the subsequent wisdom) of dipolar conceptual pairs (think of magnetic poles) got killed off in favor of an unsustainable burst in the number of ontologies.

Ironically, liberalism resulted in more sameness in the long run. For instance, the traditional assignment of roles and division of tasks between father and mother got replaced by equal parenting principles applied by genderless parents. Of course, upon the dissolution of the gender dipolarity, the number of parents one can have became flexible as well. Having one parent became as natural as having two, three or four. In other words, parenting became a community affair in its truest sense.

 
[Figure: Duality.png]
 

The even greater irony was that liberalism itself forgot that it represented one extreme end of another duality. It was in a sense a self-defeating doctrine that aimed to destroy all discriminative pursuits of unity except for that of itself. (The only way to “resolve” this paradox is to introduce a conceptual hierarchy among dualities where the higher ones can be used to destroy the lower ones, in a fashion that is similar to how mathematicians deal with Russell’s paradox in set theory.)


Of course, at some point the pendulum will swing back to the pursuit of unity again. But while we swing back and forth between unity and multiplicity, we keep skipping over the only sources of representational truths, namely the dualities themselves. For some reason we are extremely uncomfortable with the fact that the world can only be represented via mutually implicative principles. We find “one” and “infinity” tolerable but “two” arbitrary and therefore abhorrent. (The prevalence of “two” in mathematics and “three” in physics was mentioned in a previous blog post.)

I am personally obsessed with “two”. I look out for dualities everywhere and share the interesting finds here on my blog. In fact, I go even further and try to build my entire life on dualities whose two ends mutually enhance each other every time I visit them.

We should not collapse dualities into unities for the sake of satisfying our sense of belonging. We need to counteract this dangerous sociological tendency using our common sense at the individual level. Choosing one side and joining the groupthink is the easy way out. We should instead strive to carve out our identities by consciously sampling from both sides. In other words, when it comes to complex matters, we should embrace the dualities as a whole and not let them split us apart. (Remember, if something works very well, its dual should also work very well. However, if something is true, its dual has to be wrong. This is exactly what separates theory from reality.)

Of course, it is easy to talk about these matters, but who said that the pursuit of truth would be easy?

Perhaps there is no pursuit to speak of unless one is pre-committed to choosing a side, and swinging back and forth between the two ends of a dualism is the only way nature can maintain its neutrality without sacrificing its dynamism? (After all, there is no current without a polarity in the first place.)

Perhaps we should just model our logic after reality (like Hegel wanted to) rather than expect reality to conform to our logic? (In this way we can have our cake and eat it too!)

formalism, consciousness and understanding

In a formal (deductive) subject, the level of competency correlates with the depth of non-formalism one can display around the subject. (For instance, the mastery of a mathematician can only be gauged when he stops scribbling down mathematical notation, dives into conceptual vagueness and starts using real words.) In a non-formal (intuitive) subject, the level of competency correlates with the depth of formalism one can display around the subject.

Similarly, one can only understand unconscious things using the conscious mind and conscious things using the unconscious mind. Due to the architecture of our brains we typically find the latter much easier to do. Our education system does not balance the scale either. (Practicing lucid dreaming, meditation and improvisation can help.) We generally do not know how to open up and let our non-verbal intuitive brain reign, and do not care about the unconscious until it breaks down.

classical vs innovative businesses

As you move away from zero-to-one processes, economic activities become more and more sensitive to macroeconomic dynamics.

Think of the economy as a universe. Innovative startups correspond to quantum mechanical phenomena rendering something from nothing. The rest of the economy works classically within the general relativity framework where everything is tightly bound to everything else. To predict your future you need to predict the evolution of everything else as well. This of course is an extremely stressful thing to do. It is much easier to exist outside the tightly bound system and create something from scratch. For instance, you can build a productivity software that will help companies increase their profit margins. In some sense such a software will exist outside time. It will sell whether there is an economic downturn or an upturn.


In classical businesses, forecasting the near future is extremely hard. Noise clears out when you look a little further into the future. But the far future is again quite hard to talk about, since you start feeling the long term effects of the innovation being made today. So the difficulty hierarchy looks as follows:

near future > far future > mid future

In innovative businesses, forecasting the near future is quite easy. In the long run, everyone agrees that transformation is inevitable. So forecasting the far future is hard but still possible. However, what is going to happen in the mid term is extremely hard to predict. In other words, the above hierarchy gets flipped:

mid future > far future > near future

Notice that what counts as the mid future is actually quite hard to define. It can move around with the wind, so to speak, just as intended by the goddesses of fate in Greek mythology.

In Greek mythology the Moirae were the three Fates, usually depicted as dour spinsters. One Moira spun the thread of a newborn's life. The other Moira counted out the thread’s length. And the third Moira cut the thread at death. A person’s beginning and end were predetermined. But what happened in between was not inevitable. Humans and gods could work within the confines of one's ultimate destiny.

Kevin Kelly - What Technology Wants

I personally find it much more natural to just hold onto near future and far future, and let the middle inflection point dangle around. In other words I prefer working with innovative businesses.

Middle zones are, generally speaking, always ill-defined, presenting another high level justification for the barbell strategy popularized by Nassim Nicholas Taleb. The mid-term behavior of complex systems is tough to crack. For instance, short-term weather forecasts are highly accurate and long-term climate changes are also quite foreseeable, but what is going to happen in the mid term is anybody’s guess.

The far future always involves “structural” change. Things will definitely change, but the change is not of a statistical nature. As mentioned earlier, innovative businesses are not affected by short term statistical (environmental / macroeconomic) noise. Instead they suffer from mid term statistical noise of the type that phase-transition states exhibit in physics. (Think of the turbulence phenomenon.) So the above two difficulty hierarchies can be seen as particular manifestations of the following master hierarchy:

statistical unpredictability > structural unpredictability > predictability


Potential entrepreneurs jumping straight into tech without building any experience in traditional domains are akin to physics students jumping straight into quantum mechanics without learning classical mechanics first. This jump is possible, but also pedagogically problematic. It is much more natural to learn things in the historical order that they were discovered. (Venture capital is a very recent phenomenon.) Understanding the idiosyncrasies and complexities of innovative businesses requires knowledge of how the usual, classical businesses operate.

Moreover, just like quantum states decohere into classical states, innovative businesses behave more and more like classical businesses as they get older and bigger. The word “classical” just means the “new” that has passed the test of time. Similarly, decoherence happens via entanglements, which is basically how time progresses at the quantum level.

By the way, this transition is very interesting from an intellectual point of view. For instance, innovative businesses are valued using a revenue multiple, while classical businesses are valued using a profit multiple. When exactly do we start to value a maturing innovative business using a profit multiple? How can we tell that it has matured? When exactly does a blue ocean become a red one? With the first blood spilled by the death of competitors? Is that an objective measure? After all, it is the investors’ expectations themselves which sustain innovative businesses that burn tons of cash all the time.
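To make the gap concrete, here is a minimal sketch (entirely made-up numbers and multiples, not data from any real company) of what a revenue multiple and a profit multiple imply for the same business as its margins mature:

```python
# Toy sketch (all numbers are made up) of the two valuation conventions mentioned above:
# a revenue multiple ignores margins, a profit multiple only rewards them.

def revenue_multiple_valuation(revenue, multiple=10.0):
    # Early-stage convention: pay for top-line growth, ignore (possibly negative) profits.
    return revenue * multiple

def profit_multiple_valuation(revenue, margin, multiple=15.0):
    # Mature-stage convention: pay for actual earnings.
    return revenue * margin * multiple

revenue = 100.0  # keep revenue fixed so only the maturing margin changes the comparison
for margin in (0.05, 0.10, 0.20, 0.40):
    print(f"margin={margin:.0%}  "
          f"revenue-based={revenue_multiple_valuation(revenue):.0f}  "
          f"profit-based={profit_multiple_valuation(revenue, margin):.0f}")
```

Under these made-up multiples the two conventions only converge at very high margins, which hints at why the hand-off point is so hard to pin down.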

Also, notice that, just as all classical businesses were once innovative businesses, all innovative businesses are built upon the stable foundations provided by classical businesses. So we should not think of the relationship as one way. Quantum may become classical, but quantum states are always prepared by classical actors in the first place.


What happens to classical businesses as they get older and bigger? They either evolve or die. Combining this observation with the conclusions of the previous two sections, we deduce that the combined predictability-type timeline of an innovative business becoming a classical one looks as follows:

  1. (Innovative) Near Future: Predictability
  2. (Innovative) Mid Future: Statistical Unpredictability (Buckle up. You are about to go through some serious turbulence!)
  3. (Innovative) Far Future: Structural Unpredictability (Congratulations! You successfully landed. Older guys need to evolve or die.)
  4. (Classical) Near Future: Statistical Unpredictability (Wear your suit. There seems to be radiation everywhere on this planet!)
  5. (Classical) Mid Future: Predictability
  6. (Classical) Far Future: Structural Unpredictability (New forms of competition landed. You are outdated. Will you evolve or die?)

Notice the alternation between structural and statistical forms of unpredictability over time. Is it coincidental?


Industrial firms thrive on reducing variation (manufacturing errors); creative firms thrive on increasing variation (innovation).
- Patty McCord - How Netflix Reinvented HR

Here Patty McCord’s observation is in line with our analogy. She is basically restating the disparity between the deterministic nature of classical mechanics and the statistical nature of quantum mechanics.

Employees in classical businesses feel like cogs in a wheel, because what needs to be done is already known with great precision and there is nothing preventing the operations from being run with utmost efficiency and predictability. They are (again just like cogs in a wheel) utterly dispensable and replaceable. (Operating in red oceans, these businesses primarily focus on cost minimization rather than revenue maximization.)

Employees in innovative businesses, on the other hand, are given a lot more space to maneuver because they are the driving force behind an evolutionary product-market fit process that is not yet complete (and in some cases will never be complete).


Investment pitches too have quite opposite dynamics for innovative and classical businesses.

  • Innovative businesses raise money from venture capital investors, while classical businesses raise money from private equity investors who belong to a completely different culture.

  • If an entrepreneur prepares a 10 megabyte Excel document for a venture capital investor, then he will be perceived as delusional and naive. If he does not do the same for a private equity investor, then he will be perceived as entitled and preposterous.

  • Private equity investors look at data about the past and run statistical, blackbox models. Venture capital investors listen to stories about the future and think in causal, structural models. Remember, classical businesses are at the mercy of the macroeconomy, and a healthy macroeconomy displays maximum unpredictability. (All predictabilities are arbitraged away.) Whatever remnants of causal thinking are left in private equity are mostly about fixing internal operational inefficiencies.

  • The number of reasons for rejecting a private equity investment is more or less equal to the number of reasons for accepting one. In the venture capital world, rejection reasons far outnumber the acceptance reasons.

  • Experienced venture capital investors do not prepare before a pitch. The reason is not that they have a mastery over the subject matter of the entrepreneur’s work, but that there are far too many subject-matter-independent reasons for not making an investment. Private equity investors on the other hand do not have this luxury. They need to be prepared before a pitch because the devil is in the details.

  • For venture capital investors, it is very hard to tell which company will achieve phenomenal success, but very easy to spot which one will fail miserably. Private equity investors have the opposite problem. They look at companies that have survived for a long time, so future miserable failures are statistically rare and hard to spot.

  • In innovative businesses, founders are (and should be) irreplaceable. In classical businesses, founders are (and should be) replaceable. (Similarly, professionals can successfully turn around failing classical companies, but can never pivot failing innovative companies.)

  • Private equity investors with balls do not shy away from turn-around situations. Venture capital investors with balls do not shy away from pivot situations.

pharma vs diagnostics

The bioinformatics industry is bifurcating into two categories defined by the two extreme value-generation endpoints, namely drug development and data creation.

  • Drugs come with patent protection and therefore create defensible sources of revenue. Data usually suffers from diminishing returns, and data generation can not sustain value indefinitely, but this is not true in the case of biology, which is (almost by definition) the most complex subject in the universe. (The fact that biological data seems to have a shorter half-life makes the situation even worse.)

  • Pharma companies develop the drugs and (the volume driven) diagnostics companies generate (the majority of) the data.

Pharma companies love to dip into data because it drives their precision medicine programs forward by enabling

  • the targeting of the right patient cohorts for existing drugs, and

  • the generation of novel drug targets.

Better precision medicine generates more knowledge about the genetic variants and more drugs targeting them, which in turn render diagnostics tests respectively more accurate and useful. In other words, more data eventually leads to an increase in the demand for diagnostics tests and therefore results in the generation of even more data. (This positive feedback cycle will greatly accelerate the maturation of the precision medicine paradigm in the near future.)

Pharma companies and diagnostics companies behave very differently (as summarized in the table below) and this creates a polarity in the product and business model configuration space for the bioinformatics industry whose primary customers (in the private domain) are these two types of companies.

[Figure: Pharma vs Diagnostics.png]

The last two rows of the table are very important and worth explaining in greater detail:

  • Pharma companies do basic research and therefore want to tap into all types of data sets. (They also have a greater tendency to use all types of analytical applications, while diagnostics companies ignore the long tail.) These datasets are generally huge and may be residing in a private cloud or with some public cloud provider. So pharma companies have to be able to connect to all of these datasets and run computation-heavy analyses that seamlessly weave through them. (When you are dealing with big data, computation needs to go to the data rather than the other way around.) In other words, they naturally belong to the multi-cloud paradigm. Diagnostics companies, on the other hand, belong to the cloud paradigm since they are optimizing cost and will just choose a single cloud provider based on price and convenience. (Read this older blog post to better understand the difference and polarity between the multi-cloud and cloud paradigms.)

  • Pharma companies are looking for help to solve their complex problems. Hence they are primarily focused on solutions. This pushes the software layer behind the services layer. In other words, the software is still there but it is the service provider who is mostly using it. Diagnostics companies, on the other hand, focus on their unit economics. They do not need much consulting since they just optimize the hell out of their production pipelines and leave them alone most of the time.

thoughts on cybersecurity business

  • Cybersecurity and drug development are similar in the sense that neither can harbor deep, long-lived productification processes. Problems are dynamic. Enemies eventually evolve protection against productified attacks.

  • Cybersecurity and number theory are similar in the sense that they contain the hardest problems of their respective fields and are not built on a generally-agreed-upon core body of knowledge. Nothing meaningful is accessible to beginner-level students since all sorts of techniques from other subfields are utilized to crack problems.

Hence, in its essence, cybersecurity is an elite services business. Anyone claiming the opposite (that it is a product business, that it does not necessitate the recruitment of the best minds of the industry) is selling a sense of security, not real security.

normalization for positioning, coping and filtering

Normalization is a statistical term for adjusting your position with respect to the relevant population norm, which can change across time or space. (For instance, curved grading used in academia employs this technique.)
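As a toy illustration of adjusting against a population norm, here is a minimal sketch of curve-style normalization via z-scores; the class scores and the choice of z-scores are my own illustrative assumptions:

```python
# Minimal sketch of normalization in the curved-grading sense: the same raw score is
# re-expressed relative to the norm (mean) and spread of its own cohort, so it can mean
# very different things in different classes. The scores below are made up.

def z_scores(scores):
    mean = sum(scores) / len(scores)
    spread = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
    return [(s - mean) / spread for s in scores]

class_a = [55, 60, 65, 70, 75]   # a weak cohort
class_b = [75, 80, 85, 90, 95]   # a strong cohort

print(round(z_scores(class_a)[-1], 2))  # a 75 at the top of the weak class:    +1.41
print(round(z_scores(class_b)[0], 2))   # a 75 at the bottom of the strong class: -1.41
```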

Here I will use normalization as a unifying theme to make sense of some social, psychological and cognitive phenomena.

Spatial Normalization as a Social Positioning Mechanism

We generally think in relative terms when we compare ourselves to others. All status based social dynamics take place in this way. We are happy when we are richer than the person next door. It does not matter if we all get richer. Of course, this leads to absurd situations where people are constantly unhappy although everything is improving.

What is mathematically happening here is that we keep updating the norm (average) against which we make all comparisons. In social domains, this process takes place across space, not time. (i.e. You do not see people comparing themselves to historical norms. We all live more comfortable lives than the kings of the past, but no one gives a shit.)

Spatial normalization in sociology exhibits two interesting properties:

  • Two Dimensionality. People are curious about others’ lives for both vertical and horizontal reasons. They look (up and down) at the other castes and (around) at other individuals in their own caste. Precise social positioning requires both.

  • Locality. In both dimensions, practically unreachable positions get disregarded. (That is why greater social mobility actually brings unhappiness. Knowing that everything is possible but you are stuck with your current position hurts more.) In other words, social status is determined locally. This actually makes it easier for the poor to climb up in status. After all, due to the severely nonlinear nature of the wealth distribution, it is easier to reach the top of the bottom ten percent than to reach the top of the top ten percent, as the little numeric sketch after this list illustrates. (That is why the rich are a miserable bunch.)
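A minimal numeric sketch of that nonlinearity, using a made-up Pareto-distributed wealth sample (the exponent, seed and sample size are arbitrary choices of mine):

```python
# The bottom decile of a heavy-tailed wealth distribution is compressed into a tiny range,
# while the top decile is stretched over an enormous one, so climbing to the top of the
# bottom ten percent requires closing a far smaller gap.

import random
random.seed(0)

wealth = sorted(random.paretovariate(1.3) for _ in range(100_000))
bottom_decile, top_decile = wealth[:10_000], wealth[-10_000:]

print("wealth span of bottom decile:", round(bottom_decile[-1] - bottom_decile[0], 3))
print("wealth span of top decile:   ", round(top_decile[-1] - top_decile[0], 3))
```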

Temporal Normalization as a Psychological Coping Mechanism

Normalization occurs across time as well, in the form of adaptivity. After all, in order to survive, we have no choice but to adapt to new norms. It is pointless not to adapt to a change that you can not reverse. (This is usually given as advice for achieving inner peace. Most of our frustrations come from our inability to discern what can not be changed and should therefore be adapted to.)

Due to the one-dimensionality of time, the first property mentioned above (two-dimensionality) does not apply to the temporal version of normalization. However, locality holds and is even more pronounced.

Example of Locality

[Cult leaders] deliberately induce distress - so that when they relieve it, they will also be the source of your pleasure. This leads to a powerful and, to outside observers, puzzling connection between cult leader and cult member. The same thing can be seen in abusive relationships and in “Stockholm syndrome,” where crime victims fall in love with or become supportive of their captors.

Born for Love - Bruce D. Perry & Maia Szalavitz (Page 237)

Temporal locality of adaptation is actually what gets us stuck in abusive relationships. We slowly get used to the bad treatment and normalize it. We forget that the world used to be much better before the relationship began. We become quite happy just because we are treated less badly.

Temporal Normalization as a Cognitive Filtering Mechanism

We focus on deviations from the norm while the norm itself gets pushed down to and tracked at an unconscious level. The effects of this focus become particularly stark when deviations become very small and we are essentially left with only the norm itself. Such constancy gets completely filtered away from our consciousness. (For interesting examples of this phenomenon, check out this older blog post.)

Remember from our previous discussion that we do not compare ourselves to people who are too far away from us in social distance. (Thanks to the marketing people this is actually becoming increasingly difficult.) Similarly, when we are cognitively keeping track of deviations, we do not go too far back in time. Our brains calculate the norm in a temporally local fashion, using only recent samplings. In other words, slow change is disregarded even if its cumulative effect may be quite large over time. (Think of the fable of the frog being slowly boiled alive.)
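A minimal sketch of such a temporally local norm; the window size and the drifting signal are made-up choices of mine:

```python
# The "norm" here is an average over only the most recent samples, so a slow drift never
# produces a noticeable deviation even though its cumulative effect is large
# (the slowly boiled frog).

def deviations_from_local_norm(signal, window=5):
    devs = []
    for i in range(window, len(signal)):
        local_norm = sum(signal[i - window:i]) / window  # norm built from recent samples only
        devs.append(signal[i] - local_norm)
    return devs

slow_drift = [0.1 * t for t in range(50)]  # climbs from 0.0 to 4.9 overall
print(max(deviations_from_local_norm(slow_drift)))  # yet no single deviation exceeds ~0.3
```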

Example of Locality

We must forgive our memory for yet another reason. It finds it easier to determine what has changed than to tell what has stayed the same. The people we have around us every day change as quickly or slowly as everyone else, but thanks to our daily contacts with them their changes are played out on a scale that makes them seem to stand still. It is unfair to blame our memory for throwing away editions when, on the face of it, the latest imprint differs in no way from the preceding one.

Why Life Speeds Up As You Get Older - Draaisma (Page 131)

In some sense, we are wired to ignore the slow passage of time. In fact, this tendency gets worse as our brain ages and accumulates more patterns against which new norms can be defined, explaining why time seems to flow faster as we grow older.

evolution as a physical theory

Evolution has two ingredients, constant variation and constant selection.

Two important observations:

  1. Variation in biology exhibits itself in myriad forms, but they all can be traced back to the second law of thermodynamics, which says that entropy (on average) always increases over time. (It is not a coincidence that Darwin formulated the theory of natural selection in the 1850s, around the same time Clausius formulated the second law.)

  2. If you decrease selection pressures, the fitness landscape expands. You see fewer people dying around you, but you also see more variety at any given time. As we learn to cure and cope with (physical and mental) disorders using advances in (hard and soft) sciences, extend our societal safety nets further, and improve our parenting and teaching techniques, more and more people stay alive and functional to go on to mate and reproduce. Progress creates more elbow room for evolution so that it can try out even wilder combinations than before.

    Conversely, if you increase selection pressures, the fitness landscape contracts, but in return the shortened life cycles enable evolution to shuffle through the contracted landscape of possibilities at a higher speed.

    Hence, selection pressure acts like a lever between spatial variation and temporal variation. Decreasing it increases spatial variation and decreases temporal variation; increasing it decreases spatial variation and increases temporal variation.

These observations imply respectively the following:

  1. Evolution never stops since the second law of thermodynamics is always valid.

  2. Remember, Einstein discovered that space and time by themselves are not invariant, only spacetime as a whole is. Similarly, evolution may slow down or speed up in the space or time dimensions, but is always a constant at the spacetime level. In other words, the natural setting for evolution is spacetime.

It is not surprising that thermodynamics has so far stood out as the oddball that can not be unified with the rest of physics. The principle of entropy seems to be only half the picture. It needs to be combined with the principle of selection to give rise to a spacetime invariant theory at the level of biological variations. In other words, evolution (i.e. the principles of entropy and selection combined together) is more fundamental than thermodynamics from the point of view of physics.

Side Note: The trouble is that the principle of selection is a generative, computational notion and does not lend itself to a structural, mathematical definition. However the same can also be said for the principle of entropy, which looks quite awkward in its current mathematical forms. (Recall from the older post Biology as Computation that biology is primarily driven by computational notions.)

All of our theories in physics, except for thermodynamics, are time symmetric. (i.e. They can not distinguish the past from the future.) The second law of thermodynamics, on the other hand, states that entropy (on average) always increases over time and can therefore (obviously) detect the direction of time. This strange asymmetry actually disappears in the theory of evolution, where something emerges to counterbalance the increasing entropy, namely increasing control.

Side Note: Entropy is said to increase globally but control can only be exercised locally. In other words, control decreases entropy locally by dumping it elsewhere, just like a leaf blower. Of course, you may be wondering how, as finite localized beings, we can formulate any global laws at all. I share the same sentiment because, empirically speaking, we can not distinguish a sufficiently large local counterbalance from a global one. Whenever I talk about the entropy of the whole universe, please take it with a grain of salt. (Formally speaking, thermodynamics is not even defined for open systems. In other words, it can not be globally applied to universes with no peripheries.) We will dig deeper into the global vs local dichotomy in Section 3. (Strictly speaking, thermodynamics can not be applied locally either, since every system is bound to be somewhat open due to our inability to completely control its environment.)


1. Increasing Control

All living beings exploit untapped energy sources to exhibit control and influence the future course of their own evolution.

Any state that is not lowest-energy can be considered semi-stable at best. Eventually, by the second law of thermodynamics, every such state evolves towards the lowest-energy configuration and emits energy as a by-product. By “untapped energy sources” I mean such extractable pockets of energy.

So, put more succinctly, all living beings harness entropy to reduce entropy.

The cumulative effect of their efforts over long periods of time has so far been quite dramatic indeed: what basically started out as simple RNA-based structures floating uncontrollably in oceans eventually turned into human beings proposing geo-engineering solutions to the global climate problems they themselves have created.

Let us now look at two interesting internal examples.


1.1. Cognitive Example

Our brains continuously make predictions and proactively interpolate from sensory data flow. In fact, when the higher (more abstract) layers of our neural networks lose the ability to project information downwards and become solely information-receivers, we slip into a comatose state.

Our predictive mental models slowly decay due to entropy (that is why blind people gradually lose their ability to dream) and are also at constant risk of becoming irrelevant. To address these problems, our brains continuously reconstruct the models in the light of new triggers and revise them in the light of new evidence. If they did not exercise such self-control, we would be stuck in an echo chamber of slowly decaying mental creations of our own. (That is why schizophrenic people gradually lose touch with reality.)

Autism and schizophrenia can be interpreted as imbalances in this controlled hallucination mechanism and be thought of as inverses of each other, causing respectively too much control and too much hallucination:

Aspects of autism, for instance, might be characterized by an inability to ignore prediction errors relating to sensory signals at the lowest levels of the brain’s processing hierarchy. That could lead to a preoccupation with sensations, a need for repetition and predictability, sensitivity to certain illusions, and other effects. The reverse might be true in conditions that are associated with hallucinations, like schizophrenia: The brain may pay too much attention to its own predictions about what is going on and not enough to sensory information that contradicts those predictions.

Jordana Cepelewicz - To Make Sense of the Present, Brains May Predict the Future


1.2. Genomic Example

Since only 2 percent of our DNA actually codes for proteins, the remaining 98 percent was initially called “junk DNA”, which later proved to be a wild misnomer. Today we know that this junk part performs a myriad of interesting functions.

For instance, one thing it does for sure is insulate the precious 2 percent from genetic drift by decreasing the probability that a mutation event causes critical damage.

Side Note: It is amazing how evolution has managed to diminish the coding region down to 2 percent (without sacrificing any functionality) by getting more and more dexterous at exposing the right coding regions (for gene expression) at the right time. This has resulted in greater variability of gene expression rates across different cellular contexts.

Remember (from our previous remarks) that if you decrease selection pressure, spatial variation increases and temporal variation decreases. Nature achieves this feat via an important intermediary mechanism. To understand this mechanism, first observe the following:

  1. Ability to decrease selection pressure requires greater control over the environment and decreased selection pressure entails longer life span.

  2. Exerting greater control over the environment requires more complex beings.

  3. More complexity and longer life span entail, respectively, greater fragility towards random mutation events and longer exposure time to them.

  4. This increased susceptibility to randomness in turn necessitates more protective control over genomes.

Since an expansion in the fitness landscape is worthless unless you can roam around on it, greater control exerted at phenotypical level is useless without greater control exerted at genotypical level. In other words, as we channel the speed of evolution from the temporal to the spatial dimension, we need to drive more carefully to make it safely home. From this point of view, it is not surprising at all that the percentage of non-coding DNA of a species is generally correlated with its “complexity”.

I used quotation marks here since there is no generally-agreed-upon, well-defined notion of complexity in biology. But one thing we know for sure is that evolution generates more and more of it over time.


2. Increasing Complexity

Evolution is good at finding efficient solutions but bad at simplification. As time passes by, both ecosystems and their participants become more complex.

Currently we (as human beings) are by far the greatest complexity generators in the universe. This sounds wildly anthropocentric of course, but when it comes to complexity, we are really the king of the universe.


2.1. Positive Feedback between Control and Complexity

Control and complexity are more or less two sides of the same coin. They always coexist because of the following strong positive feedback mechanism between them:

  • Greater control for you implies more selection pressure for everyone else. In other words, at the aggregate level, greater control increases selection pressure and thereby generates more complexity. (This observation is similar to saying that greater competition makes everyone stronger.)

  • How can you assert more control in an environment that has just become more complex? You need to increase your own complexity so that you can get a handle on things again. (This observation is similar to saying that human brain will never be intelligent enough to understand itself.)


2.2. Positive Feedback between Higher and Lower Complexity Levels

All ecological networks are stratified into several levels:

  • Internally speaking, each human being is an ecology unto himself, consisting of tens of trillions of cells coexisting with equally many cells in the human bacterial flora. This internal ecology is stratified into levels like tissues, organs and organ systems.

  • Externally speaking, each human being is part of a complex ecology that is stratified into many layers that cut across our relationships to each other and to the rest of the biosphere.

Greater complexity generated at higher levels like economics, sociology and psychology propagates all the way down to the cellular level. Conversely, greater complexity generated at a very low level affects all the levels sitting above it. This positive feedback loop accelerates total complexity generation.

Two concrete examples:

  • The notion of an ideal marriage has evolved drastically over time, along with the increasing complexity of our lives. Family as a unit is evolving for survival.

  • Successful people at the frontiers of science, technology, business and art all tend to be quirky and abnormal. (Read the older blog post Success as Abnormality for more details.) Through such people, an expansion of the fitness landscape at the cognitive level propagates up to an expansion at the societal level.


2.3. Positive Correlation between Fragility and Complexity Level

Overall fragility increases as complexity levels are piled up on top of each other. In order to ensure stability, it is necessary for each level to be more robust than the level above it. (Think of the stability of pyramid structures.)

The invention of the nucleus by biological evolution is an illustrative example. Prokaryotes (cells without a nucleus) are much more open to information (DNA) sharing than the eukaryotes (cells with a nucleus) which depend on them. This makes prokaryotes simpler but also more robust.

It could take eukaryotic organisms a million years to adjust to a change on a worldwide scale that bacteria [prokaryotes] can accommodate in a few years. By constantly and rapidly adapting to environmental conditions, the organisms of the microcosm support the entire biota, their global exchange network ultimately affecting every living plant and animal.

Microcosmos - Lynn Margulis & Dorion Sagan (Page 30)

Whenever you see a long-lasting fragility, look for a source of robustness a level below. Just as our mechanical machines and factories are maintained by us, we ourselves are maintained by even more robust networks. Each level should be grateful to the level below.

Side Note: AI singularity people are funny. They seem to be completely ignorant about the basics of ecology. Supreme AI will be the single most fragile form of life. It can not take over the world. It can merely suffer from an illusion of control, just like we do. You can not destroy or control what is below you in the ecosystem. Survival of each level depends on the freedom of the level below. Just like we depend on the stability provided by freely evolving and information exchanging prokaryotes, supreme AI will depend on the stability provided by us.


2.4. Positive Correlation between Fragility and Firmness of Identity

How limited and rigid life becomes, in a fundamental sense, as it extends down the eukaryotic path. For the macrocosmic size, energy, and complex bodies we enjoy, we trade genetic flexibility. With genetic exchange possible only during reproduction, we are locked into our species, our bodies, and our generation. As it is sometimes expressed in technical terms, we trade genes "vertically" - through the generations - whereas prokaryotes trade them "horizontally" - directly to their neighbors in the same generation. The result is that while genetically fluid bacteria are functionally immortal, in eukaryotes sex becomes linked with death.

Microcosmos - Lynn Margulis & Dorion Sagan (Page 93)

Biological entities that are more protective of their DNA (e.g. eukaryotes whose genes are packed into chromosomes residing inside nuclei) exhibit greater structural permanence. (We had reached a similar conclusion while discussing the junk DNA example in Section 1.2.) Eukaryotes are more precisely defined than prokaryotes, so to speak. Degree of flexibility correlates inversely with firmness of identity.

The firmer the identity gets, the more necessary death becomes. In other words, death is not a destroyer of identity, it is the reason why we can have identity in the first place. I suggest you meditate on this fact for a while. (It literally changed my view on life.)

  • The reason why we are not at peace with the notion of death is that we are still not aware of how challenging it was for nature to invent the technologies necessary for maintaining identity through time.

  • Fear of death is based on the ego illusion, which Buddha rightly framed as the mother of all misrepresentations about nature. This is the story of a war between life and non-life, between biology and physics, not you against the rest of the universe or your genes against other genes.


3. Physics vs Biology

 
[Figure: Physics vs Biology.png]
 

Physics and biology (with chemistry as the degenerate middle ground) can be thought of as duals of each other, as forces pulling the universe in two opposite directions.

Side Note: Simple design is best done over a short period of time, in a single stroke, with the spirit of a master. Complex design is best done over a long period of time, in small steps, with the spirit of an amateur. That is essentially why physics progresses in a discontinuous manner via single-author papers by non-cooperative genius minds, while biology progresses in a continuous manner via many-author papers by cooperative social minds.


3.1. Entropy, Time and Scale

Note that entropy and time are two sides of the same coin:

  • Time is nothing but motion. Time without any motion is not something that mortals like us can fathom.

  • All motion happens basically due to the initial low-entropy state of the universe and the statistical thermodynamic evolution towards higher entropy states. (The universe somehow began in a very improbable state and now we are paying the “price” for it.) In other words, entropy is the force behind all motion. It is what makes time flow. The rest of physics just defines the degrees of freedom inside which entropy can work its magic (i.e. increase the disorder of the configuration space defined by the degrees of freedom), and specifies how time flow takes place via least action principles, which allow one to infer the unique time evolution of a particle or a field from the knowledge of its beginning and ending states (the textbook form is recalled right after this list).
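For reference, this is just the standard textbook statement of a least action principle for a single particle with Lagrangian L; nothing in it is specific to this post:

```latex
S[q] = \int_{t_0}^{t_1} L\big(q(t), \dot{q}(t)\big)\, dt ,
\qquad
\delta S = 0
\;\;\Longrightarrow\;\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0 .
```

Fixing the beginning state q(t0) and the ending state q(t1) and demanding that the action be stationary singles out the one classical trajectory connecting the two endpoints, which is exactly the reading used in the bullet above.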

Side Note: It is not a coincidence that among all physics theories only thermodynamics could not be formulated in terms of a least action principle. Least action principles give you one dimensional (path) information that is inaccessible by experimentation. Basically, each experiment we do allows us to peek at different time slices of the universe, and each least action principle we have allows us to view each pair of time slices as the beginning and ending states of a unique wholesome causal story. (We can not probe nature continuously.) Entropy, on the other hand, does not work on a causal basis. (If it did, then it could not be responsible for the flow of time.) It operates in a primordially acausal fashion.

When we flip the direction of time, thermodynamics starts working backwards and the energy landscape turns upside down. Time-flipped biological entities start harnessing order to create disorder, which is exactly what physics does.

The difference between physics and time-flipped biology is that the former operates globally and harnesses the background order that originates from the initial low-entropy state of the universe, while the latter harnesses local patches of order created by itself. (This is why watching time-flipped physics videos is a lot more fun than watching time-flipped biology videos.)

Side Note: There are nano scale examples of biology harnessing order to create disorder. This is allowed by the statistical nature of the second law of thermodynamics which says that entropy increases only on average. Small divergences may occur over short intervals of time. Large divergences too may occur but they require much longer intervals of time.

The heart of the duality between physics and biology lies in this “global vs local” dichotomy, which we will dig deeper into in the next section.

It is worth reiterating here the fact that entropy breaks symmetries in the configuration space, not in the geometric one. (It may even increase local order in geometric space by creating symmetric arrangements, as in spontaneous crystallisation, which disorders the momentum component of the configuration space via energy release.) Hence, strictly speaking, the “global vs local” dichotomy should not be interpreted purely in spatial terms. What time-flipped biology does is harness local patches of configurational order (i.e. degrees of freedom associated with those locations), not spatial order.

Side Note: Entropy also triggers the breaking of some structural symmetries along the way. According to grand unified theories, as the universe cooled and expanded from its initial hot and dense state, the primordial force split into the four forces (Gravitational, Electromagnetic, Weak Nuclear and Strong Nuclear) that we have today. (Again, as mentioned before, entropy is an oddball among physics theories and is not regarded as a force since it does not have an associated field etc.) This de-unification happened through a series of three spontaneous symmetry breakings, each of which took place at a different temperature threshold.

3.2. Entropy and Dynamical Scale Invariance

Imagine a very low-entropy universe that consists of an equal number of zeros and ones which are neatly separated into two groups. (This is a fantasy world with no forces. In other words, the only thing you can randomize is position, so the configuration space just consists of the real space since there are no other degrees of freedom.) Global uniformity of such a universe would be low, since there would be only a fifty percent probability that any two randomly chosen local patches look like each other. Local uniformity, on the other hand, would be high, since all local patches (except for those centered at the borderline separating the two groups) would either have a homogeneous set of zeros or a homogeneous set of ones.

Entropy can be seen as a local operator breaking local uniformities in the configuration space. Over time, the total configuration space starts to look the same no matter how much you zoom in or out. In other words, the universe becomes more and more dynamically scale invariant.

Note that entropy does not increase uniformity. It actually does the opposite and decreases uniformity across the board so that the discrepancy between local and global uniformity disappears. Close to heat death (maximum theoretical entropy), no two local patches in the configuration space will look like each other. (They will be random in different ways.)
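Here is a minimal sketch of this toy universe, with shuffling standing in for entropy; the patch width, universe size and seed are arbitrary choices of mine:

```python
# "Local uniformity" is the fraction of patches that are homogeneous, "global uniformity"
# is the chance that two randomly chosen patches look alike. In the neatly separated state
# the two diverge; after shuffling, both collapse to small values and the discrepancy shrinks.

import random
random.seed(1)

def patches(universe, width=4):
    return [tuple(universe[i:i + width]) for i in range(0, len(universe), width)]

def local_uniformity(universe):
    ps = patches(universe)
    return sum(len(set(p)) == 1 for p in ps) / len(ps)

def global_uniformity(universe, trials=5000):
    ps = patches(universe)
    return sum(random.choice(ps) == random.choice(ps) for _ in range(trials)) / trials

low_entropy = [0] * 500 + [1] * 500   # neatly separated: local uniformity 1.0, global about 0.5
high_entropy = low_entropy[:]
random.shuffle(high_entropy)          # entropy at work

for label, u in (("low entropy ", low_entropy), ("high entropy", high_entropy)):
    print(label, round(local_uniformity(u), 2), round(global_uniformity(u), 2))
```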

Side Note: Due to the statistical nature of the second law of thermodynamics, universe will keep experiencing fluctuations to the very end. It can get arbitrarily close to heat death but will never actually reach it. Complete heat death means end of physics altogether.

Now a natural question to ask is whether there could have been other ways of achieving scale invariance. The answer is no, and the blocker is an information problem. You can not have complete knowledge about the global picture without infinite energy at your disposal, and without this knowledge you can not define a local operator that can achieve scale invariance. For instance, going back to our initial example, if your region of the universe happens to have no zeros, you would not even be able to define an operator that takes zeros into consideration. All you can really do is just ask every local patch to scatter everything so that (hopefully) whatever is out there will end up proportionally in every single patch. Of course, this is exactly what entropy itself does. (It is this random, zero knowledge mechanism which gives thermodynamics its acausal nature.)

Biology on the other hand creates low entropy islands by dumping entropy elsewhere and thereby works against the trend towards dynamical scale invariance. It is exactly in this sense that biology is anti-entropic. Entropy is not neutralized or cancelled, instead it is deflected through a series of brilliant jiu jitsu strokes so that it defeats its own goal.

Physics fights for dynamical scale invariance by breaking local uniformities in the configuration space and biology fights against dynamical scale invariance by creating local uniformities in the configuration space. This is the essence of the duality between physics and biology, but there is a slight caveat: Physics works on a global scale and hails down on all local uniformities in an indiscriminate manner, while biology begins in some local patches in a discriminate manner and slowly makes its way up to global scale, conquering physics from inside out, pushing entropy to the peripheries. (Biology needs to be discriminative since only certain locations are convenient to jumpstart life, and it needs to learn since - unlike physics - it does not have the privilege of starting global.)

Let us now scroll all the way to the end of time to see what this duality means for the fate of our universe.


3.3. Ultimate Fate of the Universe

There is no current scientific consensus about the ultimate fate of the universe. Some cosmologists believe in inexhaustible expansion and an eventual heat death, others believe in an unavoidable collapse and the subsequent bounce. Since nobody has any idea about how dark energy, dark matter and quantum gravity actually work, everything is basically up for grabs.

Side Note: Dark energy is uniformly-distributed and non-interacting. It is posited to be the driving factor behind the acceleration of the uniform expansion of space. Dark matter, on the other hand, is non-uniformly-distributed and gravitationally-attractive. Together dark energy and dark matter make up around 95 percent of the total energy content of the universe. Hence the reason why some people call junk DNA, which makes up 98 percent of the human genome, the dark sector of DNA. Funnily enough, in a similar fashion, more than 90 percent of the more evolved (white matter) part of the human brain is composed of non-neuron (glial) cells. (Neurons in the white matter, as opposed to those in the gray matter, are myelinated and therefore conduct electricity at a much higher speed.) It seems like the degree of complexity of an evolving system is directly correlated with the degree of dominance of the modulator (e.g. non-neuron cells, non-coding DNA) over the modulated (e.g. neuron cells, coding DNA). Could the prevalence of the dark sector be interpreted as evidence that physics itself is undergoing evolution? (Note that, in all cases, the scientific discovery of the modulator occurred quite late and with a great deal of astonishment. Whenever we see a variation exhibiting substructure, we should immediately suspect that it is modulated by its complement.)

One thing that is conspicuously left out of these discussions is life itself. Everyone basically assumes that entropy will eventually win. After all even supermassive black holes will inevitably evaporate due to Hawking radiation. Who would give a chance to a phenomenon (like life) that is close to non-existent at the grand cosmological scales?

Well, I am actually super optimistic about the future of life. It is hard not to be so after one studies (in complete awe) how far evolution has progressed in just a few billion years. Life is learning at a phenomenal speed and will figure out (before it gets too late) how to do cosmic-scale engineering.

Since no one really knows anything about the dynamics of a cosmic bounce (and how it interacts with thermodynamics), let us finish this long blog post with some fun speculations:

  • The never ending war between physics and biology may be the reason why time still exists and the universe still keeps on managing to collapse on itself while also averting a heat death. Life could have learned how to engineer an early collapse before a heat death or how to prevent a heat death long enough for a collapse. Life could have even learned how to leave a local fine-tuned low-entropy quantum imprint so that it is guaranteed to reemerge after the big bounce.

  • What if life always reaches total control in the sense of Section 1 in each one of the cosmic cycles and becomes indistinguishable from its environment? Could the beginning state of this universe’s physics be the ending state of the previous universe’s biology? In other words, could our entire universe be an extremely advanced life form? Could this be the god described by Pantheists? Was Schopenhauer right in the sense that the most fundamental aspect of reality is its primordial will to live? Is the acausal nature of thermodynamics a form of pure volition?