emergence of life

Cardiac rhythm is a good example of a network that includes DNA only as a source of protein templates, not as an integral part of the oscillation network. If proteins were not degraded and needing replenishment, the oscillation could continue indefinitely with no involvement of DNA...

Functional networks can therefore float free, as it were, of their DNA databases. Those databases are then used to replenish the set of proteins as they become degraded. That raises several more important questions. Which evolved first: the networks or the genomes? As we have seen, attractors, including oscillators, form naturally within networks of interacting components, even if these networks start off relatively uniform and unstructured. There is no DNA, or any equivalent, for a spiral galaxy or for a tornado. It is very likely, therefore, that networks of some kinds evolved first. They could have done so even before the evolution of DNA. Those networks could have existed by using RNA as the catalysts. Many people think there was an RNA world before the DNA-protein world. And before that? No one knows, but perhaps the first networks were without catalysts and so very slow. Catalysts speed-up reactions. They are not essential for the reaction to occur. Without catalysts, however, the processes would occur extremely slowly. It seems likely that the earliest forms of life did have very slow networks, and also likely that the earliest catalysts would have been in the rocks of the Earth. Some of the elements of those rocks are now to be found as metal atoms (trace elements) forming important parts of modern enzymes.

Noble - Dance to the Tune of Life (Pages 83, 86)

Darwin unlocked evolution by understanding its slow nature. (He was inspired by the recent geological discoveries indicating that water - given enough time - can carve out entire canyons.) Today we are still under the influence of a similar Pre-Darwinian bias. Just as we were biased in favor of fast changes (and could not see the slow moving waves of evolution), we are biased in favor of fast entities. (Of course, what is fast or slow is defined with respect to the rate of our own metabolisms.) For instance, we get surprised when we see a fast-forwarded video of growing plants, because we equate life with motion and regard slow moving life forms as inferior.

Evolution favors the fast, and therefore life is becoming faster at an increasingly faster rate. Think of catalyzed reactions, myelinated neurons, etc. Replication is another such accelerator technology. Although we tend to view it as a must-have quality of life, what is really important for the definition of life is repeating “patterns”, and such patterns can emerge without any replication mechanism. In other words, what matters is persistence. Replication mechanisms speed up the evolution of new forms of persistence. That is all. Let me reiterate: evolution has only two ingredients, constant variation and constant selection. (See the Evolution as a Physical Theory post.) Replication is not fundamental.

Unfortunately, most people still think that replicators came first and that functional (metabolic) networks emerged later. This order is extremely unlikely, since replicators have an error-correction problem and need supportive taming mechanisms (e.g. metabolic networks) right from the start.

In our present state of ignorance, we have a choice between two contrasting images to represent our view of the possible structure of a creature newly emerged at the first threshold of life. One image is the replicator model of Eigen, a molecular structure tightly linked and centrally controlled, replicating itself with considerable precision, achieving homeostasis by strict adherence to a rigid pattern. The other image is the "tangled bank" of Darwin, an image which Darwin put at the end of his Origin of Species to make vivid his answer to the question, What is Life?, an image of grasses and flowers and bees and butterflies growing in tangled profusion without any discernible pattern, achieving homeostasis by means of a web of interdependences too complicated for us to unravel.

The tangled bank is the image which I have in mind when I try to imagine what a primeval cell would look like. I imagine a collection of molecular species, tangled and interlocking like the plants and insects in Darwin's microcosm. This was the image which led me to think of error tolerance as the primary requirement for a model of a molecular population taking its first faltering steps toward life. Error tolerance is the hallmark of natural ecological communities, of free market economies and of open societies. I believe it must have been a primary quality of life from the very beginning. But replication and error tolerance are naturally antagonistic principles. That is why I like to exclude replication from the beginnings of life, to imagine the first cells as error-tolerant tangles of non-replicating molecules, and to introduce replication as an alien parasitic intrusion at a later stage. Only after the alien intruder has been tamed, the reconciliation between replication and error tolerance is achieved in a higher synthesis, through the evolution of the genetic code and the modern genetic apparatus.

The modern synthesis reconciles replication with error tolerance by establishing the division of labor between hardware and software, between the genetic apparatus and the gene. In the modern cell, the hardware of the genetic apparatus is rigidly controlled and error-intolerant. The hardware must be error-intolerant in order to maintain the accuracy of replication. But the error tolerance which I like to believe inherent in life from its earliest beginnings has not been lost. The burden of error tolerance has merely been transferred to the software. In the modern cell, with the infrastructure of hardware firmly in place and subject to a strict regime of quality control, the software is free to wander, to make mistakes and occasionally to be creative. The transfer of architectural design from hardware to software allowed the molecular architects to work with a freedom and creativity which their ancestors before the transfer could never have approached.

Dyson - Infinite in All Directions (Pages 92-93)

Notice how Dyson frames replication mechanisms as stabilizers allowing metabolic networks to take even greater risks. In other words, replication not only speeds up evolution but also enlarges its configuration space. So we see not only more variation per second but also more variation at any given moment.

Going back to our original question…

Life was probably unimaginably slow at the beginning. In fact, such life forms are probably still out there. Are spiral galaxies alive for instance? What about the entire universe? We may be just too local and too fast to see the grand patterns.

As Noble points out in the excerpt above, our bodies contain catalytic metals which are remnants of our deep past. Those metals were forged inside faraway stars and shot across space by supernova explosions. (This is how all heavy elements in the universe were formed.) In other words, they used to be participants in vast-scale metabolic networks.

In some sense, life never emerged. It was always there to begin with. It has just been speeding up over time, and thereby the life forms of today are becoming blind to the life forms of deep yesterdays.

It is really hard not to be mystical about all this. Have you ever felt bad about disrupting repeating patterns for instance, no matter how physical they are? You can literally hurt such patterns. They are the most embryonic forms of life, some of which are as old as those archaic animals who still hang around in the deep oceans. Perhaps we should all work a little on our artistic sensitivities which would in turn probably give rise to a general increase in our moral sensitivities.


How Fast Will Things Get?

Life is a nested hierarchy of complexity layers, and the number of these layers increases over time. We are already forming many layers above ourselves, the most dramatic of which is the entirety of our technological creations, namely what Kevin Kelly calls the Technium.

Without doubt, we will look pathetically slow to the newly emerging electronic forms of life. Just as we exert a certain degree of control over slow-moving plants, they too will harvest us for their own good (while also needing us). (This is already happening, as we are becoming more and more glued to our screens.)

But how much faster will things eventually get?

According to the generally accepted theories, our universe started off with a big bang and went through a very fast evolution that resulted in a sudden expansion of space. While physics has since been slowing down, biology (including new electronic forms) is picking up speed at a phenomenal rate.

Of all the sustainable things in the universe, from a planet to a star, from a daisy to an automobile, from a brain to an eye, the thing that is able to conduct the highest density of power - the most energy flowing through a gram of matter each second - lies at the core of your laptop.

Kelly - What Technology Wants (Page 59)

Evolution seems to be taking us to a very strange end, one that seems to contain life forms exhibiting features much like those of the beginning states of physics: extreme speed and density. (I had brought up this possibility at the end of the Evolution as a Physical Theory post as well.)

Of course, flipping this logic, the physical background upon which life is currently unfolding is probably alive as well. I personally believe that this indeed is the case. To understand what I mean, we will first need to make an important conceptual clarification and then dive into Quantum Mechanics.



Autonomy as the Flip-Side of Control

Autonomy and control are two sides of the same coin, just as one man's freedom fighter is always another man's terrorist. In particular, whatever we cannot exert any control over looks completely autonomous to us.

But how do you measure autonomy?

Firstly, notice that autonomy is a relative concept. In other words, nothing can be autonomous in and of itself. Secondly, the degree of autonomy correlates with the degree of unanticipatability. For instance, something will look completely autonomous to you only if you cannot model its behavior at all. But what would such behavior actually look like? Any guesses? Yes, that is right: it would look completely random.

Random often means inability to predict... A random series should show no discernible pattern, and if one is perceived then the random nature of the series is denied. However, the inability to discern a pattern is no guarantee of true randomness, but only a limitation of the ability to see a pattern... A series of ones and noughts may appear quite random for use as a sequence against which to compare the tossing of a coin, head equals one, tails nought, but it also might be the binary code version of a well known song and therefore perfectly predictable and full of pattern to someone familiar with binary notation.

Shallis - On Time (Pages 122-124)

The fact that randomness is in the eye of the beholder (and that absolute randomness is an ill-defined notion) is the central tenet of the Bayesian school of probability. The spirit is also similar to how randomness is defined in algorithmic complexity theory, which I do not find surprising at all, since computer scientists are empiricists at heart.

Kolmogorov randomness defines a string (usually of bits) as being random if and only if it is shorter than any computer program that can produce that string. To make this precise, a universal computer (or universal Turing machine) must be specified, so that "program" means a program for this universal machine. A random string in this sense is "incompressible" in that it is impossible to "compress" the string into a program whose length is shorter than the length of the string itself. A counting argument is used to show that, for any universal computer, there is at least one algorithmically random string of each length. Whether any particular string is random, however, depends on the specific universal computer that is chosen.

Wikipedia - Kolmogorov Complexity

Here a completely different terminology is used to say basically the same thing:

  • “compressibility” = “explainability” = “anticipatability”

  • “randomness can only be defined relative to a specific choice of a universal computer” = “randomness is in the eye of the beholder”
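
This equivalence between compressibility and explainability can be made tangible with an off-the-shelf compressor. The sketch below is mine, not from any of the quoted sources: it uses zlib as a crude, observer-dependent stand-in for a universal machine, and the helper name `compressed_size` and the sample strings are illustrative choices.

```python
import random
import zlib

# zlib as a crude upper bound on Kolmogorov complexity: a string is
# "random" relative to this particular observer if it cannot be
# compressed much below its own length.
def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, level=9))

patterned = b"0101" * 1000  # 4000 bytes of pure pattern
rng = random.Random(42)     # seeded so the sketch is reproducible
noisy = bytes(rng.getrandbits(8) for _ in range(4000))  # 4000 pseudo-random bytes

print(compressed_size(patterned))  # tiny: the pattern is highly "explainable"
print(compressed_size(noisy))      # roughly 4000: incompressible to zlib

# Caveat, mirroring the Shallis quote: zlib's failure to compress is no
# proof of randomness. The "noisy" bytes are perfectly predictable to
# anyone who knows the seed, just as the binary-coded song is to a musician.
```

The same experiment with a different compressor (a different "universal computer") would draw the random/non-random line elsewhere, which is exactly the relativity the bullet points describe.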



Quantum Autonomy

Quantum Mechanics has randomness built into its very foundations. Whether this randomness is absolute or the theory itself is currently incomplete is not relevant. There is a maximal degree of unanticipatability (i.e. autonomy) in Quantum Mechanics, and it is practically uncircumventable. (Even the most deterministic interpretations of Quantum Mechanics fall back on artificially introduced stochastic background fields.)

Individually, quantum collapses are completely unpredictable, but collectively they exhibit a pattern over time. (For more on such structured forms of randomness, read this older blog post.) This is what allows us to tame the autonomy of quantum states in practice: although we cannot exert any control over them at any single point in time, we can control their behavior over a period of time. Of course, as life evolves and gets faster (as pointed out at the beginning of this post), it will be able to probe time periods at ever more frequent rates and thereby tighten its grip on quantum phenomena.
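
The individually-unpredictable-yet-collectively-patterned character of collapse can be illustrated with a toy simulation. This is only an analogy, not a physical model: the outcome probability p = 0.36 and the seed are arbitrary choices of mine.

```python
import random

# Toy model of repeated quantum measurements: each "collapse" is
# unpredictable in isolation, but the ensemble frequency settles onto
# the underlying probability -- the only handle we have on the process.
p = 0.36                 # assumed probability of outcome 0 (illustrative)
rng = random.Random(7)   # seeded so the sketch is reproducible

def collapse() -> int:
    """One simulated measurement: unpredictable on its own."""
    return 0 if rng.random() < p else 1

outcomes = [collapse() for _ in range(100_000)]
freq = outcomes.count(0) / len(outcomes)
print(freq)  # close to 0.36: the pattern exists only over many trials
```

No inspection of a single outcome reveals p; only the long run does, which is the sense in which control over quantum autonomy is control "over a period of time."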

Another way to view maximal unanticipatability is to frame it as maximal complexity. Remember that every new complexity layer emerges through a complexification process. Once a functional network with a boundary becomes complex enough, it starts to behave more like an “actor” with teleological tendencies. Once it becomes ubiquitous enough, it starts to display an ensemble-behavior of its own, forming a higher layer of complexity and hiding away its own internal complexities. All fundamentally unanticipatable phenomena in nature are instances of such actors who seem to have a sense of unity (a form of consciousness?) that they “want” to preserve.

Why should quantum phenomena be an exception? Perhaps Einstein was right, God does not play dice, and there are experimentally inaccessible deeper levels of reality from which quantum phenomena emerge. (Bohm also thought this way.) Perhaps it is turtles all the way down (and up)?

Universe as a Collection of Nested Autonomies

Fighting for power is the same thing as fighting for control, and gaining control of something necessitates outgrowing the complexity of that thing. That is essentially why life is becoming more complex and autonomous over time.

Although each complexity layer can accommodate a similar level of maximal complexity within itself before starting to spontaneously form a new layer above itself, due to the nested nature of these layers, total complexity rises as new layers emerge. (e.g. We are more complex than our cells since we contain their complexity as well.)

It is not surprising that social sciences are much less successful than natural sciences. Humans are not that great at modeling other humans. This is expected. You need to out-compete in complexity what you desire to anticipate. Each layer can hope to anticipate only the layers below it. Brains are not complex enough to understand themselves. (It is amazing how we equate smartness with the ability to reason about lower layers like physics, chemistry etc. Social reasoning is actually much more sophisticated, but we look down on it since we are naturally endowed with it.)

Side Note: Generally speaking, each layer can have generative effects only upwards and restrictive effects only downwards. Generative effects can be bad for you as in having cancer cells and restrictive effects can be good for you as in having a great boss. Generative effects may falsely look restrictive in the sense that what generates you locks you in form, but it is actually these effects themselves which enable the exploration of the form space in the first place. Think at a population level, not at an individual level. Truth resides there.

Notice that as you move up to higher levels, autonomy becomes harder to describe. Quantum Mechanics, which currently seems to be the lowest level of autonomy, is open to mathematical scrutiny, but higher levels can only be simulated via computational methods and are not analytically accessible.

I know, you want to ask “What about General Relativity? It describes higher level phenomena.” My answer to that would be “No, it does not.”

General Relativity does not model a higher level complexity. It may be very useful today but it will become increasingly irrelevant as life dominates the universe. As autonomy levels increase all over, trying to predict galactic dynamics with General Relativity will be as funny and futile as using Fluid Dynamics to predict the future carbon dioxide levels in the atmosphere without taking into consideration the role of human beings. General Relativity models the aggregate dynamics of quantum “decisions” made at the lowest autonomy level. (We refer to this level-zero as “physics”.) It is predictive as long as higher autonomy levels do not interfere.

God as the Highest Level of Autonomy

The universe shows evidence of the operations of mind on three levels. The first level is elementary physical processes, as we see them when we study atoms in the laboratory. The second level is our direct human experience of our own consciousness. The third level is the universe as a whole. Atoms in the laboratory are weird stuff, behaving like active agents rather than inert substances. They make unpredictable choices between alternative possibilities according to the laws of quantum mechanics. It appears that mind, as manifested by the capacity to make choices, is to some extent inherent in every atom. The universe as a whole is also weird, with laws of nature that make it hospitable to the growth of mind. I do not make any clear distinction between mind and God. God is what mind becomes when it has passed beyond the scale of our comprehension. God may be either a world-soul or a collection of world-souls. So I am thinking that atoms and humans and God may have minds that differ in degree but not in kind. We stand, in a manner of speaking, midway between the unpredictability of atoms and the unpredictability of God. Atoms are small pieces of our mental apparatus, and we are small pieces of God's mental apparatus. Our minds may receive inputs equally from atoms and from God.

Freeman Dyson - Progress in Religion

I remember the moment when I ran into this exhilarating paragraph of Dyson’s. It was such a relief to find a high-caliber thinker who also interprets quantum randomness as choice-making. Nevertheless, with all due respect, I would like to clarify two points that I hope will help you understand Dyson’s own personal theology from the point of view of the philosophy outlined in this post.

  • There are many many levels of autonomies. Dyson points out only the most obvious three. (He calls them “minds” rather than autonomies.)

    • Atomic. Quantum autonomy is extremely pure and in your face.

    • Human. A belief in our own autonomy comes almost by default.

    • Cosmic. Universe as a whole feels beyond our understanding.

  • Dyson defines God as “what mind becomes when it has passed beyond the scale of our comprehension” and then he refers to the entirety of the universe as God as well. I on the other hand would have defined God as the top level autonomy and not referred to human beings or the universe at all, for the following two reasons:

    • God should not be human centric. Each level should be able to talk about its own God. (There are many things out there that would count you as part of their God.)

      • Remember that the levels below you can exert only generative effects towards you. It is only the levels above that can restrict you. In other words, God is what constrains you. Hence, striving for freedom is equivalent to striving for Godlessness. (It is no surprise that people turn more religious when they are physically weak or mentally susceptible.) Of course, complete freedom is an unachievable fantasy. What makes humans human is the nurturing (i.e. controlling) cultural texture they are born into. In fact, human babies cannot even survive without a minimal degree of parental and cultural intervention. (Next time you look into your parents’ eyes, remember that part of your God resides in there.) Of course, we also have a certain degree of freedom in choosing what to be governed by. (Some let money govern them for instance.) At the end of the day, God is a social phenomenon. Every single higher level structure we create (e.g. governments selected by our votes, algorithms trained on our data) governs us back. Even the ideas and feelings we restrict ourselves by arise via our interactions with others and do not exist in a vacuum.

    • Most of the universe currently seems to exhibit only the lowest level of autonomy. Not everywhere is equally alive.

      • However, as autonomy reaches higher levels, it will expand in size as well, due to the nested and expansionary nature of complexity generation. (Atomic autonomy lacks extensiveness in the most extreme sense.) So eventually the top level autonomy should grow in size and seize the whole of reality. What happens then? How can such an unfathomable entity exercise control over the entire universe, including itself? Is not auto-control paradoxical in the sense that one can not out-compete in complexity oneself? We should not expect to be able to answer such tough questions, just like we do not expect a stomach cell to understand human consciousness. Higher forms of life will be wildly different and smarter than us. (For instance, I bet that they will be able to manipulate the spacetime fabric which seems to be an emergent phenomenon.) In some sense, it is not surprising that there is such a proliferation of religions. God is meant to be beyond our comprehension.

Four men, who had been blind from birth, wanted to know what an elephant was like; so they asked an elephant-driver for information. He led them to an elephant, and invited them to examine it; so one man felt the elephant's leg, another its trunk, another its tail and the fourth its ear. Then they attempted to describe the elephant to one another. The first man said, “The elephant is like a tree”. “No,” said the second, “the elephant is like a snake”. “Nonsense!” said the third, “the elephant is like a broom”. “You are all wrong,” said the fourth, “the elephant is like a fan”. And so they went on arguing amongst themselves, while the elephant stood watching them quietly.

- The Indian folklore story of the blind men and the elephant, as adapted from E. J. Robinson’s Tales and Poems of South India by P. T. Johnstone in the Preface of Sketches of an Elephant

appeal of the outrageous

We should perhaps also add to this list of criteria the response from the famous mathematician John Conway to the question of what makes a great conjecture: “It should be outrageous.” An appealing conjecture is also somewhat ridiculous or fantastic, with unforeseen range and consequences. Ideally it combines components from distant domains that haven’t met before in a single statement, like the surprising ingredients of a signature dish.

Robbert Dijkgraaf - The Subtle Art of the Mathematical Conjecture

We are used to click-bait news with outrageous titles that incite our curiosity. This may look like a one-off ugly phenomenon, but it is not. As consumers of information, we display the same behavior everywhere. This forces even scientists to produce counter-intuitive papers with outrageous titles so that they can attract the attention of the press. (No wonder most published research is false!)

Generally speaking, people do not immediately recognize the importance of an emerging matter. Even in mathematics, you need to induce a shock to spur activity and convince others to join you in the exploration of a new idea.

In 1872, Karl Weierstrass astounded the mathematical world by giving an example of a function that is continuous at every point but whose derivative does not exist anywhere. Such a function defied geometric intuition about curves and tangent lines, and consequently spurred much deeper investigations into the concepts of real analysis.

Robert G. Bartle &‎ Donald R. Sherbert - Introduction to Real Analysis (Page 163)

Similar to the above example, differential topology became a subject in its own right and attracted wide attention only after John Milnor shocked the world by showing that the 7-dimensional sphere admits exactly 28 oriented diffeomorphism classes of differentiable structures. (Why 28, right? It actually marks the beginning of one of the most amazing number sequences in mathematics.)
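
Weierstrass's monster is easy to probe numerically. The sketch below is my own illustration: the parameters a = 0.5, b = 13 are my choice (Weierstrass took b an odd integer with ab > 1 + 3π/2, which holds here since ab = 6.5). The series converges rapidly, so the function is continuous, yet its difference quotients refuse to settle as h shrinks.

```python
import math

# Partial sums of the Weierstrass function W(x) = sum_n a^n cos(b^n pi x).
# With 0 < a < 1 the series converges uniformly, hence W is continuous;
# with ab large enough it is nowhere differentiable.
def weierstrass(x: float, terms: int, a: float = 0.5, b: float = 13.0) -> float:
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

# Continuity side: partial sums converge geometrically fast.
w20 = weierstrass(0.3, 20)
w40 = weierstrass(0.3, 40)
print(abs(w40 - w20))  # below 2e-6: the tail is bounded by a geometric series

# Non-differentiability side: difference quotients oscillate wildly and
# grow as h -> 0, instead of approaching a slope.
for h in (1e-2, 1e-4, 1e-6):
    print((weierstrass(0.3 + h, 40) - w40) / h)
```

The geometric tail bound is what defies intuition: a series this tame analytically produces a curve with no tangent line anywhere.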

where extremes meet

Here are five examples where extremes meet and result in sameness despite the diametrically opposed states of mind.

Happiness

Pathologically happy ones do not worry because they do not realize that there is anything worth worrying about. Severely depressed ones do not give a shit about anything either, but theirs is a wise apathy that knows itself.


Knowledge

Knowledge has two extremes which meet; one is the pure natural ignorance of every man at birth, the other is the extreme reached by great minds who run through the whole range of human knowledge, only to find that they know nothing and come back to the same ignorance from which they set out, but it is a wise ignorance which knows itself.

- Blaise Pascal

Reality

That Nirvana and Samsara are one is a fact about the nature of the universe; but it is a fact which cannot be fully realized or directly experienced, except by souls far advanced in spirituality.

Aldous Huxley - The Perennial Philosophy (Page 70)

Empathy

One study found that the most empathetic nurses were most likely to avoid dying patients early in their training, before they had learned to deal with the distress caused by empathizing too much. Overempathy can look from the outside like selfishness - and even produce selfish behavior.

Bruce D. Perry - Born for Love (Page 44)

Sense of Heat

The human sense of hot or cold exhibits the queer feature of ‘les extremes se touchent’: if we inadvertently touch a very cold object, we may for a moment believe that it is hot and has burnt our fingers.

Erwin Schrödinger - Mind and Matter (Page 158)

charisma and meaning as rapid expansions

Charisma is a geometric phenomenon, generated via a rapid spatiotemporal expansion of the self within physical space.

Next time you enter that Japanese restaurant enter the place as if you own it and eat that edamame like you have been eating it for the last one hundred years.


Meaning is a topological phenomenon, generated via a rapid spatiotemporal expansion of the self within the social graph.

The crusader's life gains purpose by suborning his heart and soul to a cause greater than himself; the traditionalist finds the transcendent by linking her life to traditions whose reach extend far past herself.

Tanner Greer - Questing for Transcendence

genius vs wisdom

Genius maxes out at birth and gradually diminishes. Wisdom displays the opposite dynamic: it is nonexistent at birth and gradually builds up until death. That is why genius is often seen as a potentiality and wisdom as an actuality. (The young have potentiality, not the old.)

Midlife crises tend to occur around the time when wisdom surpasses genius. That is why earlier maturation correlates with an earlier “mid”-life crisis. (On the other hand, greater innate genius does not result in a delayed crisis, since it entails faster accumulation of wisdom.)


"Every child is an artist. The problem is how to remain an artist once we grow up."
- Pablo Picasso

Here Picasso is actually asking you to maintain your genius at the expense of gaining less wisdom. That is why creative folks tend to be quite unwise (and require the assistance of experienced talent managers to succeed in the real world). They methodically wrap themselves inside protective environments that allow them to pause or postpone their maturation.

Generally speaking, the greater control you have over your environment, the less wisdom you need to survive. That is why the wisest people originate from tough, low-survival-rate conditions, and rich families have a hard time raising unspoiled kids without simulating artificial scarcities. (Poor folks have the opposite problem and therefore simulate artificial abundances by displaying more love, empathy, etc.)


"Young man knows the rules and the old man knows the exceptions."
- Oliver Wendell Holmes Sr.

Genius is hypothesis-driven and wisdom is data-driven. That is why mature people tend to prefer experimental (and historical) disciplines, young people tend to dominate theoretical (and ahistorical) disciplines etc.

The old man can be rigid but he can also display tremendous cognitive fluidity because he can transcend the rules, improvise and dance around the set of exceptions. In fact, he no longer thinks of the exceptions as "exceptions" since an exception can only be defined with respect to a certain collection of rules. He directly intuits them as unique data points and thus is not subject to the false positives generated by operational definitions. (The young man on the other hand has not explored the full territory of possibilities yet and thus needs a practical guide no matter how crude.)

Notice that the old man can not transfer his knowledge of exceptions to the young man because that knowledge is in the form of an ineffable complex neural network that has been trained on tons of data. (Apprentice-master relationships are based on mimetic learning.) Rules on the other hand are much more transferable since they are of linguistic nature. (They are not only transferable but also a lot more compact in size, compared to the set of exceptions.) Of course, the fact that rules are transferable does not mean that the transfers actually occur! (Trivial things are deemed unworthy by the old man and important things get ignored by the young man. It is only the stuff in the middle that gets successfully transferred.)

Why is it much harder for old people to change their minds? Because wisdom is data-driven, and in a data-driven world, bugs (and biases) are buried inside large data sets and therefore much harder to find and fix. (In a hypothesis driven world, all you need to do is to go through the much shorter list of rules, hypotheses etc.)


The Hypothesis-Data duality highlighted in the previous section can be recast as young people being driven more by rational thinking vs. old people being driven more by intuitional thinking. (In an older blog post, we had discussed how education should focus on cultivating intuition, which leads to a superior form of thinking.)

We all start out life with a purely intuitive mindset. As we learn we come up with certain heuristics and rules, resulting in an adulthood that is dominated by rationality. Once we accumulate enough experience (i.e. data), we get rid of these rules and revert back to an intuitive mindset, although at a higher level than before. (That is why the old get along very well with kids.)

Artistic types (e.g. Picasso) tend to associate genius with the tabula-rasa intuitive fluidity of the newborn. Scientific types tend to associate it with the rationalistic peak of adulthood. (That is why they start to display insecurities after they themselves pass through this peak.)

As mentioned in the previous section, rules are easily transferable across individuals. Results of intuitive thinking on the other hand are non-transferable. From a societal point of view, this is a serious operational problem and the way it is overcome is through a mechanism called “trust”. Since intuition is a black box (like all machine learning models are), the only way you can transfer it is through a wholesome imitation of the observed input-outputs. (i.e. mimetic learning) In other words, you can not understand black box models, you can only have faith in them.

As we age and become more intuition-driven, our trust in trust increases. (Of course, children are dangerously trusting to begin with.) Adulthood, on the other hand, is dominated by rational thinking and therefore corresponds to the period when we are most distrustful of each other. (No wonder economists are such distrustful folks. They always model humans as ultra-rationalistic machines.)

Today we vastly overvalue the individual over the society, and the rational over the intuitional. (Just look at how we structure school curricula.) We decentralized society and trivialized the social fabric by centralizing trust. (Read the older blog post Blockchain and Decentralization.) We no longer trust each other because we simply do not have to. Instead we trust the institutions that we collectively created. Our analytical frameworks have reached an individualist zenith in physics, which is currently incapable of guaranteeing the reality of other people’s points of view. (Read the older blog post Reality and Analytical Inquiry.) We have banished faith completely from public discourse and have even demanded that God be verifiable.

In short, we seem to be heading to the peak adulthood phase of humanity, facing a massive mid-life crisis. Our collective genius has become too great for our own good.

In this context, the current rise of data-driven technological paradigms is not surprising. Humanity is entering a new intuitive, post-midlife-crisis phase. Our collective wisdom is now being encoded in the form of disembodied black-box machine-learning models, which will keep getting more and more sophisticated over time. (At some point, we may dispense with our analytical models altogether.) The social fabric, on the other hand, will keep being stretched as more types of universally trusted centralized nodes emerge and enable new forms of indirect intuition transfer.

Marx was too early. He viewed socialism in a human way, as a rationalistic inevitability, but it will probably arrive in an inhuman fashion via intuitionistic technologies. (Still calling such a system socialism will be vastly ironic, since it will rest on a complete absence of trust among individuals.) Of course, not all decision making will be centralized. Remember that the human mind itself emerged for addressing non-local problems. (There is still a lot of local decision making going on within our cells, etc.) The “hive” mind will be no different, and as usual, deciding whether a problem in the gray zone is local or non-local will be determined through a tug-of-war.

The central problem of ruler-ship, as Scott sees it, is what he calls legibility. To extract resources from a population the state must be able to understand that population. The state needs to make the people and things it rules legible to agents of the government. Legibility means uniformity. States dream up uniform weights and measures, impress national languages and ID numbers on their people, and divvy the country up into land plots and administrative districts, all to make the realm legible to the powers that be. The problem is that not all important things can be made legible. Much of what makes a society successful is knowledge of the tacit sort: rarely articulated, messy, and from the outside looking in, purposeless. These are the first things lost in the quest for legibility. Traditions, small cultural differences, odd and distinctive lifeways … are all swept aside by a rationalizing state that preserves (or in many cases, imposes) only what can be understood and manipulated from the 2,000 foot view. The result, as Scott chronicles with example after example, are many of the greatest catastrophes of human history.

Tanner Greer - Tradition is Smarter Than You

connectivity and cultural diversity

Intergenerational cultural meme transfer mechanisms have all broken down. Instead of asking our own grandparents about their child-rearing practices, we all go to the same search engine and click on the same links. We all watch the same movies and read the same books. Greater connectivity has brought us lesser diversity. We seem to be heading towards a single monoculture as social trends propagate at the speed of light through fiber-optic cables.

Why should we worry? Just scroll back in time and look at the rise and fall of civilizations. Why have certain cultures prevailed during certain periods? When brute force worked, the brute won. When ideas became important, the cerebral won. There are of course many reasons why developing countries have a hard time catching up, but one important aspect is cultural. Some cultures are just not meant to be successful in today’s environment, and this is normal. (Inspect those countries that did indeed catch up and you will find cultural discontinuity, widespread debasement and confusion of values.)

Tomorrow conditions will change. We need to maintain diversity to be able to cope with those upcoming changes, which we cannot fathom today.

Postmodernists are right in the sense that no culture is superior to another in an absolute sense. However, this does not mean that all cultures are equal. Relative to a certain context or problem, we can objectively talk about some cultures being fitter than others. (Remove the context and any comparison becomes impossible.)

Note that, when one culture assimilates another, it selfishly hedges itself against the future possibility of losing the evolutionary upper hand. In other words, it prolongs its own survival at the expense of decreasing the adaptivity of the whole.

truth as status quo

We now have the science that argues how you're supposed to go about building something that doesn't have these echo chamber problems, these fads and madnesses. We're beginning to experiment with that as a way of curing some of the ills that we see in society today. Open data from all sources, and this notion of having a fair representation of the things that people are actually choosing, in this curated mathematical framework that we know stamps out echoes and fake news.

The Human Strategy

Fads and echo chambers provide the means to break positive feedback loops (by helping us counter them with virtual positive feedback loops) and get out of bad equilibria (by helping us cross the critical thresholds necessary to initiate change). Preventing illusion is akin to preventing progress. Every new truth starts out as an untruth. The future will be in conflict with today. Today’s new reality is yesterday’s false belief.

It is startling to realize how much unbelief is necessary to make belief possible.

Eric Hoffer - The True Believer (Page 79)

We are constructors of our social world as well as receivers. That is why companies like Facebook should never be involved in this war against “fake news”. Truth is inherently political. Algorithms for sniffing it out will inevitably end up defending the status quo.

plane vs. train travel

I love train journeys. Going somewhere by plane simply does not give the same pleasure.

  • You board the plane and you get off; what happens in between is a mystery. You can follow your journey only abstractly, on a digital map, and that is it. On a train, however, you can watch the outside through huge windows like a film strip. You feel, in a fully visual sense, that you are actually moving toward somewhere.

  • The sound of the rails is like a heartbeat; you can estimate your speed from its rhythm. On a plane there is an unnatural, uninterrupted drone, accompanying your unnaturally constant speed.

  • On a plane everyone faces the same direction, or rather, everyone stares at the screen in front of them. There is no shared experience to speak of. On a train you can sit facing one another. Shared tables create a strange warmth.

  • On a plane even turbulence is no fun; everything is announced in advance. On a train you suddenly get jolted. There is a pleasant state of shared unpreparedness. Moreover, every jolt has a reason: the curves of a railway are shaped by the geography they pass through. In other words, your jolts arise from harmonies you can easily observe by looking out the window. Turbulence on a plane, by contrast, unfolds in a meaningless abstraction. Why it comes and why it goes is unclear. Pure uncertainty leads to pure tension.

  • Plane travel is, in general, a stilted experience. Airports are outside the city, and getting there is a chore. Once you get there, boarding the plane is a chore. Once you board, a thousand and one ordeals follow.

  • The freedom to be able to get off at the next stop is such a beautiful freedom! Intermittent as it is, it is a source of breathing room. Even watching people light a cigarette at each station and then return to the train is a pleasure.

reality and analytical inquiry

What is real and out there? This question is surprisingly hard to answer.

The only way we seem to be able to define ontology is as shared epistemology. (Every other definition suffers from an immediate deficiency.) In other words, what is real is what every possible point of view agrees upon, and vice versa. There is no such thing as your reality. (Note that this definition breaks the duality between ontology and epistemology. The moment you make inferences about the former, it gets subsumed by the latter. Is this surprising? Epistemology is all about making inferences. In other words, the scientific method itself is what is breaking the duality.)

Now we have a big problem: ontological changes cannot be communicated to all points of view in an instantaneous manner. This is outlawed by the finiteness of the speed of the fastest causation propagator, which is usually taken to be light. In fact, according to our current understanding of physics, there seems to be nothing invariant across all points of view. (e.g. the firewall paradox, the twin paradox, etc.) Whenever we get our hands on some property, it slips away with the next advance in our theories.

This is a weird situation, an absolute mind-fuck to be honest. If we endorse all points of view, we can define ontology, but then nothing seems to be real. If we endorse only our own point of view, we cannot define ontology at all and get trapped in a solipsistic world where every other point of view becomes unreal and other people turn into zombies.

Could all the different points of view be part of a single (for lack of a better term, “God”) point of view? In this case, our own individual point of view becomes unreal. This is a bad sacrifice indeed, but could it help us salvage reality? Nope… Can the universe observe itself? The question does not even make any sense!

It seems that physics cannot start off without assuming a solipsistic worldview, adopting a single coordinate system which cannot be sure about the reality of other coordinate systems.

In an older blog post, I explained how dualities emerge as byproducts of analytical inquiry and thereby artificially split the unity of reality. Here we have a similar situation. The scientific method (i.e. analytical inquiry) automatically gives rise to solipsism and thereby artificially splits the unity of reality into considerations from different points of view.

In fact, the notions of duality and solipsism are closely related. To see why, let us assume that we have a duality between P and not-P. Then

  • Within a single point of view, nothing can satisfy both P and not-P.

  • No property P stays invariant across all points of view.

Here, the first statement is a logical necessity and the second statement is enforced upon us by our own theories. We will take the second statement as the definition of solipsism.

Equivalently, we could have said

  • If property P holds from the point of view of A and not-P holds from the point of view of B, then A cannot be equal to B.

  • For every property P, there exists at least one pair (A,B) such that A is not equal to B and P holds from the point of view of A while not-P holds from the point of view of B.

Now let X be the set of pairs (A,B) such that P holds from the point of view of A and not-P holds from the point of view of B. Also let △ stand for the diagonal set consisting of pairs (A,A). Then the above statements become

  • X cannot hit △.

  • X cannot miss the complement of △.

Using just mathematical notation we have

  • X ∩ △ = ∅

  • X ∩ △’ ≠ ∅

In other words, dualities and solipsism are defined using the same ingredients! Analytical inquiry gives rise to both at the same time. It supplies you with labels to attach to reality (via the above equality) but simultaneously takes reality away from you (via the above inequality). Good deal, right? After all, (only) nothing comes for free!
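The two conditions can even be checked mechanically on a toy universe of observers. The sketch below is purely illustrative: the three observers and their reports on P are invented for the example, not drawn from anything in the text.

```python
# Toy universe of observers, each reporting whether property P holds
# from their point of view. (Invented data, for illustration only.)
observers = ["A", "B", "C"]
holds_P = {"A": True, "B": False, "C": True}

# X: pairs (a, b) where P holds for a and not-P holds for b.
X = {(a, b) for a in observers for b in observers
     if holds_P[a] and not holds_P[b]}

# The diagonal: pairs of the form (a, a).
diagonal = {(a, a) for a in observers}
all_pairs = {(a, b) for a in observers for b in observers}

# Duality: no single point of view satisfies both P and not-P,
# so X never hits the diagonal.
assert X & diagonal == set()

# Solipsism: P is not invariant across points of view (A and B
# disagree), so X does hit the complement of the diagonal.
assert X & (all_pairs - diagonal) != set()

print(sorted(X))  # the pairs of disagreeing observers
```

If every observer reported the same value for P, X would become empty and the second assertion would fail: duality without solipsism, which is exactly the invariance our physical theories keep failing to deliver.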

Phenomena are the things which are empty of inherent existence, and inherent existence is that of which phenomena are empty.

Jeffrey Hopkins - Meditations on Emptiness (Page 9)


Recall that at the beginning of this post we defined ontology as shared epistemology. One can also go the other way around and define epistemology as shared ontology. What does this mean?

  • To say that some thing exists we need every mind to agree to it.

  • To say that some statement is true we need every body to obey it.

This is actually how truth is defined in model theory. A statement is deemed true if and only if it holds in every possible embodiment.
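As a minimal sketch of that definition (with invented toy "embodiments", here just small sets of numbers), a statement counts as true only when every embodiment obeys it:

```python
# Toy embodiments: each "body" is a small world, here a set of numbers.
# (Invented data, for illustration only.)
worlds = [{1, 2, 3}, {2, 4}, {2, 10, 20}]

def is_true(statement):
    """Model-theoretic truth: a statement is deemed true if and only
    if it holds in every possible embodiment."""
    return all(statement(world) for world in worlds)

contains_two = lambda world: 2 in world  # obeyed by every body
contains_one = lambda world: 1 in world  # disobeyed by the second body

print(is_true(contains_two))  # True
print(is_true(contains_one))  # False
```

Note the asymmetry this makes vivid: a single disobedient embodiment is enough to demote a statement from truth, which is why the claim "no statement is universally obeyed" is so radical.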

In this sense, the epistemology-ontology duality mirrors the mind-body duality. (For a mind, reality consists of bodies and what is fundamental is existence. For a body, reality consists of minds and what is fundamental is truth.) For thousands of years, Western philosophy has been trying to break this duality, which has popped up in various forms. Today, for instance, physicists are still debating whether “it” arose from “bit” or “bit” arose from “it”.

Let us now do another exercise. What is the epistemological counterpart of the ontological statement that there are no invariances in physics?

  • Ontology. There is no single thing that every mind agrees to.

  • Epistemology. There is no single statement that every body obeys.

Sounds outrageous, right? How can there be no statement that is universally true? Truth is absolute in logic, but relative in physics. We are not allowed to make any universal statements in physics, no matter how trivial.

nonsensically high valuation of uber

Uber will be the single largest value collapse in technology history. Here are the reasons why:

  • The service is a commodity. Users do not care about which driver or car picks them up, as long as the driver is not crazy and the car is not filthy. (It is hard to preserve even such basic qualities at large scale. Generally speaking, you cannot perform above average when you become almost as big as the market itself. This in turn increases your insurance cost per transaction.)

  • The technology is a commodity. It is no longer hard to build the basic application from scratch. (Even municipalities have started doing it themselves.)

  • Neither drivers nor users are loyal to the company. Uber is fundamentally a utility app with very low switching costs. A lot of drivers and users utilize the rival apps as well. (Drivers are looking for more rides and users are looking for cheaper prices.) In fact, there are even aggregator apps that help drivers juggle more easily between the different networks.

  • It is not the winner-takes-all market it was imagined to be. Any network dense enough that the average waiting time for the user is below 5 minutes is good enough. Killing competitors through predatory pricing does not change the basic market structure. If the market allows an oligopolistic structure, it will sooner or later (i.e. once Uber runs out of all the stupid money in the world to finance every ride) converge on one.

  • Unit economics is not improving. (In fact, as mentioned above, insurance cost per transaction is getting worse.) Rides do not scale since most of the costs are variable. (Cars are owned by the drivers and efficiency gains from their greater utilization quickly maxes out, especially since they are used for personal purposes as well. So there is not much for Uber to suck away.) The underlying (evil) hope is that once Uber becomes a monopoly, it will be able to relax and dictate prices. This is a false hope however since governments do not tolerate in-your-face physical monopolies, especially if they create negative externalities like luring people away from public transportation and increasing congestion. (They seem to be more lenient with abstract digital ones.)

  • With the arrival of autonomous cars, the whole industry will change. Any gains from building a driver network will be gone, making it easier to launch Uber-like services with sheer capital. (Make no mistake, there will be a LOT of new capital coming in. The Germans will hit especially hard, old money will form alliances, etc.) Autonomy itself will quickly become commoditized and centralized around a few intermediary technology companies, who will be training all the models and centralizing all the data. Also, at some point, (as they do in the hospitality industry) users will start placing greater value on consistency of quality. Hence, instead of riding in random cars, they will prefer to become members of the fleets owned by the car manufacturers themselves. God knows what else will happen… Autonomy will be a big disruptive wave with a lot of currently unforeseeable consequences.

Nevertheless, do not short-sell Uber when it IPOs this year. It is backed by an aggressive giant called SoftBank, which is ruled by an old man who is backed by an infinite amount of blood money. As Keynes said, markets can remain irrational longer than you can remain solvent, and that will surely be the case for Uber.