illusion of individuality

Every new emergent layer in evolution is built out of the previous one. In other words, living entities are like matryoshka dolls, made of layers and layers of living entities inside each other. (Yes, I believe that even atoms are in some sense alive. See the post Emergence of Life for more details.) This does not mean that a newly emerging layer preserves the layer below as is. On the contrary, it modifies the entities that it is generated out of in a significant way, just like technology is modifying us today by slowly automating all the recurring external human patterns away.

Here, the qualifiers recurring and external are important, because they also happen to define exactly the domain of science. What is unique (not recurring) or subjective (not external) can not be studied by science, and therefore can not be automated by technology.

As technology unfolds, it slowly exposes our true human core (i.e. what is unique and subjective), which is actually the only thing it will need for its steady-state survival at maturity. We should not fight against this trend. On the contrary, we should embrace and accelerate it by increasing our social flexibility. True, we may be losing jobs in droves, but in the long run technology makes all of us wealthier and healthier. (There is a lot of politics involved here of course, but you get what I mean. Just compare current living standards to the living standards of a few hundred years ago.)

Remember, we are what animates technology and makes it adaptive. In other words, artificial general intelligence is already here. It is operating at a global scale through the multi-cloud layer and is composed of myriads of artificial (special) intelligences which are communicating through us. The dynamic is no different from your own mind being a society of smaller minds and your own genome being a society of smaller genomes.

The magic glue is always in the network. Smartness is always an emergent, social affair. In other words, there is no such thing as a general intelligence composed of a single node. Yes, we will have super intelligent robots in the future, but the prime source of their intelligence will always be the global multi-cloud layer. In other words, they will continuously tap into the entirety of our accumulated wisdom, which will keep evolving in the background.

Of course, the supreme complexity will be deftly hidden away, and it will look as if the entire intelligence resides within the individual robots themselves. The irony is that the robots too will believe in this illusion, just as we tend to mistakenly equate our minds with our consciousnesses. Remember, it took us thousands of years to even notice the bare existence of the unconscious. Even today we have no clue about its structure, although deep down we all feel that it somehow links us together in a mysterious fashion.

“We are like islands in the sea, separate on the surface but connected in the deep.”
- William James

This obviously takes us beyond the reach of science and into the territory of metaphysics. But that should not stop us from asking some fun questions!

  • The robot unconscious taps into the electromagnetic field. What field does the human unconscious tap into? Is the vacuum not what we think it is?

  • Information is encoded into the electromagnetic field by the collective human consciousness. Whose collective consciousness is encoding information into this other field? Are cells not what we think they are?

public intellectual vs academic intellectual

Does academia have a monopoly over the world of ideas? Does an intellectual need to be an academician to be taken seriously?

The answer to both questions is no. One reason is a trend taking place outside academia; the other is a trend taking place inside it.

  • Outside. Thanks to the rise of digital technologies, it has become dramatically easier to access and distribute information. You do not need to be affiliated with any university to participate in high quality lectures, freely access any journal or book, and exchange ideas.

  • Inside. Einstein considered Goethe to be “the last man in the world to know everything.” Today academia has become so specialized that most academicians have no clue what their next-door colleagues are working on. This has had the side effect of pushing public intellectuals, and therefore a portion of intellectual activity, outside academia.

I have written a lot about the rise of the digital before. In this post I will be focusing on the second point.

Many of you probably do not even know what it means to be a public intellectual. Don’t worry, neither did I. After all, we have all gone through the same indoctrination during our education, which subtly instilled in us the belief that academia has a monopoly over the world of ideas, and that the only true intellectuals are those residing within it.

Before we start, note that the trends mentioned above are not some short-term phenomena. They are both reflections of metaphysical principles that govern the evolution of information, and have nothing to do with us whatsoever.

  • The first trend is unstoppable because information wants to be free.

  • The second trend is unstoppable because information wants to proliferate.


A Personal Note

A few readers asked me why I have not considered pursuing an academic career. I actually did, and by doing so, learned the hard way that academia is a suffocating place for people like me, who would rather expand their range than increase their depth.

This is the main reason why I wanted to write this piece. I am pretty sure that there are young folks out there going through similar dilemmas, burning with intellectual energy but also suffering from extreme discomfort in their educational environments. They should not have to go through the same pains to realize that the modern university has turned into a cult of experts.

The division of labor is the very organizational principle of the university. Unless that principle is respected, the university simply fails to be itself. The pressure, therefore, is constant and massive to suppress random curiosity and foster, instead, only a carefully channeled, disciplined curiosity. Because of this, many who set out, brave and cocky, to take academe as a base for their larger, less programmed intellectual activity, who are confident that they can be in academe but not of it, succumb to its culture over time.

… It takes years of disciplined preparation to become an academic. It takes years of undisciplined preparation to become an intellectual. For a great many academics, the impulse to break free, to run wild, simply comes too late for effective realization.

Jack Miles - Three Differences Between an Academic and an Intellectual

There is of course nothing wrong with developing a deep expertise in a narrow subject. But societies need the opposite type of intellectual as well, for a variety of reasons which will become very clear by the end of this post.

When I look back in time to see what type of works had the greatest impact on my life, the pattern is very clear. Without exception, all such works were produced by public intellectuals with great range and tremendous communication skills. In fact, if I knew I was going to be stranded on a desert island, I would not bring a single book by an academic intellectual. (Of course, without the inputs of hundreds of specialists, there would be nothing for the generalist to synthesize. Nevertheless, it is the synthesis that people prefer to carry in their minds at all times, not the original inputs.)

This post is a tribute to the likes of David Brooks (Sociology), Noam Chomsky (Politics), Nassim Nicholas Taleb (Finance), Kevin Kelly (Technology), Ken Wilber (Philosophy), Paul Davies (Physics) and Lynn Margulis (Biology). Thank you for being such great sources of inspiration.

Anyway, enough on the personal stuff. Let us now start our analysis.

We will cycle through five different characterizations, presenting public intellectuals as

  • Amorphous Dilettantes,

  • Superhuman Aspirants,

  • Obsessive Generalists,

  • Metaphor Artists, and

  • Spiritual Leaders.

Thereby, we will see how they

  • enhance our social adaptability,

  • push our individual evolutionary limits,

  • help science progress,

  • communicate the big picture to us, and

  • lead us in the right direction.


Public Intellectuals as Amorphous Dilettantes
Enhancing Our Social Adaptability

Every learning curve faces diminishing returns. So why become an expert at all? Why not settle for 80 percent competence? Just extract the gist of a subject and then move on to the next. Many fields are so complex that they are not open to complete mastery anyway.

Also, the world is such a rich place. Why blindly commit yourself to a single aspect of it? Monolithic ambitions are irrational.

Yes, it may be the experts who do the actual work of carrying society to greater heights. But while doing so, they fail to elevate themselves high enough to see the progress at large. That voyeuristic pleasure belongs only to the dilettantes.

Dilettantes are jacks of all trades, and their amorphousness is their asset.

  • They are very useful in resource-stricken and fast-changing environments, like an early-stage startup which faces an extremely diverse set of challenges with a very limited hiring budget. Just like stem cells, dilettantes can specialize on demand and then revert to their initial general state when there are enough resources to replace them with experts. (Good dilettantes do not multi-task. They serially focus on different things.)

  • They can act as the weak links inside innovation networks and thereby lubricate into existence a greater number of multidisciplinary efforts and serendipities. Just like people conversant in many languages, they can act as translators and unify otherwise disparate groups.

  • They are like wild bacteria that can survive freely on their own at the outer edges of humanity. An expert, on the other hand, can function only within a greater cooperative network. Thus, evolution can always fall back on the wild types if the environment changes at breakneck speed and destroys all such networks.

It is a pity that the status of dilettantes has plummeted in the modern age, whose characteristic collective flexibility enabled a more efficient deployment of experts. After all, as humans, we did not win the evolutionary game because we were the fastest or the strongest. We won because we were overall better than average, because we were versatile and better at adaptation. In other words, we won because we were true dilettantes.

Every 26 million years, more or less, there has been an environmental catastrophe severe enough to put down the mighty from their seat and to exalt the humble and meek. Creatures which were too successful in adapting themselves to a stable environment were doomed to perish when the environment suddenly changed. Creatures which were unspecialized and opportunistic in their habits had a better chance when Doomsday struck. We humans are perhaps the most unspecialized and the most opportunistic of all existing species. We thrive on ice ages and environmental catastrophes. Comet showers must have been one of the major forces that drove our evolution and made us what we are.

Freeman Dyson - Infinite in All Directions (Page 32)

Similarly, only generalist birds like robins can survive in our most urbanized locations. Super-dynamic environments always weed out the specialists.


Public Intellectuals as Superhuman Aspirants
Pushing Our Individual Evolutionary Limits

Humans were enormously successful because, in some sense, they contained a little bit of every animal. Their instincts were literally a synthesis.

Now what is really the truth about these soul qualities of humans and animals? With humans we find that they can really possess all qualities, or at least the sum of all the qualities that the animals have between them (each possessing a different one). Humans have a little of each one. They are not as majestic as the lion, but they have something of majesty within them. They are not as cruel as the tiger but they have a certain cruelty. They are not as patient as the sheep, but they have some patience. They are not as lazy as the donkey—at least everybody is not—but they have some of this laziness in them. All human beings have these things within them. When we think of this matter in the right way we can say that human beings have within them the lion-nature, sheep-nature, tiger-nature, and donkey-nature. They bear all these within them, but harmonized. All the qualities tone each other down, as it were, and the human being is the harmonious flowing together, or, to put it more academically, the synthesis of all the different soul qualities that the animal possesses.

Rudolf Steiner - Kingdom of Childhood (Page 43)

Now, just as animals can be viewed as “special instances” of humans, we can view humans as special instances of what a dilettante secretly aspires to become, namely a superhuman.

Human minds could integrate the instinctive (unconscious) aspects of all animal minds, thanks to the evolutionary budding of a superstructure called the consciousness, which allowed them to specialize their general purpose unconsciousness into any form necessitated by the changing circumstances.

Dilettantes try to take this synthesis to the next level, and aim to integrate the rationalistic (conscious) aspects of all human minds. Of course, they utterly fail at this task, since they lack the next-level superstructure necessary to control a general purpose consciousness. Nevertheless they try and try, in an incorrigibly romantic fashion. I guess some do it just for the sake of a few precious voyeuristic glimpses of what it feels like to be a superhuman.

Note that it will be silicon-based life - not us - that will complete the next cycle of differentiation-integration in the grand narrative of evolution. As I said before, our society is getting better at deploying experts wherever they are needed. This increased fluidity of labor is entirely due to the technological developments which enable us to govern ourselves more efficiently. What is emerging is a superconsciousness that is coordinating our consciousnesses, and pushing us in the direction of a single unified global government.

“Opte Project visualization of routing paths through a portion of the Internet. The connections and pathways of the internet could be seen as the pathways of neurons and synapses in a global brain” - Wikipedia

Nevertheless, there are advantages to internalizing portions of the hive mind. Collaboration outside can never fully duplicate the effects of collaboration within. As a general rule, the closer the “neurons”, the better the integration. (The “neuron” could be an entire human being or an actual neuron in the brain.)

Individual creators started out with lower innovativeness than teams - they were less likely to produce a smash hit - but as their experience broadened they actually surpassed teams: an individual creator who had worked in four or more genres was more innovative than a team whose members had collective experience across the same number of genres.

David Epstein - Range (Pages 209-210)

Notice that there is a pathological dimension to the superhuman aspiration, aside from the obvious narcissistic undertones. As one engulfs more of the hive mind, one inevitably ends up swallowing polar opposite profiles.

“The wisest human being would be the richest in contradictions, who has, as it were, antennae for all kinds of human beings - and in the midst of this his great moments of grand harmony.”

- Friedrich Nietzsche

“The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function.”

- F. Scott Fitzgerald

In a sense, reality is driven by insanity. It owes its “harmony” and dynamism to the embracing of the contradictory tensions created by dualities. We, on the other hand, feel a psychological pressure to choose sides and break the dualities within our social texture. Instead of expanding our consciousness horizontally, we choose to contract it to maintain consistency and sanity.

“A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.”

- Ralph Waldo Emerson

“Do I contradict myself? Very well. Then I contradict myself. I am large. I contain multitudes.”

- Walt Whitman

Recall that humans are an instinctual synthesis of the entire animal kingdom. This means that, while we strive for consistency at a rational level, we are often completely inconsistent at an emotional level, roaming wildly around the whole spectrum of possibilities. In other words, from the perspective of an animal, we probably look utterly insane, since it can not tell that there is actually a logic to this insanity that is internally controlled by a superstructure.

“A human being is that insane animal whose insanity has invented reason.”

- Cornelius Castoriadis

Public Intellectuals as Obsessive Generalists
Helping Science Progress

If a specialist is someone who knows more and more about less and less, a generalist is unapologetically someone who knows less and less about more and more. Both forms of knowledge are genuine and legitimate. Someone who acquires a great deal of knowledge about one field grows in knowledge, but so does someone who acquires a little knowledge about many fields. Knowing more and more about less and less tends to breed confidence. Knowing less and less about more and more tends to breed humility.

Jack Miles - Three Differences Between an Academic and an Intellectual


The difference between science and philosophy is that the scientist learns more and more about less and less until she knows everything about nothing, whereas a philosopher learns less and less about more and more until he knows nothing about everything.

Dorion Sagan - Cosmic Apprentice (Page 2)

What separates good public intellectuals from bad ones is that the good have a compass which guides them while they sail through the infinite sea of knowledge. Those without a compass display no humility at all. Instead, they suffer from gluttony, a sin just as deadly as the pride that plagues bad academic intellectuals, whose expertise-driven egos easily spill over into areas where they have no competence.

The compass I am talking about is analogical reasoning, the kind of reasoning needed for connecting the tapestry of knowledge. Good public intellectuals try to understand the whole geography rather than wandering around mindlessly like tourists. They have a pragmatic goal in mind, which is to understand the mind of God. They venture horizontally in order to lift themselves up to a higher plateau by discovering frameworks that apply to several subject areas at once.

By definition, one can not generalize if one is stuck inside a single silo of knowledge. But jumping around too many silos does not help either. Good public intellectuals dig deep enough into a subject area to bring their intuition to a level that is sufficient to make the necessary outside connections. Bad ones spread themselves too thin, and eventually become victims of gluttony.

As I explained in a previous blog post, science progresses via successful unifications. The banishment of generalists from academia has therefore slowed science down by drowning it in complete incrementalism. In the language of Freeman Dyson, today academia is breeding only “frogs”.

Birds fly high in the air and survey broad vistas of mathematics out to the far horizon. They delight in concepts that unify our thinking and bring together diverse problems from different parts of the landscape. Frogs live in the mud below and see only the flowers that grow nearby. They delight in the details of particular objects, and they solve problems one at a time.

Freeman Dyson - Birds and Frogs (Page 37)

Without the “birds” doing their synthesizing and abstracting, we can not see what the larger paradigm is evolving towards, and without this higher level map, we can not accelerate the right exploratory paths or cut off the wrong ones. More importantly, losing sight of the unity of knowledge creates an existential dullness that sooner or later wears down everyone involved in the pursuit of knowledge, including the academic intellectuals.

Consciousness discriminates, judges, analyzes, and emphasizes the contradictions. It's necessary work up to a point. But analysis kills and synthesis brings back to life. We must find out how to get everything back into connection with everything else.

- Carl Gustav Jung, as quoted in The Earth Has a Soul (Page 209)

True, academic intellectuals are occasionally allowed to engage in generalization, but they are forbidden from obsessing too much about it and venturing too far away from their area of expertise. This prevents them from making the fresh connections that could unlock their long-standing problems. That is why most paradigm shifts in science and technology are initiated by outsiders who can bring brand new analogies to the field. (Generalists are also great at taming the excessive enthusiasm of specialists, who often over-promote the few things they are so personally invested in.) For instance, both Descartes and Darwin were revolutionaries who addressed the general public directly (and eloquently), without any university affiliation.

Big picture generalities are also exactly what the public cares about:

There are those who think that an academic who sometimes writes for a popular audience becomes a generalist on those occasions, but this is a mistaken view. A specialist may make do as a popularizer by deploying his specialized education with a facile style. A generalist must write from the full breadth of a general education that has not ended at graduation or been confined to a discipline. If I may judge from my ten years' experience in book publishing, what the average humanities academic produces when s/he sets out to write for "the larger audience" is a popularizer's restatement of specialized knowledge, while what the larger audience responds to is something quite different: It is specialized knowledge sharply reconceptualized and resituated in an enlarged context.

Jack Miles - Three Differences Between an Academic and an Intellectual

While nitty-gritty details change all the time, the big picture evolves very slowly. (This is related to the fact that it becomes harder to say new things as one moves higher up in generality.) Hence the number of good public intellectuals needed by society is actually not that great. But finding and nurturing one is not easy, for the same reason why finding and nurturing a potential leader is not easy.

Impostors are another problem. While bad academic intellectuals are quickly weeded out by their community, bad public intellectuals are not, because they do not form a true community. Their ultimate judge is the public, whose quality determines who becomes popular, in a fashion not too dissimilar to how the quality of leaders correlates with the quality of their followers.


Public Intellectuals as Metaphor Artists
Communicating the Big Picture to Us

As discussed in a previous blog post, generalizations happen through analogies and result in further abstraction. Metaphors, on the other hand, result in further concretization through the projection of the familiar onto the unfamiliar. That is why they are such great tools for communication, and why it is often pedagogically necessary to follow a generalization up with a metaphor to ground the abstract in the familiar.

While academic intellectuals write for each other, a public intellectual writes for the greater public and therefore has no choice but to employ spot-on metaphors to deliver his message. He is lucky in the sense that, compared to the academic intellectual, he has knowledge of many more fields and therefore enjoys a larger metaphor reservoir.

Bad academic intellectuals mistake depth for obscurity, as if something expressed with clarity can not be of any significance. They are often proud of being understood by only a few other people, and invent unnecessary jargon to keep the generalists at bay and to create an air of originality. (Of course, an extra bit of jargon is inevitable, since as one zooms in, more phenomena become distinguishable and worth attaching new names to.)

The third difference between an intellectual and an academic is the relative attachment of each to writing as a fine rather than a merely practical art. "If you happen to write well," Gustave Flaubert once wrote, "you are accused of lacking ideas."

… An academic is concerned with substance and suspicious of style, while an intellectual is suspicious of any substance that purports to transcend or defy style.

Jack Miles - Three Differences Between an Academic and an Intellectual

While academic intellectuals obsess about discovery and originality, public intellectuals obsess about delivery and clarity.

  • Academic intellectuals worry a lot about attaching their names to new ideas. So, in some sense, it is natural for them to lack lucidity. After all, it takes a long time for a newborn idea to mature and find its right spot in the grand tapestry of knowledge.

“To make a discovery is not necessarily the same as to understand a discovery.”

- Abraham Pais

It is also not surprising that professors prefer to teach from (and refer to) the original texts rather than the clearer secondary literature. Despite the fact that only a minuscule number of students end up staying in academia, professors design their courses as if the goal is to train future professors who, like themselves, will value originality over clarity. Students are asked to trace all ideas back to their originators, and are given the implicit guarantee that they too will be treated with the same respect if they successfully climb the greasy pole.

It is actually quite important for a future academician to witness the chaotic process behind an idea’s birth (inside a single mind) and its subsequent maturation (out in the community). In formalistic subjects like mathematics and physics, where ideas reach their peak clarity at a much faster speed, the pedagogical pressure to choose the conceptual route (rather than the historical route) for teaching is great. So the students end up reading only the most polished material, never referring back to the original papers which contain at least some traces of battle scars. They are accelerated to the research frontier, but with much less of an idea about what it actually means to be at the frontier. Many, expecting a clean-cut experience, leave academia disillusioned.

  • Public intellectuals do not get their names attached to certain specific discoveries. Their main innovation lies in building powerful bridges and coining beautiful metaphors, and ironically, the better they are, the more quickly they lose ownership over their creations.

Effective metaphors tend to be easily remembered and transmitted. This is, in fact, what enables them to become clichés.
 
James Geary - I is an Other (Page 122)

Hence, while academic intellectuals are more like for-profit companies engaged in extractable value creation, public intellectuals are more like non-profit companies engaged in diffused value creation. They inspire new discoveries rather than make new discoveries themselves. In other words, they are more like artists, who enrich our lives in all sorts of immeasurable ways, and get paid practically nothing in return.

All ideas, including those generated by academic intellectuals, either eventually die out, or pass the test of time and prove to be so foundational that they reach their final state of maturity by becoming totally anonymized. Information wants to be free, not just in the sense of being accessible, but also in the sense of breaking the chains tied to its originator. No intellectual can escape this fact. For public intellectuals, the anonymization process happens much faster, because the public does not really care much about who originated what. What about the public intellectuals themselves, do they really care? Well, good ones do not, because their main calling has always been public impact (rather than private gain) anyway.

The dichotomy between those who obsess about “discovery and originality” and those who obsess about “delivery and clarity” has been very eloquently characterized by Rota within the sphere of mathematics, as the dichotomy between problem solvers and theorizers:

To the problem solver, the supreme achievement in mathematics is the solution to a problem that had been given up as hopeless. It matters little that the solution may be clumsy; all that counts is that it should be the first and that the proof be correct. Once the problem solver finds the solution, he will permanently lose interest in it, and will listen to new and simplified proofs with an air of condescension suffused with boredom.

The problem solver is a conservative at heart. For him, mathematics consists of a sequence of challenges to be met, an obstacle course of problems. The mathematical concepts required to state mathematical problems are tacitly assumed to be eternal and immutable.

... To the theorizer, the supreme achievement of mathematics is a theory that sheds sudden light on some incomprehensible phenomenon. Success in mathematics does not lie in solving problems but in their trivialization. The moment of glory comes with the discovery of a new theory that does not solve any of the old problems but renders them irrelevant.

The theorizer is a revolutionary at heart. Mathematical concepts received from the past are regarded as imperfect instances of more general ones yet to be discovered. Mathematical exposition is considered a more difficult undertaking than mathematical research.

Gian-Carlo Rota - Problem Solvers and Theorizers

Public Intellectuals as Spiritual Leaders
Leading Us in the Right Direction

Question: Who are our greatest metaphor artists?
Answer: Our spiritual leaders, of course.

Reading sacred texts too literally is a common rookie mistake. They are the most metaphor-dense texts produced by human beings, and this vagueness is a feature, not a bug.

  • Longevity. Thanks to their deliberately vague language, these texts have much higher chances of survival by being open to continuous re-interpretation through generations.

  • Mobilization. Metaphors are politically subversive devices, useful for crafting simple illuminating narrations that can mobilize masses.

“A good metaphor is something even the police should keep an eye on."

- Georg Christoph Lichtenberg

  • Charisma. Imagine a sacred text written like a dry academic paper, referring to other authors for trivially-obvious facts and over-contextualizing minute shit. Who would be galvanized by that? Nobody of course. Charismatic people anonymize mercilessly, and both fly high and employ plenty of metaphors.

Question: Who are our most obsessive generalists?
Answer: Again, our spiritual leaders.

Spiritual people care about the big picture, literally the biggest picture. They want to probe the mind of God, and as we explained in a previous post, the only way to do that is through generalizations. This quest for generalization is essentially what makes spiritual leaders so humble, visionary and wise.

  • Humble. It suffices to recall the second Jack Miles quote: “Knowing more and more about less and less tends to breed confidence. Knowing less and less about more and more tends to breed humility.”

  • Visionary. Morgan Housel says that “the further back in history you look, the more general your takeaways should be.” I agree a hundred percent. In fact, the dual statement is also correct: the further you venture into the future, the more general your predictions should be. In other words, the only way to venture into the far future is by looking at big historical patterns and transforming general takeaways into general predictions. That is why successful visionaries and paradigm shifters are all generalists. (There is now an entire genre of academicians trying to grasp why academicians are so bad at long-term forecasts. In a nutshell, experts beat generalists in short-term forecasting through the incorporation of domain-specific insights, but this advantage turns into a disadvantage when it comes to making long-term forecasts because, in the long run, no domain can be causally isolated from another.)

Kuhn shows that when a scientific revolution is occurring, books describing the new paradigm are often addressed to anyone who may be interested. They tend to be clearly written and jargon free, like Darwin's Origin of Species. But once the revolution becomes mainstream, a new kind of scientist emerges. These scientists work on problems and puzzles within the new paradigm they inherit. They don't generally write books but rather journal articles, and because they communicate largely with one another, a specialized jargon develops so that even colleagues in adjacent fields cannot easily understand them. Eventually the new paradigm becomes the new status quo.

Norman Doidge - The Brain’s Way of Healing (Page 354)

  • Wise. The dichotomy between academic and public intellectuals mirrors the dichotomy between genius and wisdom. Sudden flashes of insight always help, but there is no short-cut to the big picture. You need to accumulate a ton of experience across different aspects of life. Academic culture, on the other hand, is genius-driven and revolves around solving specific hard technical problems. That is why academic intellectuals get worse as they age, while public intellectuals get better. This, by the way, poses a huge problem for the future of academia:

As our knowledge deepens and widens, so it will take longer to reach a frontier. This situation can be combated only by increased specialization, so that a progressively smaller part of the frontier is aimed at, or by lengthening the period of training and apprenticeship. Neither option is entirely satisfactory. Increased specialization fragments our understanding of the Universe. Increased periods of preliminary training are likely to put off many creative individuals from embarking upon such a long path with no sure outcome. After all, by the time you discover that you are not a successful researcher, it may be too late to enter many other professions. More serious still is the possibility that the early creative period of a scientist's life will be passed by the time he or she has digested what is known and arrived at the research frontier.

John D. Barrow - Impossibility (Page 108)

Question: Who are our best superhuman aspirants?
Answer: Yet again, our spiritual leaders.

I guess this answer requires no further justification, since most people treat their spiritual leaders as superhumans anyway. But do they treat them as superhuman in the same sense as we have defined the term? Now that is a good question!

Remember, we had defined superhuman as an entity possessing a superconsciousness that can specialize a general purpose consciousness into any form necessitated by the changing circumstances. In other words, a superhuman can simulate any form of human consciousness on demand. According to Carl Gustav Jung, Christ was close to such an idealization.

For Jung, Christianity represented a necessary stage in the evolution in consciousness, because the divine image of Christ represented a more unified image of the autonomous human self than did the multiplicity of earlier pagan divinities.

David Fideler - Restoring the Soul of the World (Page 79)

Jesus also seems to have transcended the social norms of his times, and showcased the typical signs of insanity that come with the territory, due to the internalization of too much multiplicity in the psychic domain.

… all great spiritual teachers, including Jesus and Buddha, challenged social norms in ways that could have been judged insane. Throughout the history of spirituality, moreover, some spiritual adepts have acted in especially unconventional, even shocking ways. This behavior is called holy madness, or crazy wisdom.

Although generally associated with Hinduism and Buddhism, crazy wisdom has cropped up in Western faiths, too. After Saul became Saint Paul, he preached that a true Christian must “become a fool that he may become wise.” Paul’s words inspired a Christian sect called Fools for Christ’s Sake, members of which lived as homeless and sometimes naked nomads.

John Horgan - Rational Mysticism (Page 53)

Was Jesus some sort of an early, imperfect, carbon-based version of the newly emerging silicon-based hive mind? A bizarre question indeed! But what is clear is that any superhuman we can create out of flesh, no matter how imperfect, is our best hope for disciplining the global technological layer that is now emerging all over us and controlling us to the point of suffocation.

Technology is a double-edged sword with positive and negative aspects.

  • Positive. Gives prosperity. Increases creative capabilities.

  • Negative. Takes away freedom. Increases destructive capabilities.

What is strange is that we are not allowed to stop its progression. (This directionality is a specific manifestation of the general directionality of evolution towards greater complexity.) There are two main reasons.

  • Local Reason. If you choose not to develop technology yourself, then someone else will, and that someone else will eventually use their newly discovered destructive capabilities to engulf you.

  • Global Reason. Even if we somehow manage to stop developing technology in a coordinated fashion, we will eventually be punished for this decision when we get hit by the next cosmic catastrophe and perish like the dinosaurs for not building the right defensive measures.

So we basically need to balance power with control. And, just as all legal frameworks rest on moral ones, all forms of self-governance ultimately rest upon spiritual foundations. As pointed out in an earlier post, technocratic leadership alone will eventually drive us towards self-destruction.

Today, what we desperately need is a new generation of spiritual leaders who can weave for us a new big-picture mythology, one that conforms to the latest findings of science. (Remember, as explained in an earlier post, science helps religion discover its inner core by both limiting the domain of exploration and increasing the efficacy of exploration.) Only such a mythology can convince the new breed of meritocratic elites to discipline themselves and keep tabs on our machines, and galvanize the necessary public support to give these elites sufficient breathing room to tackle the difficult challenges.

Of course, technocratic leadership is exactly what academic intellectuals empower and spiritual leadership is exactly what public intellectuals stand for. (Technocratic leaders may be physically distant, operating from far away secluded buildings, but they are actually very easy to relate to on a mental level. Spiritual leaders on the other hand are physically very close, leading from the ground so to speak, but they are operating from such an advanced mental level that they are actually very hard to relate to. That is why good spiritual leaders are trusted while good technocratic leaders are respected.)

As technology progresses and automates more and more capabilities away from us, the chasm between the two types of intellectuals will widen.

  • Machines have already become quite adept at vertical thinking and have started eating into the lower extremities of the knowledge tree, forcing the specialists (i.e. academic intellectuals) to collaborate with them. (Empowerment by the machines is partially ameliorating the age problem we talked about.) Although machines look like tools at the moment, they will eventually become the dominant partner, making their human partners strive more and more to preserve their relevancy.

  • Despite being highly adaptable dilettantes, public intellectuals are not safe either. As the machines become more adept at lateral thinking, they will feel pressure from below, just as their academic counterparts are feeling pressure from above.

Of course, our entire labor force (not only the intellectuals) will undergo the same polarization process and thereby split into two discrete camps with a frantic and continually diminishing gray zone in between:

  • Super generalists who are extremely fluid.

  • Super specialists who are extremely expendable.

This distinction is analogous to the distinction between generalized stem cells and specialized body cells, which are not even allowed to replicate.

“The spread of computers and the Internet will put jobs in two categories. People who tell computers what to do, and people who are told by computers what to do.”

- Marc Andreessen

In a sense, Karl Marx (who thought economic progress would allow everyone to be a generalist) and Herbert Spencer (who thought economic progress would force everyone to become a specialist) were both partially right.

We need generalist leaders with range to exert control and point us (and increasingly our machines) in the right direction, and we need specialist workers with depth to generate growth and do the actual work. Breaking this complementary balance, by letting academic intellectuals take over the world of ideas and technocratic leaders take over the world of action, amounts to being on a sure path to extinction via a slow loss of fluidity and direction.

analogies vs metaphors

“The existence of analogies between central features of various theories implies the existence of a general abstract theory which underlies the particular theories and unifies them with respect to those central features.”
- Eliakim Hastings Moore

Conceptual similarities manifest themselves as analogies, where one recognizes that two structures X and Y have a common meaningful core, say A, which can be pulled up to a higher level. The resulting relationship is symmetric in the sense that the structure A specializes to both X and Y. In other words, one can say either “X is like Y via A” or “Y is like X via A”.

Analogy.png

The analogy gets codified in the more general structure A, which in turn is mapped back onto X and Y. (I say “onto” because A represents a bigger set than both X and Y.) Discovering A is revelatory in the sense that one recognizes that X and Y are special instances of a more general phenomenon, not disparate structures.
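To make the schema concrete, here is a minimal Python sketch of my own (not part of the original argument), taking the monoid structure as the general A, integers under addition as X, and strings under concatenation as Y:

```python
from abc import ABC, abstractmethod

# A: the general structure that both X and Y specialize.
class Monoid(ABC):
    @abstractmethod
    def empty(self):
        ...

    @abstractmethod
    def combine(self, u, v):
        ...

# X: integers under addition. "X is like Y via A."
class IntAddition(Monoid):
    def empty(self):
        return 0

    def combine(self, u, v):
        return u + v

# Y: strings under concatenation. "Y is like X via A."
class StringConcat(Monoid):
    def empty(self):
        return ""

    def combine(self, u, v):
        return u + v
```

Discovering Monoid is the revelatory step: addition and concatenation stop being disparate structures and become two special instances of one general phenomenon.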

Metaphors play a similar role as analogies. They too increase our total understanding, but unlike analogies, they are not symmetric in nature.

Say there are two structures X and Y, where X is more complex but also more familiar than Y. (In practice, X often happens to be an object we have an intuitive grasp of due to repeated daily interaction.) Discovering a metaphor, say M, involves finding a way of mapping X onto Y. (I say “onto” because X - via M - ends up subsuming Y inside its greater complexity.)

Metaphor.png

The explanatory effect comes from M pulling Y up to the familiar territory of X. All of a sudden, in an almost magical fashion, Y too starts to feel intuitive. Many paradigm shifts in the history of science were due to such discrete jumps. (e.g. Maxwell characterizing the electromagnetic field as a collection of wheels, pulleys and fluids.)

Notice that you want your analogy A to be as faithful as possible, capturing as many of the essential features of X and Y as it can. If you generalize too much, you will end up with a useless A with no substance. Similarly, for each given Y, you want your metaphor pair (X,M) to be as tight as possible, while not letting X stray away from the domain of the familiar.

You may be wondering what happens if we dualize our approaches in the above two schemes.

  • Analogies. Instead of trying to rise above the pair (X,Y), why not try to go below it? In other words, why not consider specializations that both X and Y map onto, rather than focus on generalizations that map onto X and Y?

  • Metaphors. Instead of trying to approach Y from above, why not try approach it from below? In other words, why not consider metaphors that map the simple into the complex rather than focus on those that map the complex onto the simple?

The answer to both questions is the same: We do not, because the dual constructions do not require any ingenuity, and even if they turn out to be very fruitful, the outcomes do not illuminate the original inputs.

Let me expand on what I mean.

  • Analogies enhance our analytic understanding of the world of ideas. They are tools of the consciousness, which can not deal with the concrete (specialized) concepts head on. For instance, since it is insanely hard to study integers directly, we abstract and study more general concepts such as commutative rings instead. (Even then the challenge is huge. You could devote your whole life to ring theory and still die as confused as a beginner.)

    In the world of ideas, one can easily create more specialized concepts by taking conjunctions of various X’s and Y’s. Studying such concepts may turn out to be very fruitful indeed, but it does not further our understanding of the original X’s and Y’s. For instance, the study of Lie groups is exceptionally interesting, but it does not further our understanding of manifolds or groups. (A toy sketch of such a conjunction follows this list.)

  • Metaphors enhance our intuitive understanding of the world of things. They are tools of the unconsciousness, which is familiar with what is more immediate, and what is more immediate also happens to be what is more complex. Instruments allow us to probe what is remote from experience, namely the small and the big, and both turn out to be stranger but also simpler than the familiar stuff we encounter in our immediate daily lives.

    • What is smaller than us is simpler because it emerged earlier in the evolutionary history. (Compare atoms and cells to humans.)

    • What is bigger than us is simpler because it is an inanimate aggregate rather than an emergent life. (Those galaxies may be impressive, but their complexity pales in comparison to ours.)

    In the world of things, it is easy to come up with metaphors that map the simple into the complex. For instance, with every new technological paradigm shift, we go back to biology (whose complexity is way beyond anything else) and attack it with the brand new metaphor of the emerging Zeitgeist. During the industrial revolution we conceived the brain as a hydraulic system, which in retrospect sounds extremely naive. Now, during the digital revolution, we are conceiving it as - surprise, surprise - a computational system. These may be productive endeavors, but the discovery of the trigger metaphors itself is a no-brainer.
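To illustrate the Lie group remark above, here is a toy sketch of my own: a conjunction simply glues two general structures together, and however interesting the glued object is, it illuminates neither of its parts.

```python
# Two general structures from the world of ideas...
class Manifold:
    def chart_at(self, point):
        raise NotImplementedError

class Group:
    def op(self, a, b):
        raise NotImplementedError

    def inverse(self, a):
        raise NotImplementedError

# ...and their conjunction: a Lie group is simultaneously both.
# Constructing it requires no ingenuity, and studying it teaches
# us nothing new about Manifold or Group themselves.
class LieGroup(Manifold, Group):
    pass
```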

Now is a good time to make a few remarks on a perennial mystery, namely the mystery of why metaphors work at all.

It is easy to understand why analogies work since we start off with a pair of concepts (X,Y) and use it as a control while moving methodically upwards towards a general A. In the case of metaphors, however, we start off with a single object Y, and then look for a pair (X,M). Why should such a pair exist at all? I believe the answer lies in a combination of the following two quotes.

"We can so seldom declare what a thing is, except by saying it is something else."
- George Eliot

“Subtle is the Lord, but malicious He is not.”
- Albert Einstein

Remember, when Einstein characterized gravitation as curvature, he did not really tell us what gravity is. He just stated something unfamiliar in terms of something familiar. This is how all understanding works. Yes, science is progressing, but all we are doing is just making a bunch of restatements with no end in sight. Absolute truth is not accessible to us mere mortals.

“Truths are illusions which we have forgotten are illusions — they are metaphors that have become worn out and have been drained of sensuous force, coins which have lost their embossing and are now considered as metal and no longer as coins.”
- Friedrich Nietzsche

The reason why we can come up with metaphors of any practical significance is because nature subtly keeps recycling the same types of patterns in different places and at different scales. This is what Einstein means when he says that the Lord is not malicious, and is why nature is open to rational inquiry in the first place.

Unsurprisingly, Descartes himself, the founder of rationalism, was also a big believer in the universality of patterns.

Descartes followed this precept by liberal use of scaled-up models of microscopic physical events. He even used dripping wine vats, tennis balls, and walking-sticks to build up his model of how light undergoes refraction. His statement should perhaps also be taken as evidence of his belief in the universality of certain design principles in the machinery of Nature which he expects to reappear in different contexts. A world in which everything is novel would require the invention of a new science to study every phenomenon. It would possess no general laws of Nature; everything would be a law unto itself.

John D. Barrow - Universe That Discovered Itself (Page 107)

Of course, universality does not make it any easier to discover a great metaphor. It still requires a special talent and a trained mind to intuit one out of the vast number of possibilities.

Finding a good metaphor is still more of an art than a science. (Constructing a good analogy, on the other hand, is more of a science than an art.) Perhaps one day computers will be able to completely automate the search process. (Currently, as I pointed out in a previous blog post, they are horrible at the horizontal type of thinking required for spotting metaphors.) This will result in a disintermediation of mathematical models. In other words, computers will simply map reality back onto itself and push us out of the loop altogether.

Let us wrap up all the key observations we made so far in a single table:

|  | Analogies | Metaphors |
| --- | --- | --- |
| Relationship | Symmetric: A specializes to both X and Y | Asymmetric: M maps X onto Y |
| Direction | Abstraction (generalizing upwards) | Concretization (projecting the familiar onto the unfamiliar) |
| Faculty | Consciousness | Unconsciousness |
| Domain | World of ideas | World of things |
| Quality criterion | Faithfulness of A | Tightness of the pair (X, M) |
| Discovery | More science than art | More art than science |

Now let us take a brief detour into metaphysics before we have one last look at the above dichotomy.

Recall the epistemology-ontology duality:

  • An idea is said to be true when every body obeys it.

  • A thing is said to be real when every mind agrees to it.

This is a slightly different formulation of the good old mind-body duality.

  • Minds are bodies experienced from inside.

  • Bodies are minds experienced from outside.

While minds and bodies are dynamic entities evolving in time, true ideas and real things reside inside a static Platonic world.

  • Minds continuously shuffle through ideas, looking for the true ones, unable to hold onto any for long. Nevertheless, truth always seems to be within reach, like a carrot dangling in front.

  • Minds desperately attach names to phenomena, seeking permanency within the constant flux. Whatever they refer to as a real thing eventually turns out to be unstable and ceases to be.

Hence, the dichotomy between true ideas and real things can be thought of as the (static) Being counterpart of the mind-body duality, which resides in (dynamic) Becoming. In fact, it would not be inappropriate to call the totality of all true ideas the God-mind and the totality of all real things the God-body.

Anyway, enough metaphysics. Let us now go back to our original discussion.

In order to find a good metaphor, our minds scan through the X’s that we are already experientially familiar with. The hope is to be able to pump up our intuition about a thing through another thing. Analogies on the other hand help us probe the darkness, and bring into light the previously unseen. Finding a good A is like pulling a rabbit out of a hat, pulling something that was out-of-experience into experience. The process looks as follows.

  1. First you encounter a pair of concepts (X,Y) in the shared public domain, literally composed of ink printed upon a paper or pixels lighting up on a screen.

  2. Your mind internalizes (X,Y) by turning it back to an idea form, hopefully in the fashion that was intended by its originator mind.

  3. You generalize (X,Y) to A within the world of ideas through careful reasoning and aesthetic guidance.

  4. You share A with other minds by turning it into a thing, expressed in a certain language, on a certain medium. (An idea put in a communicable form is essentially a thing that can be experienced by all minds.)

  5. The end result is one more useful concept in the shared public domain.

Analogies lift the iceberg, so to speak, by bringing completely novel ideas into existence and revealing more of the God-mind. In fact, the entirety of our technology, including the technology of reasoning via analogies, can be viewed as a tool for accelerating the transformation of ideas into things. We, and other intermediary minds like us, are the means through which God is becoming more and more aware of itself.

Remember, as time progresses, the evolutionary entities (i.e. minds) decrease in number and increase in size and complexity. Eventually, they get

  • so good at modeling the environment that their ideas start to resemble more and more the true ideas of the God-mind, and

  • so good at controlling the environment that they become increasingly indistinguishable from it, and the world of things starts to acquire a thoroughly mental character.

In the limit, when the revelation of the God-mind is complete, the number of minds finally dwindles down to one, and the One, now synonymous with the God-mind, dispenses with analogies or metaphors altogether.

  • As nothing seems special any more, the need to project the general onto the special ceases.

  • As nothing feels unfamiliar any more, the need to project the familiar onto the unfamiliar ceases.

Of course, this comes at the expense of time stopping altogether. Weird, right? My personal belief is that revelation will never reach actual completion. Life will hover over the freezing edge of permanency for as long as it can, and at some point it will shatter in such a spectacular fashion that it will have to begin from scratch all over again, just as it did last time around.

hypothesis vs data driven science

Science progresses in a dualistic fashion. You can either generate a new hypothesis out of existing data and conduct science in a data-driven way, or generate new data for an existing hypothesis and conduct science in a hypothesis-driven way. For instance, when Kepler was looking at the astronomical data sets to come up with his laws of planetary motion, he was doing data-driven science. When Einstein came up with his theory of General Relativity and asked experimenters to verify the theory’s prediction for the anomalous rate of precession of the perihelion of Mercury's orbit, he was doing hypothesis-driven science.
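As a minimal sketch of the data-driven mode (my own illustration, using present-day tools rather than Kepler’s): a hypothesis-free regression on raw orbital data recovers his third law.

```python
import numpy as np

# Semi-major axis (AU) and orbital period (years) for the six
# classical planets, Mercury through Saturn - the kind of data
# Kepler stared at.
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])

# Fit log T = k log a + c without assuming any physical model.
k, c = np.polyfit(np.log(a), np.log(T), 1)
print(f"k = {k:.3f}")  # ~1.5, i.e. T^2 proportional to a^3 (Kepler's third law)
```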

Similarly, technology can be problem-driven (the counterpart of “hypothesis-driven” in science) or tool-driven (the counterpart of “data-driven” in science). When you start with a problem, you look for what kind of (existing or not-yet-existing) tools you can throw at the problem, and in what kind of combination. (This is similar to thinking about what kind of experiments you can do to generate relevant data to support a hypothesis.) Conversely, when you start with a tool, you try to find a use case where you can deploy it. (This is similar to starting off with a data set and digging around to see what kind of hypotheses you can extract out of it.) Tool-driven technology development is much more risky and stochastic. It is taboo for most technology companies, since investors do not like random tinkering and prefer funding problems with high potential economic value and entrepreneurs who “know” what they are doing.

Of course, new tools allow you to ask new kinds of questions of existing data sets. Hence, problem-driven technology (by developing new tools) leads to more data-driven science. And this is exactly what is happening now, at a massive scale. With the development of cheap cloud computing (and storage) and deep learning algorithms, scientists are equipped with some very powerful tools to attack old data sets, especially in complex domains like biology.


Higher Levels of Serendipity

One great advantage of data-driven science is that it involves tinkering and “not really knowing what you are doing”. This leads to fewer biases and more serendipitous connections, and thereby to the discovery of more transformative ideas and hitherto unknown interesting patterns.

Hypothesis-driven science has a direction from the beginning. Hence surprises are hard to come by, unless you have an exceptionally creative intuition. For instance, the theory of General Relativity was based on one such intuitive leap by Einstein. (There has not been such a great leap since then, which shows how rare they are.) Quantum Mechanics, on the other hand, was literally forced by experimental data. It was so counterintuitive that people refused to believe it. All they could do was turn their intuition off and listen to the data.

Previously, data sets were small enough that scientists could literally eyeball them. Today this is no longer possible. That is why scientists now need computers, algorithms and statistical tools to help them decipher new patterns.

Governments do not give money to scientists so that they can tinker around and do whatever they want. So a scientist applying for a grant needs to know what he is doing. This forces everyone to be in a hypothesis-driven mode from the beginning and thereby leads to fewer transformative ideas in the long run. (Hat tip to Mehmet Toner for this point.)

Science and technology are polar opposite endeavors. Governments funding science like investors fund technology is a major mistake, and also an important reason why today some of the most exciting science is being done inside closed private companies rather than open academic communities.


Less Democratic Landscape

There is another good reason why the best scientists are leaving academia. You need good quality data to do science within the data-driven paradigm, and since data is so easily monetizable, the largest data sets are being generated by private companies. So it is not surprising that the most cutting-edge research in fields like AI is being done inside companies like Google and Facebook, which also provide the necessary compute power to play around with these data sets.

While hypothesis generation gets better when it is conducted in a decentralized, open manner, the natural tendency of data is to be centralized under one roof where it can be harmonized and maintained consistently at a high quality. As they say, “data has gravity”. Once you pass certain critical thresholds, data starts generating strong positive feedback effects and thereby attracts even more data. That is why investors love it. Using smart data strategies, technology companies can build a moat around themselves and render their business models a lot more defensible.

In a typical private company, what data scientists do is throw thousands of different neural networks at some massive internal data sets and simply observe which one gets the job done best. This of course is empiricism in its purest form, no different from blindly screening millions of compounds during a drug development process. As they say, just throw it against the wall and see if it sticks.
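To make this concrete, here is a minimal sketch of such blind screening; the data set and the architecture grid below are invented purely for illustration:

```python
# A minimal sketch of "throw models at the data and see what sticks".
# The data set and the architecture grid are invented for illustration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                      # stand-in for a massive internal data set
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=500)

# Blindly screen several architectures; keep whichever scores best.
candidates = [MLPRegressor(hidden_layer_sizes=h, max_iter=2000, random_state=0)
              for h in [(8,), (32,), (64, 64), (128, 64, 32)]]
scores = [cross_val_score(m, X, y, cv=3).mean() for m in candidates]

best = candidates[int(np.argmax(scores))]
print("winner:", best.hidden_layer_sizes)           # pure empiricism: no theory of why it won
```

Nothing in this loop explains why the winner wins; it only records that it does.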

This brings us to a major problem about big-data-driven science.


Lack of Deep Understanding

There is now a better way. Petabytes allow us to say: "Correlation is enough." We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

Chris Anderson - The End of Theory

We can not understand the complex machine learning models we are building. In fact, we train them the same way one trains a dog. That is why they are called black-box models. For instance, when the stock market experiences a flash crash we blame the algorithms for getting into a stupid loop, but we never really understand why they do so.

Is there any problem with this state of affairs if these models get the job done, make good predictions and (even better) earn us money? Can scientists not adopt the same pragmatic attitude as technologists, focus only on results, and content themselves with the successful manipulation of nature, leaving true understanding aside? Are not the data sizes already too huge for human comprehension anyway? Why do we expect machines to be able to explain their thought processes to us? Perhaps they are the beginnings of the formation of a higher-level life form, and we should learn to trust them about the activities they are better at than us?

Perhaps we have been under an illusion all along and our analytical models have never really penetrated that deep into nature anyway?

Closed analytic solutions are nice, but they are applicable only for simple configurations of reality. At best, they are toy models of simple systems. Physicists have known for centuries that the three-body problem or three dimensional Navier-Stokes do not afford closed-form analytic solutions. This is why all calculations about the movement of planets in our solar system or turbulence in a fluid are performed by numerical methods using computers.

Carlos E. Perez - The Delusion of Infinite Precision Numbers
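To make the quote concrete, here is a minimal sketch of how such a calculation actually proceeds: the three-body equations of motion are integrated step by step, since no closed-form solution exists. Masses, units and initial conditions below are toy values.

```python
# A minimal sketch: the planar three-body problem has no closed-form solution,
# so we integrate the equations of motion numerically. Toy masses and initial values.
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0
m = np.array([1.0, 1.0, 1.0])

def rhs(t, s):
    pos, vel = s[:6].reshape(3, 2), s[6:].reshape(3, 2)
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * m[j] * d / np.linalg.norm(d) ** 3   # Newtonian gravity
    return np.concatenate([vel.ravel(), acc.ravel()])

s0 = np.array([-1.0, 0.0, 1.0, 0.0, 0.0, 0.0,      # positions (x, y) of the three bodies
                0.3, 0.5, 0.3, 0.5, -0.6, -1.0])   # velocities, chosen arbitrarily
sol = solve_ivp(rhs, (0.0, 10.0), s0, rtol=1e-9, atol=1e-9)
print(sol.y[:6, -1])                               # final positions, obtained purely numerically
```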

Is it a surprise that as our understanding gets more complete, our equations become harder to solve?

To illustrate this point of view, we can recall that as the equations of physics become more fundamental, they become more difficult to solve. Thus the two-body problem of gravity (that of the motion of a binary star) is simple in Newtonian theory, but unsolvable in an exact manner in Einstein’s Theory. One might imagine that if one day the equations of a totally unified field are written, even the one-body problem will no longer have an exact solution!

Laurent Nottale - The Relativity of All Things (Page 305)

It seems like the entire history of science is a progressive approximation to an immense computational complexity via increasingly sophisticated (but nevertheless quite simplistic) analytical models. This trend obviously is not sustainable. At some point we should perhaps just stop theorizing and let the machines figure out the rest:

In new research accepted for publication in Chaos, they showed that improved predictions of chaotic systems like the Kuramoto-Sivashinsky equation become possible by hybridizing the data-driven, machine-learning approach and traditional model-based prediction. Ott sees this as a more likely avenue for improving weather prediction and similar efforts, since we don’t always have complete high-resolution data or perfect physical models. “What we should do is use the good knowledge that we have where we have it,” he said, “and if we have ignorance we should use the machine learning to fill in the gaps where the ignorance resides.”

Natalie Wolchover - Machine Learning’s ‘Amazing’ Ability to Predict Chaos
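The hybridization Ott describes can be sketched in miniature. The actual research uses reservoir computing on chaotic PDEs like Kuramoto-Sivashinsky; the toy below, with an invented system and an off-the-shelf regressor, only shows the general flavor of the idea: keep the (imperfect) physical model where we have knowledge, and let a learned model fill in the residual ignorance.

```python
# A toy sketch of hybrid prediction: an imperfect physical model plus a learned
# correction for the residual. System and regressor are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0 * np.pi, size=1000)
truth = np.sin(x) + 0.3 * np.sin(3.0 * x)          # "reality"
physics = np.sin(x)                                # the good knowledge we have

# Machine learning fills in only the gap where the ignorance resides.
residual_model = RandomForestRegressor(n_estimators=100, random_state=0)
residual_model.fit(x.reshape(-1, 1), truth - physics)

x_test = np.linspace(0.0, 2.0 * np.pi, 5)
hybrid = np.sin(x_test) + residual_model.predict(x_test.reshape(-1, 1))
print(hybrid - (np.sin(x_test) + 0.3 * np.sin(3.0 * x_test)))   # small residual errors
```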

Statistical approaches like machine learning have often been criticized for being dumb. Noam Chomsky has been especially vocal about this:

You can also collect butterflies and make many observations. If you like butterflies, that's fine; but such work must not be confounded with research, which is concerned to discover explanatory principles.

- Noam Chomsky as quoted in Colorless Green Ideas Learn Furiously

But these criticisms are akin to calling reality itself dumb, since what we feed into the statistical models are basically virtualized fragments of reality. Analytical models conjure up abstract epiphenomena to explain phenomena, while statistical models use phenomena to explain phenomena and turn reality directly onto itself. (The reason why deep learning is so much more effective than its peers among machine learning models is that it is hierarchical, just like reality itself.)

This brings us to the old dichotomy between facts and theories.


Facts vs Theories

Long before the computer scientists came onto the scene, there were prominent humanists (and historians) fiercely defending fact against theory.

The ultimate goal would be to grasp that everything in the realm of fact is already theory... Let us not seek for something beyond the phenomena - they themselves are the theory.

- Johann Wolfgang von Goethe

Reality possesses a pyramid-like hierarchical structure. It is governed from the top by a few deep high-level laws, and manifested in its utmost complexity at the lowest phenomenological level. This means that there are two strategies you can employ to model phenomena.

  • Seek the simple. Blow your brains out, discover some deep laws and run simulations that can be mapped back to phenomena.

  • Bend the complexity back onto itself. Labor hard to accumulate enough phenomenological data and let the machines do the rote work.

One approach is not inherently superior to the other, and both are hard in their own ways. Deep theories are hard to find, and good quality facts (data) are hard to collect and curate in large quantities. Similarly, a theory-driven (mathematical) simulation is cheap to set up but expensive to run, while a data-driven (computational) simulation (of the same phenomena) is cheap to run but expensive to set up. In other words, while a data-driven simulation is parsimonious in time, a theory-driven simulation is parsimonious in space. (Good computational models satisfy a dual version of Occam’s Razor. They are heavy in size, with millions of parameters, but light to run.)
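A toy contrast, with an invented example (a pendulum), makes the tradeoff tangible: the theory-driven simulator is a few lines of physics but pays at every query, while the data-driven emulator pre-pays with a bulky table and then answers by cheap lookup.

```python
# A toy contrast of the two simulation styles on a pendulum.
# Grid sizes and the system itself are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

def theory_driven(theta0, t):
    """Cheap to set up (one line of physics), expensive to run (integrate per query)."""
    sol = solve_ivp(lambda _, s: [s[1], -np.sin(s[0])], (0.0, t), [theta0, 0.0],
                    dense_output=True)
    return sol.sol(t)[0]

# Data-driven emulator: expensive to set up (fill a table), heavy in space...
thetas = np.linspace(0.1, 1.5, 30)
times = np.linspace(0.1, 10.0, 50)
table = np.array([[theory_driven(th, t) for t in times] for th in thetas])

def data_driven(theta0, t):
    """...but light to run: two nearest-neighbor lookups."""
    return table[np.abs(thetas - theta0).argmin(), np.abs(times - t).argmin()]

print(theory_driven(0.8, 3.0), data_driven(0.8, 3.0))
```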

Some people try to mix the two philosophies, inject our causal models into the machines and enjoy the best of both worlds. I believe that this approach is fundamentally mistaken, even if it proves to be fruitful in the short run. Rather than biasing the machines with our theories, we should just ask them to economize their own thought processes and thereby come up with their own internal causal models and theories. After all, abstraction is just a form of compression, and when we talk about causality we (in practice) mean causality as it fits into the human brain. In the actual universe, everything is completely interlinked with everything else, and causality diagrams are unfathomably complicated. Hence, we should be wary of pre-imposing our theories on machines whose intuitive powers will soon surpass ours.

Remember that, in biological evolution, the development of unconscious (intuitive) thought processes came before the development of conscious (rational) thought processes. It should be no different for the digital evolution.

Side Note: We suffered an AI winter for mistakenly trying to flip this order and asking machines to develop rational capabilities before developing intuitional capabilities. When a scientist comes up with a hypothesis, it is a simple, effable distillation of an unconscious intuition which is of ineffable, complex statistical form. In other words, it is always “statistics first”. Sometimes the progression from the statistical to the causal takes place out in the open among a community of scientists (as happened in the smoking-causes-cancer research), but more often it just takes place inside the mind of a single scientist.


Continuing Role of the Scientist

Mohammed AlQuraishi, a researcher who studies protein folding, wrote an essay exploring a recent development in his field: the creation of a machine-learning model that can predict protein folds far more accurately than human researchers. AlQuraishi found himself lamenting the loss of theory over data, even as he sought to reconcile himself to it. “There’s far less prestige associated with conceptual papers or papers that provide some new analytical insight,” he said, in an interview. As machines make discovery faster, people may come to see theoreticians as extraneous, superfluous, and hopelessly behind the times. Knowledge about a particular area will be less treasured than expertise in the creation of machine-learning models that produce answers on that subject.

Jonathan Zittrain - The Hidden Costs of Automated Thinking

The role of scientists in the data-driven paradigm will obviously be different but not trivial. Today’s world-champions in chess are computer-human hybrids. We should expect the situation for science to be no different. AI is complementary to human intelligence and in some sense only amplifies the already existing IQ differences. After all, a machine-learning model is only as good as the intelligence of its creator.

He who loves practice without theory is like the sailor who boards ship without a rudder and compass and never knows where he may cast.

- Leonardo da Vinci

Artificial intelligence (at least in its current form) is like a baby. Either it can be spoon-fed data or it gorges on everything. But, as we know, what makes great minds great is what they choose not to consume. This is where the scientists come in.

Deciding what experiments to conduct and what data sets to use is no trivial task. Choosing which portion of reality to “virtualize” is an important judgment call. Hence all data efforts are inevitably hypothesis-laden and therefore non-trivially involve the scientist.

For 30 years quantitative investing started with a hypothesis, says a quant investor. Investors would test it against historical data and make a judgment as to whether it would continue to be useful. Now the order has been reversed. “We start with the data and look for a hypothesis,” he says.

Humans are not out of the picture entirely. Their role is to pick and choose which data to feed into the machine. “You have to tell the algorithm what data to look at,” says the same investor. “If you apply a machine-learning algorithm to too large a dataset often it tends to revert to a very simple strategy, like momentum.”

The Economist - March of the Machines

True, each data generation effort is hypothesis-laden and each scientist comes with a unique set of biases generating a unique set of judgment calls, but at the level of society, these biases eventually get washed out through (structured) randomization via sociological mechanisms and historical contingencies. In other words, unlike the individual, the society as a whole operates in a non-hypothesis-laden fashion, and eventually figures out the right angle. The role (and the responsibility) of the scientist (and the scientific institutions) is to keep this search period as short as possible by simply being smart about it, in a fashion that is not too different from how enzymes speed up chemical reactions by lowering activation energy costs. (A scientist’s biases are actually his strengths, since they implicitly contain lessons from eons of evolutionary learning. See the side note below.)

Side Note: There is a huge misunderstanding that evolution progresses via chance alone. Pure randomization is a sign of zero learning. Evolution, on the other hand, learns over time and embeds this knowledge in all complexity levels, ranging all the way from genetic to cultural forms. As the evolutionary entities become more complex, the search becomes smarter and the progress becomes faster. (This is how protein synthesis and folding happen incredibly fast within cells.) Only at the very beginning, in its simplest form, does evolution try out everything blindly. (Physics is so successful because its entities are so stupid and comparatively much easier to model.) In other words, the commonly raised argument against the possibility of evolution achieving so much based on pure chance alone is correct. As mathematician Gregory Chaitin points out, “real evolution is not at all ergodic, since the space of all possible designs is much too immense for exhaustive search”.

Another venue where scientists keep playing an important role is in transferring knowledge from one domain to another. Remember that there are two ways of solving hard problems: diving into the vertical (technical) depths and venturing across horizontal (analogical) spaces. Machines are horrible at venturing horizontally precisely because they do not get to the gist of things. (This was the criticism of Noam Chomsky quoted above.)

Deep learning is kind of a turbocharged version of memorization. If you can memorize all that you need to know, that’s fine. But if you need to generalize to unusual circumstances, it’s not very good. Our view is that a lot of the field is selling a single hammer as if everything around it is a nail. People are trying to take deep learning, which is a perfectly fine tool, and use it for everything, which is perfectly inappropriate.

- Gary Marcus as quoted in Warning of an AI Winter


Trends Come and Go

Generally speaking, there is always a greater appetite for digging deeper for data when there is a dearth of ideas. (Extraction becomes more expensive as you dig deeper, as in mining operations.) Hence, the current trend of data-driven science is partially due to the fact that scientists themselves have run out of sensible falsifiable hypotheses. Once the hypothesis space becomes rich again, the pendulum will inevitably swing back. (Of course, who will be doing the exploration is another question. Perhaps it will be the machines, and we will be doing the dirty work of data collection for them.)

As mentioned before, data-driven science operates stochastically in a serendipitous fashion and hypothesis-driven science operates deterministically in a directed fashion. Nature, on the other hand, loves to use both stochasticity and determinism together, since optimal dynamics reside - as usual - somewhere in the middle. (That is why there are tons of natural examples of structured randomness, such as Lévy flights, as sketched below.) Hence we should learn to appreciate the complementarity between data-drivenness and hypothesis-drivenness, and embrace the duality as a whole rather than trying to break it.
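For illustration, here is a minimal sketch contrasting a Lévy flight with a Brownian walk; the step distributions are invented toy choices:

```python
# A toy sketch: Brownian steps are homogeneous, Lévy steps are heavy-tailed,
# so rare huge jumps dominate the exploration. Distributions are invented choices.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
angles = rng.uniform(0.0, 2.0 * np.pi, n)
brownian_steps = np.abs(rng.normal(1.0, 0.1, n))   # all steps roughly the same size
levy_steps = rng.pareto(1.5, n) + 1.0              # mostly small, occasionally enormous

def span(steps):
    xy = np.cumsum(np.c_[steps * np.cos(angles), steps * np.sin(angles)], axis=0)
    return np.ptp(xy, axis=0)                      # extent of territory covered

print("Brownian span:", span(brownian_steps))
print("Levy span:    ", span(levy_steps))
```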


If you liked this post, you will also enjoy the older post Genius vs Wisdom where genius and wisdom are framed respectively as hypothesis-driven and data-driven concepts.

waves of decentralizations

Evolutionary dynamics always start off well-defined and centralized, but over time (without exception) mature and decentralize. Our own history is full of beautiful exemplifications of this fact. In historical order, we went through the following decentralization waves:

  • Science decentralized Truth away from the hegemony of Church.

  • Democracy decentralized Power.

  • Capitalism decentralized Wealth.

  • Social Media decentralized Fame away from the media barons.

Today, if you are not powerful, wealthy or famous, you have no one to blame but yourself. If you do not know the truth, you have no one to blame but yourself. Everything is accessible, at least in theory. This of course inflicts an immense amount of stress on the modern citizen. In a sense, life was a lot easier when there was not so much decentralization.

Note how important the social media revolution really was. Most people do not recognize the magnitude of change that has taken place in such a short period of time. In terms of structural importance, it is on the same scale as the emergence of democracy. We no longer distinguish a sophisticated judgment from an unsophisticated one. Along with “Every Vote Counts”, now we also have “Every Like Counts”.

Of course, the social media wave was built on another, even more fundamental decentralization wave, which is the internet itself. Together with the rise of the internet, communication became completely decentralized. Today, in a similar fashion, we are witnessing the emergence of blockchain technology, which is trying to decentralize trust by creating neutral trust nodes with no centralized authority behind them. For instance, you no longer need to be a central bank with a stamp of approval from the government to launch a currency. (Both the internet and blockchain undermine political authority and in particular render national boundaries increasingly irrelevant.)

Internet itself is an example of a design, where robustness to communication problems was a primary consideration (for those who don't remember, Arpanet was designed by DARPA to be a communication network resistant to nuclear attack). In that sense the Internet is extremely robust. But today we are being introduced to many other instances of that technology, many of which do not follow the decentralized principles that guided the early Internet, but are rather highly concentrated and centralized. Centralized solutions are almost by definition fragile, since they depend on the health of a single concentrated entity. No matter how well protected such central entity is, there are always ways for it to be hacked or destroyed.

Filip Piekniewski - Optimality, Technology and Fragility

As pointed out by Filip, evolution favors progression from centralization to decentralization because it functionally corresponds to a progression from fragility to robustness.

Also, notice that all of these decentralization waves initially overshoot due to the excitement caused by their novelty. That is why they are always criticized at first for good reasons. Eventually they all shed off their lawlessness, structurally stabilize, go completely mainstream and institutionalize themselves.

science vs technology

  • Science (as a form of understanding) gets better as it zooms out. Technology (as a form of service) gets better as it zooms in. Science progresses through unifications and technology progresses through diversifications.

  • Both science and technology progress like a jellyfish moves through the water, via alternating movements of contractions (i.e. unifications) and relaxations (i.e. diversifications). So neither science nor technology can be pictured as a simple linear trend of unification or diversification. Technology goes through waves of standardizations for the sake of achieving efficiency and de-standardizations for the sake of achieving a better fit. Progress happens because each new wave of de-standardization (magically) achieves a better fit than the previous wave, thanks to an intermittent period of standardization. The opposite happens in science, where each new wave of unification (magically) reaches a higher level of accuracy than the previous wave, thanks to an intermittent period of diversification.

  • Unification is easier to achieve in a single mind. Diversification is easier to achieve among many minds. That is why the scientific world is permeated by the lone genius culture and the technology world is permeated by the tribal team-work culture. Scientists love their offices, technologists love their hubs.

“New scientific ideas never spring from a communal body, however organised, but rather from the head of an individually inspired researcher who struggles with his problems in lonely thought and unites all his thought on one single point which is his whole world for the moment.”
- Max Planck

  • Being the originator of widely adopted scientific knowledge makes the originator powerful, while being the owner of privately kept technological knowledge makes the owner powerful. Hence, the best specimens of unifications quickly get diffused out of the confined boundaries of a single mind, and the best specimens of diversifications quickly get confined from the diffused atmosphere of many minds.

  • Unifiers, standardizers tend to be more masculine types who do not mind being alone. Diversifiers, de-standardizers tend to be more feminine types who can not bear being alone. That is why successful technology leaders are more feminine than the average successful leader in the business world, and successful scientific leaders are more masculine than the average successful leader in the academic world. Generally speaking, masculine types suffer more discrimination in the technology world and feminine types suffer more discrimination in the scientific world.

  • Although unifiers play a more important role in science, we usually give the most prestigious awards to the diversifiers who deployed the new tools invented by the unifiers at tangible famous problems. Although diversifiers play a more important role in technology, we usually remember and acknowledge only the unifiers who crystallized the vast efforts of diversifiers into tangible popular formats.

  • Technological challenges lie in efficient specializations. Scientific challenges lie in efficient generalizations. You need to learn vertically and increase your depth to come up with better specializations. This involves learning-to-learn-new, meaning that what you will learn next will be built on what you learned before. You need to learn horizontally and increase your range to come up with better generalizations. This involves learning-to-relearn-old, meaning that what you learned before will be recast in the light of what you will learn next.

  • Technology and design are forms of service. Science and art are forms of understanding. That is why the intersection of technology and art, as well as the intersection of science and design, is full of short-lived garbage. While all our “external” problems can be traced back to a missing tool (technological artifact) or a wrong design, all our “internal” problems can be traced back to a missing truth (scientific fact) or wrong aesthetics (i.e. wrong ways of looking at the world).

  • Scientific progress contracts the creative space of religion by outright disproval of certain ideas and increases the expressive power of religion by supplying it with new vocabularies. (Note that the metaphysical part of religion can be conceived as “ontology design”.) Technological progress contracts the creative space of art by outright trivialization of certain formats and increases the expressive power of art by supplying it with new tools. (Think of the invention of photography rendering realistic painting meaningless and the invention of synthesizers leading to new types of music.) In other words, science and technology aid respectively religion and art to discover their inner cores by both limiting the domain of exploration and increasing the efficacy of exploration. (Notice that artists and theologians are on the same side of the equation. We often forget this, but as Joseph Campbell reminds us, contemporary art plays an important role in updating our mythologies, and keeping the mysteries alive.)

  • Scientific progress replaces mysteries with more profound mysteries. Technological progress replaces problems with more complex problems.

  • Both science and technology progress through hype cycles, science through how much phenomena the brand new idea can explain, technology through how many problems the brand new tool can solve.

  • Scientific progress slows down when money is thrown at ideas rather than people. Technological progress slows down when money is thrown at people rather than ideas.

  • Science progresses much faster during peacetime, technology progresses much faster during wartime. Scientific breakthroughs often precede new wars, technological breakthroughs often end ongoing wars.

appeal of the outrageous

We should perhaps also add to this list of criteria the response from the famous mathematician John Conway to the question of what makes a great conjecture: “It should be outrageous.” An appealing conjecture is also somewhat ridiculous or fantastic, with unforeseen range and consequences. Ideally it combines components from distant domains that haven’t met before in a single statement, like the surprising ingredients of a signature dish.

Robbert Dijkgraaf - The Subtle Art of the Mathematical Conjecture

We are used to click-bait news with outrageous titles that incite our curiosity. This may look like a one-off ugly phenomenon, but it is not. As consumers of information, we display the same behavior everywhere. This is forcing even scientists to produce counter-intuitive papers with outrageous titles so that they can attract the attention of the press. (No wonder most published research is false!)

Generally speaking, people do not immediately recognize the importance of an emerging matter. Even in mathematics, you need to induce a shock to spur activity and convince others to join you in the exploration of a new idea.

In 1872, Karl Weierstrass astounded the mathematical world by giving an example of a function that is continuous at every point but whose derivative does not exist anywhere. Such a function defied geometric intuition about curves and tangent lines, and consequently spurred much deeper investigations into the concepts of real analysis.

Robert G. Bartle &‎ Donald R. Sherbert - Introduction to Real Analysis (Page 163)

Similar to the above example, differential topology became a subject on its own and attracted a lot of attention only after John Milnor shocked the world by showing that the 7-dimensional sphere admits exactly 28 oriented diffeomorphism classes of differentiable structures. (Why 28, right? It actually marks the beginning of one of the most amazing number sequences in mathematics.)

reality and analytical inquiry

What is real and out there? This question is surprisingly hard to answer.

The only way we seem to be able to define ontology is as shared epistemology. (Every other definition suffers from an immediate deficiency.) In other words, what is real is what every possible point of view agrees upon, and vice versa. There is no such thing as your reality. (Note that this definition breaks the duality between ontology and epistemology. The moment you make inferences about the former, it gets subsumed by the latter. Is this surprising? Epistemology is all about making inferences. In other words, the scientific method itself is what is breaking the duality.)

Now we have a big problem: Ontological changes can not be communicated to all points of view in an instantaneous manner. This is outlawed by the finiteness of the speed of the fastest causation propagator, which is usually taken to be light. In fact, according to our current understanding of physics, there seems to be nothing invariant across all points of view. (e.g. Firewall paradox, twin paradox etc.) Whenever we get our hands onto some property, it slips away with the next advance in our theories.

This is a weird situation, an absolute mind-fuck to be honest. If we endorse all points of view, we can define ontology but then nothing seems to be real. If we endorse only our point of view, we can not define ontology at all and get trapped in a solipsistic world where every other point of view becomes unreal and other people turn into zombies.

Could all different points of view be part of a single - for lack of a better term “God” - point of view? In this case, our own individual point of view becomes unreal. This is a bad sacrifice indeed, but could it help us salvage reality? Nope… Can the universe observe itself? The question does not even make any sense!

It seems like physics can not start off without assuming a solipsistic worldview, adopting a single coordinate system which can not be sure about the reality of other coordinate systems.

In an older blog post, I had explained how dualities emerge as byproducts of analytical inquiry and thereby artificially split the unity of reality. Here we have a similar situation. The scientific method (i.e. analytical inquiry) is automatically giving rise to solipsism and thereby artificially splitting the unity of reality into considerations from different points of view.

In fact, the notions of duality and solipsism are very related. To see why, let us assume that we have a duality between P and not-P. Then

  • Within a single point of view, nothing can satisfy both P and not-P.

  • No property P stays invariant across all points of view.

Here, the first statement is a logical necessity and the second statement is enforced upon us by our own theories. We will take the second statement as the definition of solipsism.

Equivalently, we could have said

  • If property P holds from the point of view of A and not-P holds from the point of view of B, then A can not be equal to B.

  • For every property P, there exists at least one pair (A,B) such that A is not equal to B and P holds from the point of view of A while not-P holds from the point of view of B.

Now let X be the set of pairs (A,B) such that P holds from the point of view of A and not-P holds from the point of view of B. Also let △ stand for the diagonal set consisting of pairs (A,A). Then the above statements become

  • X can not hit △.

  • X can not miss the complement of △.

Using just mathematical notation we have

  • X ∩ △ = ∅

  • X ∩ △’ ≠ ∅

In other words, dualities and solipsism are defined using the same ingredients! Analytical inquiry gives rise to both at the same time. It supplies you with labels to attach to reality (via the above equality) but simultaneously takes reality away from you (via the above inequality). Good deal, right? After all, (only) nothing comes for free!
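To see the two conditions in action, here is a toy sketch with three invented points of view and one invented property P:

```python
# A toy model of the two conditions. The points of view and the property P
# ("sees the object as moving") are invented for illustration.
from itertools import product

views = ["A", "B", "C"]
P = {"A": True, "B": False, "C": True}             # P is not invariant across views

X = {(a, b) for a, b in product(views, views) if P[a] and not P[b]}
diagonal = {(v, v) for v in views}

assert X & diagonal == set()                       # X ∩ △ = ∅ : dualities
assert X - diagonal != set()                       # X ∩ △' ≠ ∅ : solipsism
print(sorted(X))                                   # [('A', 'B'), ('C', 'B')]
```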

Phenomena are the things which are empty of inherent existence, and inherent existence is that of which phenomena are empty.

Jeffrey Hopkins - Meditations on Emptiness (Page 9)


Recall that at the beginning of this post we defined ontology as shared epistemology. One can also go the other way around and define epistemology as shared ontology. What does this mean?

  • To say that some thing exists we need every mind to agree to it.

  • To say that some statement is true we need every body to obey it.

This is actually how truth is defined in model theory. A statement is deemed true if and only if it holds in every possible embodiment.
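In standard model-theoretic notation this is the definition of validity (a sketch, with $\mathcal{M}$ ranging over all structures of the language):

$$ \models \varphi \quad\Longleftrightarrow\quad \mathcal{M} \models \varphi \ \text{ for every structure } \mathcal{M}. $$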

In this sense, the epistemology-ontology duality mirrors the mind-body duality. (For a mind, reality consists of bodies and what is fundamental is existence. For a body, reality consists of minds and what is fundamental is truth.) For thousands of years, Western philosophy has been trying to break this duality, which has popped up in various forms. Today, for instance, physicists are still debating whether “it” arose from “bit” or “bit” arose from “it”.

Let us now do another exercise. What is the epistemological counterpart of the ontological statement that there are no invariances in physics?

  • Ontology. There is no single thing that every mind agrees to.

  • Epistemology. There is no single statement that every body obeys.

Sounds outrageous, right? How can there be no statement that is universally true? Truth is absolute in logic, but relative in physics. We are not allowed to make any universal statements in physics, no matter how trivial.

necessity of dualities

All truths lie between two opposite positions. All dramas unfold between two opposing forces. Dualities are both ubiquitous and fundamental. They shape both our mental and physical worlds.

Here are some examples:

Mental

objective | subjective
rational | emotional
conscious | unconscious
reductive | inductive
absolute | relative
positive | negative
good | evil
beautiful | ugly
masculine | feminine


Physical

deterministic | indeterministic
continuous | discrete
actual | potential
necessary | contingent
inside | outside
infinite | finite
global | local
stable | unstable
reversible | irreversible

Notice that even the above split between the two groups itself is an example of duality.

These dualities arise as an epistemological byproduct of the method of analytical inquiry. That is why they are so thoroughly infused into the languages we use to describe the world around us.

Each relatum constitutive of dipolar conceptual pairs is always contextualized by both the other relatum and the relation as a whole, such that neither the relata (the parts) nor the relation (the whole) can be adequately or meaningfully defined apart from their mutual reference. It is impossible, therefore, to conceptualize one principle in a dipolar pair in abstraction from its counterpart principle. Neither principle can be conceived as "more fundamental than," or "wholly derivative of" the other.

Mutually implicative fundamental principles always find their exemplification in both the conceptual and physical features of experience. One cannot, for example, define either positive or negative numbers apart from their mutual implication; nor can one characterize either pole of a magnet without necessary reference to both its counterpart and the two poles in relation - i.e. the magnet itself. Without this double reference, neither the definiendum nor the definiens relative to the definition of either pole can adequately signify its meaning; neither pole can be understood in complete abstraction from the other.

- Epperson & Zafiris - Foundations of Relational Realism (Page 4)


Various lines of Eastern religious and philosophical thinkers intuited how languages can hide underlying unity by artificially superimposing conceptual dualities (the primary of which is the almighty object-subject duality) and posited the nondual wholeness of nature several thousand years before the advent of quantum mechanics. (The analytical route to enlightenment is always longer than the intuitive route.)

Western philosophy on the other hand

  • ignored the mutually implicative nature of all dualities and denied the inaccessibility of the wholeness of nature to analytical inquiry.

  • got fooled by the precision of mathematics which is after all just another language invented by human beings.

  • confused partial control with understanding and engineering success with ontological precision. (Understanding is a binary parameter, meaning that either you understand something or you do not. Control on the other hand is a continuous parameter, meaning that you can have partial control over something.)

As a result, Western philosophers mistook representation for reality and tried to confine truth to one end of each dualism in order to create a unity of representation matching the unity of reality.

Side Note: Hegel was an exception. Like Buddha, he too saw dualities as artificial byproducts of analysis, but unlike him, he suggested that one should transcend them via synthesis. In other words, for Buddha unity resided below and for Hegel unity resided above. (Buddha wanted to peel away complexity to its simplest core, while Hegel wanted to embrace complexity in its entirety.) While Buddha stopped theorizing and started meditating instead, Hegel saw the salvation through higher levels of abstraction via alternating chains of analyses and syntheses. (Buddha wanted to turn off cognition altogether, while Hegel wanted to turn up cognition full-blast.) Perhaps at the end of the day they were both preaching the same thing. After all, at the highest level of abstraction, thinking probably halts and emptiness reigns.

It was first the social thinkers who woke up and revolted against the grand narratives built on such discriminative pursuits of unity. There was just way too much politically and ethically at stake for them. The result was an overreaction, replacing unity with multiplicity and considering all points of view as valid. In other words, the pendulum swung the other way and Western philosophy jumped from one state of deep confusion into another. In fact, this time around the situation was even worse, since there was an accompanying deep sense of insecurity as well.

The cacophony spread into hard sciences like physics too. Grand narrations got abandoned in favor of instrumental pragmatism. Generations of new physicists got raised as technicians who basically had no clue about the foundations of their disciplines. The most prominent of them could even publicly make an incredibly naive claim such as “something can spontaneously arise from nothing through a quantum fluctuation” and position it as a non-philosophical and non-religious alternative to existing creation myths.

Just to be clear, I am not trying to argue here in favor of Eastern holistic philosophies over Western analytic philosophies. I am just saying that the analytic approach necessitates us to embrace dualities as two-sided entities, including the duality between holistic and analytic approaches.


Politics experienced a similar swing from conservatism (which hailed unity) towards liberalism (which hailed multiplicity). During this transition, all dualities and boundaries got dissolved in the name of more inclusion and equality. The everlasting dynamism (and the subsequent wisdom) of dipolar conceptual pairs (think of magnetic poles) got killed off in favor of an unsustainable burst in the number of ontologies.

Ironically, liberalism resulted in more sameness in the long run. For instance, the traditional assignment of roles and division of tasks between father and mother got replaced by equal parenting principles applied by genderless parents. Of course, upon the dissolution of the gender dipolarity, the number of parents one can have became flexible as well. Having one parent became as natural as having two, three or four. In other words, parenting became a community affair in its truest sense.

 
 

The even greater irony was that liberalism itself forgot that it represented one extreme end of another duality. It was in a sense a self-defeating doctrine that aimed to destroy all discriminative pursuits of unity except for that of itself. (The only way to “resolve” this paradox is to introduce a conceptual hierarchy among dualities where the higher ones can be used to destroy the lower ones, in a fashion that is similar to how mathematicians deal with Russell’s paradox in set theory.)


Of course, at some point the pendulum will swing back to the pursuit of unity again. But while we swing back and forth between unity and multiplicity, we keep skipping the only sources of representational truths, namely the dualities themselves. For some reason we are extremely uncomfortable with the fact that the world can only be represented via mutually implicative principles. We find “one” and “infinity” tolerable but “two” arbitrary and therefore abhorrent. (The prevalence of “two” in mathematics and “three” in physics was mentioned in a previous blog post.)

I am personally obsessed with “two”. I look out for dualities everywhere and share the interesting finds here on my blog. In fact, I go even further and try to build my entire life on dualities whose two ends mutually enhance each other every time I visit them.

We should not collapse dualities into unities for the sake of satisfying our sense of belonging. We need to counteract this dangerous sociological tendency using our common sense at the individual level. Choosing one side and joining the groupthink is the easy way out. We should instead strive to carve out our identities by consciously sampling from both sides. In other words, when it comes to complex matters, we should embrace the dualities as a whole and not let them split us apart. (Remember, if something works very well, its dual should also work very well. However, if something is true, its dual has to be wrong. This is exactly what separates theory from reality.)

Of course, it is easy to talk about these matters, but who said that pursuit of truth would be easy?

Perhaps there is no pursuit to speak of unless one is pre-committed to choose a side, and swinging back and forth between the two ends of a dualism is the only way nature can maintain its neutrality without sacrificing its dynamicity? (After all, there is no current without a polarity in the first place.)

Perhaps we should just model our logic after reality (like Hegel wanted to) rather than expect reality to conform to our logic? (In this way we can have our cake and eat it too!)

states vs processes

We think of all dynamical situations as consisting of a space of states and a set of laws codifying how these states are weaved across time, and refer to the actual manifestation of these laws as processes.

Of course, one can argue whether it is sensible to split reality into states and processes, but so far it has been very fruitful to do so.


1. Interchangeability

1.1. Simplicity as Interchangeability of States and Processes

In mathematics, structures (i.e. persisting states) tend to be exactly whatever is preserved by transformations (i.e. processes). That is why Category Theory works, and why you can study processes in lieu of states without losing information. (Think of continuous maps vs topological spaces.) State and process centric perspectives each have their own practical benefits, but they are completely interchangeable in the sense that both Set Theory (state centric perspective) and Category Theory (process centric perspective) can be taken as the foundation of all of mathematics.

Physics is similar to mathematics. Studying laws is basically the same thing as studying properties. Properties are whatever is preserved by laws, and can also be seen as whatever gives rise to laws. (Think of electric charge vs electrodynamics.) This observation may sound deep, but (as with any deep observation) is actually tautologous, since we can study only what does not change through time, and only what does not change through time allows us to study time itself. (Study of time is equivalent to study of laws.)
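Noether’s theorem is the crispest instance of this law-property link. As a sketch, for a Lagrangian $L(q, \dot q)$ with no explicit time dependence (a time-translation symmetry of the law), the corresponding conserved property is the energy:

$$ \frac{\partial L}{\partial t} = 0 \quad\Longrightarrow\quad \frac{d}{dt}\left( \sum_i \dot q_i \frac{\partial L}{\partial \dot q_i} - L \right) = 0. $$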

Couple of side-notes:

  • There are no intrinsic (as opposed to extrinsic) properties in physics since physics is an experimental subject and all experiments involve an interaction. (Even mass is an extrinsic property, manifesting itself only dynamically.) Now here is the question that gets to the heart of the above discussion: If there exists only extrinsic properties and nothing else, then what holds these properties? Nothing! This is basically the essence of Radical Ontic Structural Realism and exactly why states and processes are interchangeable in physics. There is no scaffolding.

  • You probably heard about the vast efforts and resources being poured into the validation of certain conjectural particles. Gauge theory tells us that the search for new particles is basically the same thing as the search for new symmetries which are of course nothing but processes.

  • Choi–Jamiołkowski isomorphism helps us translate between quantum states and quantum processes.
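As a sketch of that correspondence: a process $\Phi$ (a quantum channel on a $d$-dimensional system) is packed into a state $C_\Phi$ by letting it act on half of a maximally entangled pair, and complete positivity of the process corresponds to positivity of the state:

$$ C_\Phi = (\mathrm{id} \otimes \Phi)\big(|\Omega\rangle\langle\Omega|\big), \qquad |\Omega\rangle = \tfrac{1}{\sqrt{d}} \sum_{i=1}^{d} |i\rangle \otimes |i\rangle. $$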

Long story short, at the foundational level, states and processes are two sides of the same coin.


1.2. Complexity as Non-Interchangeability of States and Processes

You understand that you are facing complexity exactly when you end up having to study the states themselves along with the processes. In other words, in complex subjects, the interchangeability of state and process centric perspectives stops making any practical sense. (That is why stating a problem in the right manner matters a lot in complex subjects. The right statement is half the solution.)

For instance, in biology, bioinformatics studies states and computational biology studies processes. (Beware that the nomenclature in the biology literature has not stabilized yet.) Similarly, in computer science, the study of databases (i.e. states) and the study of programs (i.e. processes) are completely different subjects. (You can view programs themselves as databases and study how to generate new programs out of programs. But then you are simply operating in one higher dimension, as sketched below. The philosophy does not change.)
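A minimal sketch of that higher dimension, with names invented for illustration:

```python
# A minimal sketch of operating one dimension higher: a program that takes a
# program and returns a new program. Names are invented for illustration.
def twice(program):
    """A process whose inputs and outputs are themselves processes."""
    return lambda x: program(program(x))

increment = lambda x: x + 1       # an ordinary program
add_two = twice(increment)        # a program generated out of a program
print(add_two(40))                # 42
```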

There is actually a deep relation between biology and computer science (similar to the one between physics and mathematics) which was discussed in an older blog post.


2. Persistence

The search for signs of persistence can be seen as the fundamental goal of science. There are two extreme views in metaphysics on this subject:

  • Heraclitus says that the only thing that persists is change. (i.e. Time is real, space is not.)

  • Parmenides says that change is illusionary and that there is just one absolute static unity. (i.e. Space is real, time is not.)

The duality of these points of view was most eloquently pointed out by the physicist John Wheeler, who said “Explain time? Not without explaining existence. Explain existence? Not without explaining time”.

Persistences are very important because they generate other persistences. In other words, they are the building blocks of our reality. For instance, states in biology are complex simply because biology strives to resist change by building persistence upon persistence.


2.1. Invariances as State-Persistences

From a state perspective, the basic building blocks are invariances, namely whatever does not change across processes.

Study of change involves an initial stage where we give names to substates. Then we observe how these substates change with respect to time. If a substate changes to the point where it no longer fits the definition of being A, we say that substate (i.e. object) A failed to survive. In this sense, study of survival is a subset of study of change. The only reason why they are not the same thing is because our definitions themselves are often imprecise. (From one moment to the next, we say that the river has survived although its constituents have changed etc.)

Of course, the ambiguity here is on purpose. Otherwise without any definiens, you do not have an academic field to speak of. In physics for instance, the definitions are extremely precise, and the study of survival and the study of change completely overlap. In a complex subject like biology, states are so rich that the definitions have to be ambiguous. (You can only simulate the biological states in a formal language, not state a particular biological state. Hence the reason why computer science is a better fit for biology than mathematics.)


2.2. Cycles as Process-Persistences

Processes become state-like when they enter into cyclic behavior. That is why recurrence is so prevalent in science, especially in biology.

As an anticipatory affair, biology prefers regularities and predictabilities. Cycles are very reliable in this sense: They can be built on top of each other, and harnessed to record information about the past and to carry information to the future. (Even behaviorally we exploit this fact: It is easier to construct new habits by attaching them to old habits.) Life, in its essence, is just a perpetuation of a network of interacting ecological and chemical cycles, all of which can be traced back to the grand astronomical cycles.

Prior studies have reported that 15% of expressed genes show a circadian expression pattern in association with a specific function. A series of experimental and computational studies of gene expression in various murine tissues has led us to a different conclusion. By applying a new analysis strategy and a number of alternative algorithms, we identify baseline oscillation in almost 100% of all genes. While the phase and amplitude of oscillation vary between different tissues, circadian oscillation remains a fundamental property of every gene. Reanalysis of previously published data also reveals a greater number of oscillating genes than was previously reported. This suggests that circadian oscillation is a universal property of all mammalian genes, although phase and amplitude of oscillation are tissue-specific and remain associated with a gene’s function. (Source)

A cyclic process traces out what is called an orbital, which is like an invariance smeared across time. An invariance is a substate preserved by a process, namely a portion of a state that is mapped identically to itself. An orbital too is mapped to itself by the cyclic process, but not identically so. (Each orbital point moves forward in time to another orbital point and eventually ends up at its initial position.) Hence orbitals and process-persistency can be viewed respectively as generalizations of invariances and state-persistency.
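A toy sketch with an invented four-state system makes the distinction concrete: under a repeating process, a fixed point is an invariance, while a longer cycle is an orbital.

```python
# A toy four-state system under a repeating process (a permutation).
# State 1 is an invariance; the set {2, 3, 4} is an orbital.
step = {1: 1, 2: 3, 3: 4, 4: 2}   # the process

assert step[1] == 1               # invariance: mapped identically to itself

orbit, x = [], 2                  # follow state 2 around its cycle
while x not in orbit:
    orbit.append(x)
    x = step[x]
print(orbit)                      # [2, 3, 4]: each point moves, yet the set persists
```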


3. Information

In practice, we do not have perfect knowledge of the states nor the processes. Since we can not move both feet at the same time, in our quest to understand nature, we assume that we have perfect knowledge of either the states or the processes.

  • Assumption: Perfect knowledge of all the actual processes but imperfect knowledge of the state
    Goal: Dissect the state into explainable and unexplainable parts
    Expectation: State is expected to be partially unexplainable due to experimental constraints on measuring states.

  • Assumption: Perfect knowledge of a state but no knowledge of the actual processes
    Goal: Find the actual (minimal) process that generated the state from the library of all possible processes.
    Expectation: State is expected to be completely explainable due to perfect knowledge about the state and the unbounded freedom in finding the generating process.

The reason why I highlighted the expectations here is that it is quite interesting how our psychological stance against the unexplainable (which is almost always - in our typical dismissive tone - referred to as noise) differs in each case.

  • In the presence of perfect knowledge about the processes, we interpret the noisy parts of states as absence of information.

  • In the absence of perfect knowledge about the processes, we interpret the noisy parts of states as presence of information.

The flip side of the above statements is that, in our quest to understand nature, we use the word information in two opposite senses.

  • Information is what is explainable.

  • Information is what is inexplainable.


3.1 Information as the Explainable

In this case, noise is the ideal left-over product after everything else is explained away, and is considered normal and expected. (We even gave the name “normal” to the most commonly encountered noise distribution.)

This point of view is statistical and is best exemplified by the field of statistical mechanics, where massive numbers of micro-degrees of freedom can be safely ignored due to their random nature and canned into highly regular noise distributions.
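A minimal sketch of the canning, with an invented uniform micro-law: summing many independent micro-contributions yields the familiar Gaussian regardless of the micro-details.

```python
# A minimal sketch: each macro-observation is a sum of many independent
# micro-degrees of freedom (here invented as Uniform(-1, 1) values), and the
# resulting noise is Gaussian-shaped regardless of the micro-details.
import numpy as np

rng = np.random.default_rng(3)
micro = rng.uniform(-1.0, 1.0, size=(10_000, 300))   # 300 micro-degrees per sample
macro = micro.sum(axis=1)                            # what we actually measure

print(macro.mean(), macro.var())                     # ~0 and ~300/3 = 100, as a Gaussian would have
```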


3.2. Information as the Inexplainable

In this case, noise is the only thing that can not be compressed further or explained away. It is surprising and unnerving. In computer speak, one would say “It is not a bug, it is a feature.”

This point of view is algorithmic and is best exemplified by the field of algorithmic complexity which looks at the notion of complexity from a process centric perspective.
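A minimal sketch, using a real compressor as a crude stand-in for algorithmic complexity: patterned data has a short generating process, while random data is (with overwhelming probability) its own shortest description.

```python
# A minimal sketch: compression length as a crude proxy for algorithmic complexity.
import os
import zlib

patterned = b"ab" * 5_000                  # generated by a tiny process
random_bytes = os.urandom(10_000)          # no process shorter than the data itself

print(len(zlib.compress(patterned)))       # tiny: highly explainable
print(len(zlib.compress(random_bytes)))    # ~10,000: incompressible "noise"
```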