states vs processes

We think of all dynamical situations as consisting of a space of states and a set of laws codifying how these states are woven across time, and we refer to the actual manifestation of these laws as processes.

Of course, one can argue whether it is sensible to split reality into states and processes, but so far it has been very fruitful to do so.


1. Interchangeability

1.1. Simplicity as Interchangeability of States and Processes

In mathematics, structures (i.e. persisting states) tend to be exactly what is preserved by transformations (i.e. processes). That is why Category Theory works, and why you can study processes in lieu of states without losing information. (Think of continuous maps vs topological spaces.) State and process centric perspectives each have their own practical benefits, but they are completely interchangeable in the sense that both Set Theory (the state centric perspective) and Category Theory (the process centric perspective) can be taken as the foundation of all of mathematics.

Physics is similar to mathematics. Studying laws is basically the same thing as studying properties. Properties are what is preserved by laws and can also be seen as what gives rise to laws. (Think of electric charge vs electrodynamics.) This observation may sound deep, but (as with any deep observation) it is actually tautologous, since we can study only what does not change through time, and only what does not change through time allows us to study time itself. (The study of time is equivalent to the study of laws.)

A couple of side notes:

  • There are no intrinsic (as opposed to extrinsic) properties in physics, since physics is an experimental subject and all experiments involve an interaction. (Even mass is an extrinsic property, manifesting itself only dynamically.) Now here is the question that gets to the heart of the above discussion: If there exist only extrinsic properties and nothing else, then what holds these properties? Nothing! This is basically the essence of Radical Ontic Structural Realism and exactly why states and processes are interchangeable in physics. There is no scaffolding.

  • You have probably heard about the vast efforts and resources being poured into the validation of certain conjectural particles. Gauge theory tells us that the search for new particles is basically the same thing as the search for new symmetries, which are of course nothing but processes.

  • The Choi–Jamiołkowski isomorphism lets us translate between quantum states and quantum processes.
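
To make the last point concrete, here is a minimal numpy sketch of the Choi–Jamiołkowski correspondence. The bit-flip channel is just an illustrative choice; the point is that the channel (a process) gets packed into a single matrix J (a state), from which the process can be fully recovered.

```python
import numpy as np

# The bit-flip channel with flip probability p -- an illustrative choice of process.
p = 0.3
X = np.array([[0, 1], [1, 0]], dtype=complex)

def channel(rho):
    """A quantum process acting on 2x2 density matrices."""
    return (1 - p) * rho + p * X @ rho @ X

# Choi matrix: a quantum *state* (up to normalization) that encodes the *process*.
d = 2
J = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        E = np.zeros((d, d), dtype=complex)
        E[i, j] = 1.0
        J += np.kron(E, channel(E))

def channel_from_choi(J, rho):
    """Recover the process from the state: Phi(rho) = Tr_1[(rho^T (x) I) J]."""
    d = rho.shape[0]
    M = (np.kron(rho.T, np.eye(d)) @ J).reshape(d, d, d, d)
    return np.einsum('ikil->kl', M)  # partial trace over the first subsystem

rho = np.array([[0.75, 0.2], [0.2, 0.25]], dtype=complex)
assert np.allclose(channel(rho), channel_from_choi(J, rho))
```

Nothing about the channel is lost in the translation: applying the original process and reconstructing it from its Choi state give identical results on any density matrix.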

Long story short, at the foundational level, states and processes are two sides of the same coin.


1.2. Complexity as Non-Interchangeability of States and Processes

You understand that you are facing complexity exactly when you end up having to study the states themselves along with the processes. In other words, in complex subjects, the interchangeability of state and process centric perspectives stops making practical sense. (That is why stating a problem in the right manner matters a lot in complex subjects. The right statement is half the solution.)

For instance, in biology, bioinformatics studies states and computational biology studies processes. (Beware that the nomenclature in the biology literature has not stabilized yet.) Similarly, in computer science, the study of databases (i.e. states) and the study of programs (i.e. processes) are completely different subjects. (You can view programs themselves as databases and study how to generate new programs out of programs. But then you are simply operating one dimension higher. The philosophy does not change.)

There is actually a deep relation between biology and computer science (similar to the one between physics and mathematics) which was discussed in an older blog post.


2. Persistence

The search for signs of persistence can be seen as the fundamental goal of science. There are two extreme views in metaphysics on this subject:

  • Heraclitus says that the only thing that persists is change. (i.e. Time is real, space is not.)

  • Parmenides says that change is illusory and that there is just one absolute static unity. (i.e. Space is real, time is not.)

The duality of these points of view was most eloquently captured by the physicist John Wheeler, who said, "Explain time? Not without explaining existence. Explain existence? Not without explaining time."

Persistences are very important because they generate other persistences. In other words, they are the building blocks of our reality. For instance, states in biology are complex simply because biology strives to resist change by building persistence upon persistence.


2.1. Invariances as State-Persistences

From a state perspective, the basic building blocks are invariances, namely whatever does not change across processes.

The study of change involves an initial stage where we give names to substates. Then we observe how these substates change with respect to time. If a substate changes to the point where it no longer fits the definition of being A, we say that the substate (i.e. object) A failed to survive. In this sense, the study of survival is a subset of the study of change. The only reason they are not the same thing is that our definitions themselves are often imprecise. (From one moment to the next, we say that the river has survived although its constituents have changed, etc.)

Of course, the ambiguity here is on purpose. Otherwise, without any definiens, you do not have an academic field to speak of. In physics, for instance, the definitions are extremely precise, and the study of survival and the study of change completely overlap. In a complex subject like biology, states are so rich that the definitions have to be ambiguous. (You can only simulate biological states in a formal language, not state a particular biological state. Hence computer science is a better fit for biology than mathematics.)


2.2. Cycles as Process-Persistences

Processes become state-like when they enter into cyclic behavior. That is why recurrence is so prevalent in science, especially in biology.

As an anticipatory affair, biology prefers regularities and predictabilities. Cycles are very reliable in this sense: They can be built on top of each other, and harnessed to record information about the past and to carry information to the future. (Even behaviorally we exploit this fact: It is easier to construct new habits by attaching them to old habits.) Life, in its essence, is just a perpetuation of a network of interacting ecological and chemical cycles, all of which can be traced back to the grand astronomical cycles.

Prior studies have reported that 15% of expressed genes show a circadian expression pattern in association with a specific function. A series of experimental and computational studies of gene expression in various murine tissues has led us to a different conclusion. By applying a new analysis strategy and a number of alternative algorithms, we identify baseline oscillation in almost 100% of all genes. While the phase and amplitude of oscillation vary between different tissues, circadian oscillation remains a fundamental property of every gene. Reanalysis of previously published data also reveals a greater number of oscillating genes than was previously reported. This suggests that circadian oscillation is a universal property of all mammalian genes, although phase and amplitude of oscillation are tissue-specific and remain associated with a gene’s function. (Source)

A cyclic process traces out what is called an orbital, which is like an invariance smeared across time. An invariance is a substate preserved by a process, namely a portion of a state that is mapped identically to itself. An orbital too is mapped to itself by the cyclic process, but not identically so. (Each orbital point moves forward in time to another orbital point and eventually ends up at its initial position.) Hence orbitals and process-persistency can be viewed respectively as generalizations of invariances and state-persistency.
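
The distinction can be seen in a toy dynamical system: a permutation of a finite state space (the particular permutation below is an arbitrary illustrative choice). Fixed points are invariances; longer cycles are orbitals, mapped to themselves as sets but not pointwise.

```python
# A discrete "law": a permutation of the finite state space {0,...,5}.
step = {0: 0, 1: 2, 2: 3, 3: 1, 4: 5, 5: 4}

# Invariances: substates mapped identically to themselves (fixed points).
invariances = [s for s in step if step[s] == s]

# Orbitals: sets mapped to themselves by the process, though not pointwise.
def orbital(s):
    orbit, current = [s], step[s]
    while current != s:
        orbit.append(current)
        current = step[current]
    return orbit

print(invariances)  # [0]         -- state-persistence
print(orbital(1))   # [1, 2, 3]   -- process-persistence: the set survives, the points move
print(orbital(4))   # [4, 5]
```

Note that an invariance is just an orbital of length one, which is the precise sense in which process-persistency generalizes state-persistency.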


3. Information

In practice, we have perfect knowledge of neither the states nor the processes. Since we can not move both feet at the same time, in our quest to understand nature, we assume that we have perfect knowledge of either the states or the processes.

  • Assumption: Perfect knowledge of all the actual processes but imperfect knowledge of the state
    Goal: Dissect the state into explainable and unexplainable parts
    Expectation: State is expected to be partially unexplainable due to experimental constraints on measuring states.

  • Assumption: Perfect knowledge of a state but no knowledge of the actual processes
    Goal: Find the actual (minimal) process that generated the state from the library of all possible processes.
    Expectation: State is expected to be completely explainable due to perfect knowledge about the state and the unbounded freedom in finding the generating process.

The reason I highlighted the expectations here is that it is quite interesting how our psychological stance toward the unexplainable (which is almost always - in our typical dismissive tone - referred to as noise) differs in each case.

  • In the presence of perfect knowledge about the processes, we interpret the noisy parts of states as absence of information.

  • In the absence of perfect knowledge about the processes, we interpret the noisy parts of states as presence of information.

The flip side of the above statements is that, in our quest to understand nature, we use the word information in two opposite senses.

  • Information is what is explainable.

  • Information is what is inexplainable.


3.1. Information as the Explainable

In this case, noise is the ideal left-over product after everything else is explained away, and is considered normal and expected. (We even gave the name “normal” to the most commonly encountered noise distribution.)

This point of view is statistical and is best exemplified by the field of statistical mechanics, where massive numbers of micro degrees of freedom can be safely ignored due to their random nature and absorbed into highly regular noise distributions.


3.2. Information as the Inexplainable

In this case, noise is the only thing that can not be compressed further or explained away. It is surprising and unnerving. In computer speak, one would say “It is not a bug, it is a feature.”

This point of view is algorithmic and is best exemplified by the field of algorithmic complexity which looks at the notion of complexity from a process centric perspective.
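
The contrast between the two senses of noise can be demonstrated with an off-the-shelf compressor standing in for "explanation" (zlib here is merely a convenient proxy for the ideal compressor of algorithmic information theory): a highly regular sequence compresses almost entirely away, while random bytes barely compress at all.

```python
import os
import zlib

structured = b"ABCD" * 2500       # 10,000 bytes of pure pattern
random_like = os.urandom(10_000)  # 10,000 bytes of noise

# Compressed size is a crude stand-in for "length of the best theory".
print(len(zlib.compress(structured, 9)))   # tiny: the pattern is explainable
print(len(zlib.compress(random_like, 9)))  # ~10,000: the noise is incompressible
```

From the statistical perspective the second number is an absence of information (nothing left to model); from the algorithmic perspective it is pure presence of information (nothing could be thrown away).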

thoughts on abstraction

Why is it always the case that the formulation of deeper physics requires more abstract mathematics? Why does understanding get better as it zooms out?

Side Note: Notice that there are two ways of zooming out. First, you can abstract by ignoring details. This is actually great for applications, but not good for understanding. It operates more like chunking, coarse-graining, forming equivalence classes etc. You end up sacrificing accuracy for the sake of practicality. Second, you can abstract in the sense of finding an underlying structure that allows you to see two phenomena as different manifestations of the same phenomenon. This is the meaning that we will be using throughout the blog post. While coarse-graining is easy, discovering an underlying structure is hard. You need to understand the specificity of a phenomenon which you normally consider to be general.

For instance, a lot of people are unsatisfied with the current formulation of quantum physics, blaming it for being too instrumental. Yes, the math is powerful. Yes, the predictions turn out to be correct. But the mathematical machinery (function spaces etc.) feels alien, even after one gets used to it over time. Or compare the down-to-earth Feynman diagrams with the amplituhedron theory... Again, you have a case where a stronger and more abstract beast is posited to dethrone a multitude of earthlings.

Is the alienness a price we have to pay for digging deeper? The answer is unfortunately yes. But this should not be surprising at all:

  • We should not expect to be able to explain deeper physics (which is so removed from our daily lives) using basic mathematics inspired by mundane physical phenomena. Abstraction gives us the necessary elbow room to explore realities that are far removed from our daily lives.

  • You can use the abstract to explain the specific, but you can not proceed the other way around. Hence, as you understand more, you inevitably need to go higher up in abstraction. For instance, you may hope that a concept as simple as the notion of a division algebra will be powerful enough to explain all of physics, but you will sooner or later be gravely disappointed. There is probably a deeper truth lurking behind such a concrete pattern.



Abstraction as Compression

The simplicities of natural laws arise through the complexities of the languages we use for their expression.

- Eugene Wigner

That the simplest theory is best, means that we should pick the smallest program that explains a given set of data. Furthermore, if the theory is the same size as the data, then it is useless, because there is always a theory that is the same size as the data that it explains. In other words, a theory must be a compression of the data, and the greater the compression, the better the theory. Explanations are compressions, comprehension is compression!

Chaitin - Metaphysics, Metamathematics and Metabiology

We can not encode more without going more abstract. This is a fundamental feature of the human brain. Either you have complex patterns based on basic math or you have simple patterns based on abstract math. In other words, complexity is either apparent or hidden, never gotten rid of. (i.e. There is no loss of information.) By replacing one source of cognitive strain (complexity) with another source of cognitive strain (abstraction), we can lift our analysis to higher-level complexities.
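
Chaitin's point can be made literal in a few lines: treat a short program that regenerates the data as a "theory", and note that a theory the same size as the data always exists and explains nothing. A toy Python sketch (the repeating string is an arbitrary illustrative dataset):

```python
data = "01" * 5000  # 10,000 characters of "observations"

# A real theory: a tiny program that regenerates the data exactly.
theory = '"01" * 5000'
assert eval(theory) == data

# The trivial theory: the data quoted back at us. It always exists.
trivial_theory = repr(data)
assert eval(trivial_theory) == data

print(len(theory))          # 11     -- a genuine compression, hence a genuine explanation
print(len(trivial_theory))  # 10002  -- the size of the data itself, hence useless
```

The compression is what does the explanatory work; the greater the gap between the two lengths, the better the theory.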

In this sense, progress in physics is destined to be of an unsatisfactory nature. Our theories will keep getting more abstract (and difficult) at each successive information compression. 

Don't think of this as a human tragedy though! Even machines will need abstract mathematics to understand deeper physics, because they too will be working under resource constraints. No matter how much more energy and resources you summon, the task of simulating a faithful copy of the universe will always require more.

As Bransford points out, people rarely remember written or spoken material word for word. When asked to reproduce it, they resort to paraphrase, which suggests that they were able to store the meaning of the material rather than making a verbatim copy of each sentence in the mind. We forget the surface structure, but retain the abstract relationships contained in the deep structure.

Jeremy Campbell - Grammatical Man (Page 219)

Depending on context, category theoretical techniques can yield proofs shorter than set theoretical techniques can, and vice versa. Hence, a machine that can sense when to switch between these two languages can probe the vast space of all true theories faster. Of course, you will need human aid (enhanced with machine learning algorithms) to discern which theories are interesting and which are not.

Abstraction is probably used by our minds as well, allowing them to decrease the number of neurons used without sacrificing explanatory power.

David Rolnick and Max Tegmark of the Massachusetts Institute of Technology proved that by increasing depth and decreasing width, you can perform the same functions with exponentially fewer neurons. They showed that if the situation you're modeling has 100 input variables, you can get the same reliability using either 2^100 neurons in one layer or just 2^10 neurons spread over two layers. They found that there is power in taking small pieces and combining them at greater levels of abstraction instead of attempting to capture all levels of abstraction at once.

“The notion of depth in a neural network is linked to the idea that you can express something complicated by doing many simple things in sequence,” Rolnick said. “It’s like an assembly line.”

- Foundations Built for a General Theory of Neural Networks (Kevin Hartnett)

In a way, the success of neural network models with increased depth reflects the hierarchical aspects of the phenomena themselves. We end up mirroring nature more closely as we try to economize our models.
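
The exponential gap in the quoted result is worth computing out. The exact bookkeeping belongs to the theorem itself; this is just the arithmetic for the n = 100 case:

```python
# Neuron budgets for the same function family, shallow vs. deep,
# for n = 100 input variables (the figures quoted above).
n = 100
shallow = 2 ** n   # one layer
deep = 2 ** 10     # the same reliability, spread over two layers

print(shallow)          # about 1.27e30 neurons
print(deep)             # 1024 neurons
print(shallow // deep)  # the exponential saving, a factor of 2^90
```

A factor of 2^90 is far beyond anything buildable, which is why depth (i.e. hierarchy, i.e. abstraction) is not a luxury but a necessity.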


Abstraction as Unlearning

Abstraction is not hard for technical reasons. (On the contrary, abstract things are easier to manipulate due to their greater simplicity.) It is hard because it involves unlearning. (That is why people who are better at forgetting are also better at abstracting.)

Side Note: The originality of the generalist is artistic in nature and lies in the intuition of the right definitions. The originality of the specialist is technical in nature and lies in the invention of the right proof techniques.

Globally, unlearning can be viewed as the Herculean struggle to go back to the tabula rasa state of a beginner's mind. (In some sense, what takes a baby a few months to learn takes humanity hundreds of years to unlearn.) We discard one by one what has been useful in manipulating the world in favor of getting closer to the truth.

Here are some beautiful observations of a physicist about the cognitive development of his own child:

My 2-year old’s insight into quantum gravity. If relative realism is right then ‘physical reality’ is what we experience as a consequence of looking at the world in a certain way, probing deeper and deeper into more and more general theories of physics as we have done historically (arriving by now at two great theories, quantum and gravity) should be a matter of letting go of more and more assumptions about the physical world until we arrive at the most general theory possible. If so then we should also be able to study a single baby, born surely with very little by way of assumptions about physics, and see where and why each assumption is taken on. Although Piaget has itemized many key steps in child development, his analysis is surely not about the fundamental steps at the foundation of theoretical physics. Instead, I can only offer my own anecdotal observations.

Age 11 months: loves to empty a container, as soon as empty fills it, as soon as full empties it. This is the basic mechanism of waves (two competing urges out of phase leading to oscillation).

Age 12-17 months: puts something in drawer, closes it, opens it to see if it is still there. Does not assume it would still be there. This is a quantum way of thinking. It’s only after repeatedly finding it there that she eventually grows to accept classical logic as a useful shortcut (as it is in this situation).

Age 19 months: comes home every day with mother, waves up to dad cooking in the kitchen from the yard. One day dad is carrying her. Still points up to kitchen saying ‘daddy up there in the kitchen’. Dad says no, daddy is here. She says ‘another daddy’ and is quite content with that. Another occasion, her aunt Sarah sits in front of her and talks to her on my mobile. When asked, Juliette declares the person speaking to her ‘another auntie Sarah’. This means that at this age Juliette’s logic is still quantum logic in which someone can happily be in two places at the same time.

Age 15 months (until the present): completely unwilling to shortcut a lego construction by reusing a group of blocks, insists on taking the bits fully apart and then building from scratch. Likewise always insists to read a book from its very first page (including all the front matter). I see this as part of her taking a creative control over her world.

Age 20-22 months: very able to express herself in the third person ‘Juliette is holding a spoon’ but finds it very hard to learn about pronouns especially ‘I’. Masters ‘my’ first and but overuses it ‘my do it’. Takes a long time to master ‘I’ and ‘you’ correctly. This shows that an absolute coordinate-invariant world view is much more natural than a relative one based on coordinate system in which ‘I’ and ‘you’ change meaning depending on who is speaking. This is the key insight of General Relativity that coordinates depend on a coordinate system and carry no meaning of themselves, but they nevertheless refer to an absolute geometry independent of the coordinate system. Actually, once you get used to the absolute reference ‘Juliette is doing this, dad wants to do that etc’ it’s actually much more natural than the confusing ‘I’ and ‘you’ and as a parent I carried on using it far past the time that I needed to. In the same way it’s actually much easier to do and teach differential geometry in absolute coordinate-free terms than the way taught in most physics books.

Age 24 months: until this age she did not understand the concept of time. At least it was impossible to do a bargain with her like ‘if you do this now, we will go to the playground tomorrow’ (but you could bargain with something immediate). She understood ‘later’ as ‘now’.

Age 29 months: quite able to draw a minor squiggle on a bit of paper and say ‘look a face’ and then run with that in her game-play. In other words, very capable of abstract substitutions and accepting definitions as per pure mathematics. At the same time pedantic, does not accept metaphor (‘you are a lion’ elicits ‘no, I’m me’) but is fine with simile, ‘is like’, ‘is pretending to be’.

Age 31 months: understands letters and the concept of a word as a line of letters but sometimes refuses to read them from left to right, insisting on the other way. Also, for a time after one such occasion insisted on having her books read from last page back, turning back as the ‘next page’. I interpret this as her natural awareness of parity and her right to demand to do it her own way.

Age 33 months (current): Still totally blank on ‘why’ questions, does not understand this concept. ‘How’ and ‘what’ are no problem. Presumably this is because in childhood the focus is on building up a strong perception of reality, taking on assumptions without question and as quickly as possible, as it were drinking in the world.

... and just in the last few days: remarked ‘oh, going up’ for the deceleration at the end of going down in an elevator, ‘down and a little bit up’ as she explained. And pulling out of my parking spot insisted that ‘the other cars are going away’. Neither observation was prompted in any way. This tells me that relativity can be taught at preschool.

- Algebraic Approach to Quantum Gravity I: Relative Realism (S. Majid)


Abstraction for Survival

The idea, according to research in Psychology of Aesthetics, Creativity, and the Arts, is that thinking about the future encourages people to think more abstractly—presumably becoming more receptive to non-representational art.

- How to Choose Wisely (Tom Vanderbilt)

Why do some people (like me) get deeply attracted to abstract subjects (like Category Theory)?

One of the reasons could be related to the point made above. Abstract things have higher chances of survival and staying relevant because they are less likely to be affected by the changes unfolding through time. (Similarly, in the words of Morgan Housel, "the further back in history you look, the more general your takeaways should be.") Hence, if you have a hunger for timelessness or a worry about being outdated, then you will be naturally inclined to move up the abstraction chain. (No wonder I am also obsessed with the notion of time.)

Side Note: The more abstract the subject, the less willing the community around it is to let you attach your name to your new discoveries. Why? Because the half-life of discoveries at higher levels of abstraction is much longer, and therefore your name will live on for a much longer period of time. (i.e. It makes sense to be prudent.) After being trained in mathematics for so many years, I was shocked to see how easily researchers in other fields could “arrogantly” attach their names to basic findings. Later I realized that this behavior was not out of arrogance. These fields were so far away from truth (i.e. operating at very low levels of abstraction) that the half-life of discoveries was very short. If you wanted to attach your name to a discovery, mathematics had a high-risk-high-return pay-off structure while these other fields had a low-risk-low-return structure.

But the higher you move up in the abstraction chain, the harder it becomes for you to innovate usefully. There is less room to play around since the objects of study have much fewer properties. Most of the meaningful ideas have already been fleshed out by others who came before you.

In other words, in the realm of ideas, abstraction acts as a lever between probability of longevity and probability of success. If you aim for a higher probability of longevity, then you need to accept the lower probability of success.

That is why abstract subjects are unsuitable for university environments. The pressure of the "publish or perish" mentality pushes PhD students towards quick and riskless incremental research. Abstract subjects, on the other hand, require risky innovative research which may take a long time to unfold and result in nothing publishable.

Now you may be wondering whether the discussion in the previous section is in conflict with the discussion here. How can abstraction be both a process of unlearning and a means for survival? Is not the evolutionary purpose of learning to increase the probability of survival? I would say that it all depends on your time horizon. To survive the immediate future, you need to learn how your local environment operates and truth is not your primary concern. But as your time horizon expands into infinity, what is useful and what is true become indistinguishable, as your environment shuffles through all allowed possibilities.

order out of noise

Firehose of falsehoods and diverse contradicting statements distributed through many different channels create completely noise environment. Through this noise everyone weaves their own story. That of course happens to be the one that they want to believe in the most. Since most of these facts are pro government. The end result is a diverse array of personalised strongly held pro government stories!

- Networked Propaganda and Counter Propaganda (Jonathan Stray)

Noise contains all possible structures as subsets of itself. Since we are naturally predisposed to recognizing patterns (i.e. compressing information), it can act as a "mirror" reflecting our complex web of pre-existing cognitive short-cuts and biases back to ourselves.

For instance, we can not help but hallucinate ordered structures out of visual and auditory noise. (e.g. pareidolia) I myself have experienced a few such bizarre moments, involuntarily sculpting complex musical pieces out of random environmental noise. These episodes were inspiring but also quite intimidating. (I felt as if my unconscious had accidentally leaked into my consciousness. Is this why some people enjoy listening to noise music?)

If you are willing to experiment with drugs like LSD, you can simulate such experiences using the brain’s own background noise:

Sometimes patterns can arise spontaneously from the random firing of neurons in the cortex — internal background noise, as opposed to external stimuli — or when a psychoactive drug or other influencing factor disrupts normal brain function and boosts the random firing of neurons. This is believed to be what happens when we hallucinate.

- A Math Theory for Why People Hallucinate (Jennifer Ouellette)


Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture.

- Amusing Ourselves to Death (Neil Postman)

Jonathan's observation above is enlightening in the sense that it situates Orwell and Huxley as two ends of a single spectrum for controlling public opinion.

  • Orwell: Dictate to people by spoon-feeding them your point of view

  • Huxley: Incapacitate people by drowning them in complete noise

Both strategies backfire in the long run:

  • Orwellian regimes can quickly unravel when information from the outside world starts leaking in.

  • Huxleyan regimes shut people completely away from serious matters and this can set in motion a cultural backlash.

The best strategy is a mixed one: Drown people in variations of your point of view so that

  • the information environment stays rich enough to make people feel as if there is an open public discourse

  • the proportion of trivia among the narrations in circulation stays below the critical threshold that can set in motion a cultural backlash.

This way, all the stories people weave out of the pseudo random environment you created will stay close to your point of view.

Note that, as technology progresses and information flows at greater speeds, it becomes harder and harder to maintain an Orwellian regime as opposed to a Huxleyan one.

data as mass

The strange thing about data is that they are an inexhaustible resource: the more you have, the more you get. More information lets firms develop better services, which attracts more users, which in turn generate more data. Having a lot of data helps those firms expand into new areas, as Facebook is now trying to do with online dating. Online platforms can use their wealth of data to spot potential rivals early and take pre-emptive action or buy them up. So big piles of data can become a barrier to competitors entering the market, says Maurice Stucke of the University of Tennessee.

The Economist - A New School in Chicago

The greater centralization of the internet and the growing importance of data are basically two sides of the same coin. Data is like mass and is therefore subject to gravitation-like dynamics. Huge data-driven companies like Google and Facebook can be thought of as mature galaxy formations.

Light rays travel through fiber optic cables to transfer data, just as they cruise across vast intergalactic voids to stitch together a causally integrated universe.

suboptimality of monogamy

Biological systems are replete with barbell strategies. Take the following mating approach, which we call the 90 percent accountant, 10 percent rock star. Females in the animal kingdom, in some monogamous species (which include humans), tend to marry the equivalent of the accountant, or, even more colorless, the economist, someone stable who can provide, and once in a while they cheat with the aggressive alpha, the rock star, as part of a dual strategy. They limit their downside while using extrapair copulation to get the genetic upside, or some great fun, or both. Even the timing of the cheating seems nonrandom, as it corresponds to periods of high likelihood of pregnancy. We see evidence of such a strategy with the so-called monogamous birds: they enjoy cheating, with more than a tenth of the broods coming from males other than the putative father. The phenomenon is real, but the theories around it vary. Evolutionary theorists claim that females want both economic-social stability and good genes for their children. Both cannot be always obtained from someone in the middle with all these virtues (though good gene providers, those alpha males aren't likely to be stable, and vice versa). Why not have the pie and eat it too? Stable life and good genes.
Antifragile - Nassim Nicholas Taleb (Pages 162-163)

True monogamy is evolutionarily suboptimal. This will become painfully apparent in the near future, when everyone's genome is sequenced at birth.

Note that the strategy highlighted by Taleb is available only in an incomplete-information environment. If everyone could see the genetic landscape, the strategy would fail because

  • alpha fathers would not be open to the idea of their children being reared by others and
  • non-alpha fathers would not want to spend their resources to rear others' children.

an information consumption guide

The amount of information created daily is huge. You need to set up proper filters to preserve your sanity. Here is my personal advice:

- For your deep reading, follow a professional human curator who shares your interests. (You need someone whose actual job is curation. Pay if necessary.) Algorithmic machine curators are horrible at spotting high-quality articles, because they use inputs generated by the general public, and the general public has absolutely no taste or depth.

- For your news reading, follow an algorithmic curator. Machines are much better at quickly scanning large databases and assembling a comprehensive, timely feed. You will not experience much downside either, since the importance of a piece of news is highly correlated with its popularity.

Note that, in both cases, other human readers are utilised to do the most mechanical part of the job for you. This means that if this advice is followed by everyone, it will no longer work: machine curators will have no signals to work with, and human curators will wait for others to curate.

deliberate vagueness

Deliberate vagueness can have three major strategic payoffs:

Greater Longevity: If you stay abstract and employ lots of symbolism, your text has a higher chance of surviving many generations. Religious leaders do this all the time.

Wider Appeal: If you use proxies rather than saying exactly what you want to say, your text will admit a wider set of interpretations and thereby enjoy a greater likelihood of resonating with readers. Poets do this all the time.

Less Accountability: By saying lots of things while actually saying nothing, you can evade accountability altogether. Politicians do this all the time.

speed of information and depth of ignorance

I like going to cafes by myself, just to have some tea and let my mind wander. Often I end up eavesdropping on the nearby conversations. Sometimes the level of ignorance I witness is stupefying.

The fact that these conversations are sprinkled with Facebook videos shown across the tables suggests to me that the increased speed of information is responsible.

Our minds are quite defenceless against junk. In fact, when not armed with good common sense and a good education, our minds attract junk like a giant black hole. The greater the amount of junk circulating around us, the greater our junk density becomes.

In other words, the increased speed of good information does not have a cancelling effect on the increased speed of junk, because we attract junk much faster.

information is politics

In politics you can only see the tip of the iceberg. The rest you complete with your intuition and biases.

This is true even if you are a member of a political circle. There is always another circle inside yours or flying right above your head.

The problem gets worse as society becomes more wired and therefore more susceptible to provocation. As information travels faster, so does misinformation.

Next time you are sitting on your ass sharing some piece of news, or marching on the streets protesting it, bear in mind that you may be executing someone else's plan.

ignorance and bias

Total ignorance is often represented as a uniform probability distribution over all possible states of nature. (Strictly speaking, even this is not total ignorance since you at least know what all the possible states are.)

Can ignorance have a non-uniform form?

There can be two reasons why I might believe that one event is more likely to happen than another:

(i) I am drawing on some previous knowledge, meaning that I am biased. In other words, I am not in a state of total ignorance.

(ii) There are some physical constraints that shape the context of the event. For example, the final position of a cannonball is a function of the direction of the cannon and the shape of the landscape. Even if the cannon is uniformly likely to shoot in any direction, the final position is not uniformly distributed across the landscape. In this case, the deviation from the uniform probability distribution has an outside (physical) origin.
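The cannon example can be illustrated with a short simulation. This is a minimal sketch, assuming flat ground, the textbook projectile-range formula R = v² sin(2θ) / g, and illustrative values for muzzle speed and gravity; the point is only that a uniform distribution over angles is pushed forward into a non-uniform distribution over landing positions.

```python
import math
import random

random.seed(0)
v, g = 20.0, 9.81  # illustrative muzzle speed (m/s) and gravity (m/s^2)

# Sample launch angles uniformly between 0 and 90 degrees.
angles = [random.uniform(0.0, math.pi / 2) for _ in range(100_000)]

# Range of a projectile on flat ground: R = v^2 * sin(2*theta) / g.
ranges = [v**2 * math.sin(2 * a) / g for a in angles]

max_range = v**2 / g  # attained at a 45-degree launch angle

# Split the landscape into two equal halves. Shots pile up in the far
# half even though the launch angles were sampled uniformly.
near = sum(1 for r in ranges if r < max_range / 2)
far = len(ranges) - near
print(near, far)
```

Since sin(2θ) < 1/2 only for θ below 15° or above 75°, just a third of the uniformly drawn angles land in the near half: the far half receives roughly twice as many shots, a non-uniform outcome produced entirely by the physical constraint.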

So ignorance is always uniform from within, but its manifestation can be non-uniform due to external physical factors. (Hence the Bayesian interpretation of quantum physics is wrong.)