structured randomness

People tend to think that the evolution of a system is either deterministic or random. This dichotomy is mistaken. There is a rich spectrum of possible dynamics in between.


EXAMPLES

Physics: In Quantum Mechanics, the wave function evolves in time in a deterministic fashion, as dictated by the Schrödinger equation. In other words, the uncertainty is deterministically constrained.
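
For concreteness, here is the equation in its standard non-relativistic form, with Ĥ denoting the Hamiltonian operator and ψ the wave function:

\[ i\hbar \, \frac{\partial}{\partial t}\, \psi(x,t) = \hat{H}\, \psi(x,t) \]

Given an initial wave function, this fixes the wave function at all later times. What evolves deterministically is a probability amplitude, not an outcome.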

Finance: In the Black-Scholes model, the volatility and drift of the underlying process are deterministic functions of time. In other words, the time evolution of the random process has a deterministic structure.
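
In its standard form, the model assumes that the underlying price \(S_t\) follows a geometric Brownian motion,

\[ dS_t = \mu(t)\, S_t\, dt + \sigma(t)\, S_t\, dW_t , \]

where \(W_t\) is a Wiener process. The randomness enters only through \(dW_t\); the drift \(\mu(t)\) and volatility \(\sigma(t)\) impose a deterministic structure on how that randomness unfolds over time.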

Biology 1: Invasive bacteria employ controlled genetic mutation to evade the immune systems of host bodies:

Haemophilus influenzae frequently manages to evade its host's ever-changing defenses, and also copes with the varied environments it encounters in different parts of the host's body. It does so because it possesses what Richard Moxon has called "contingency genes." These are highly mutable genes that code for products that determine the surface structures of the bacteria. Because they are so mutable, subpopulations of bacteria can survive in the different microhabitats within their host by changing their surface structures. Moreover, by constantly presenting the host's immune system with new surface molecules that it has not encountered before and does not recognize, the bacteria may evade the host's defenses... What, then, is the basis for the enormous mutation rate in these contingency genes? Characteristically, the DNA of these genes contains short nucleotide sequences that are repeated again and again, one after the other. This leads to a lot of mistakes being made as the DNA is maintained and copied... It is difficult to find an appropriate term for the type of mutational process that occurs in contingency genes. Moxon refers to it as "discriminate mutation", and the term "targeted" mutation may also be appropriate. Whatever we call it, there is little doubt that it is a product of natural selection: lineages with DNA sequences that lead to a high mutation rate in the relevant genes survive better than those with less changeable sequences. Although the changes that occur in the DNA of the targeted region are random, there is adaptive specificity in targeting the mutations in the first place.

Eva Jablonka - Evolution in Four Dimensions (Pages 95-96)

Of course, immune systems also work in a similar fashion to counter these attacks, resulting in a never-ending cat-and-mouse game. They cannot store complete information about every single attack experienced in the past. Instead, they store the essential motifs and fill in the remaining missing pieces via statistical honing-in processes to achieve a perfect fit. This approach, by the way, results in faster pattern recognition at the cost of some additional false positives along the way.

Biology 2: “When sharks and other ocean predators can’t find food, they abandon Brownian motion, the random motion seen in swirling gas molecules, for Lévy flight — a mix of long trajectories and short, random movements found in turbulent fluids.” (Source)

(Figure: Brownian motion)

(Figure: Lévy flight)

Perhaps you should plan your career through a Lévy flight as well. Otherwise, finding your passion within the abstract space of all possible endeavors will take a very long time! (Notice that the randomization periods during a Lévy flight have a natural association with mental stress, which is actually what leads to behavioral randomness in the first place.)
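
Here is a minimal sketch of the difference, assuming isotropic motion in the plane: both walks choose directions uniformly at random, but the Brownian-like walk draws its step lengths from a light-tailed distribution, while the Lévy flight draws them from a heavy-tailed power law and therefore occasionally makes very long jumps. (The distributions and the tail exponent are arbitrary choices for illustration.)

```python
import numpy as np

def random_walk_2d(step_lengths, rng):
    """Build a 2D walk from given step lengths and uniformly random directions."""
    angles = rng.uniform(0, 2 * np.pi, len(step_lengths))
    steps = np.column_stack((step_lengths * np.cos(angles),
                             step_lengths * np.sin(angles)))
    return np.cumsum(steps, axis=0)

rng = np.random.default_rng(0)
n = 5000

# Brownian-like walk: light-tailed step lengths, so almost all steps are short.
brownian = random_walk_2d(np.abs(rng.normal(0, 1, n)), rng)

# Lévy flight: heavy-tailed (Pareto) step lengths -- mostly short steps,
# punctuated by rare, very long jumps.
levy = random_walk_2d(rng.pareto(1.5, n) + 1, rng)

# With the same number of steps, the Lévy flight covers far more ground.
print("Brownian net displacement:", round(float(np.linalg.norm(brownian[-1])), 1))
print("Lévy net displacement:    ", round(float(np.linalg.norm(levy[-1])), 1))
```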

Biology 3: Critically random geological structures give rise to the most diverse ecological systems:

We have built civilization’s cornerstones on amorphous, impermanent stuff. Coasts, rivers, deltas, and earthquake zones are places of dramatic upheaval. Shorelines are constantly being rewritten. Rivers fussily overtop their banks and reroute themselves. With one hand, earthquakes open the earth, and with the other they send it coursing down hillsides. We settled those places for good reason. What makes them attractive is the same thing that makes them dangerous. Periodic disruption and change is the progenitor of diversity, stability, and abundance. Where there is disaster, there is also opportunity. Ecologists call it the “intermediate disturbance hypothesis.”

The intermediate disturbance hypothesis is one answer to an existential ecological question: Why are there so many different types of plants and animals? The term was first coined by Joseph Connell, a professor at UC Santa Barbara, in 1978. Connell studied tropical forests and coral reefs, and during the course of his work, he noticed something peculiar. The places with the highest diversity of species were not the most stable. In fact, the most stable and least disturbed locations had relatively low biodiversity. The same was true of the places that suffered constant upheaval. But there, in the middle, was a level of disturbance that was just right. Not too frequent or too harsh, but also not too sparing or too light. Occasional disturbances that inflict moderate damage are, ecologically speaking, a good thing.

Tim de Chant - Why We Live in Dangerous Places

Computer Science: Evolution explores fitness landscapes in a pseudo-random fashion. We assist our neural networks so that they can do the same.

... backpropagation neural networks cannot always be trusted to find just the right combination of connection weights to make correct classifications because of the possibility of being trapped in local minima. The instruction procedure is one in which error is continually reduced, like walking down a hill, but there is no guarantee that continuous walking downhill will take you to the bottom of the hill. Instead, you may find yourself in a valley or small depression part way down the hill that requires you to climb up again before you can complete your descent. Similarly, if a backpropagation neural network finds itself in a local minimum of error, it may be unable to climb out and find an adequate solution that minimizes the errors of its classifications. In such cases, one may have to start over again with a new set of initial random connection weights, making this a selectionist procedure of blind variation and selection.

Gary Cziko - Without Miracles (Page 255)
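
Here is a toy sketch of the "start over with new random weights" strategy described above, using a one-dimensional error surface with several local minima instead of an actual neural network. (The error function, learning rate, and number of restarts are arbitrary choices for illustration.)

```python
import numpy as np

def error(w):
    # Toy error surface with several local minima (stands in for network error).
    return np.sin(3 * w) + 0.1 * w**2

def grad(w):
    return 3 * np.cos(3 * w) + 0.2 * w

def gradient_descent(w0, lr=0.01, steps=2000):
    """Plain 'walking downhill': it stops in whatever minimum it falls into."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

rng = np.random.default_rng(42)

# Blind variation and selection: restart from random initial points, keep the best.
candidates = [gradient_descent(w0) for w0 in rng.uniform(-5, 5, 20)]
best = min(candidates, key=error)

print("errors of the minima found:", sorted(round(float(error(w)), 3) for w in candidates))
print("best error after restarts: ", round(float(error(best)), 3))
```

Each individual descent is purely deterministic; the blind variation comes entirely from the random initial points.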

Mathematics: Various theorems hint at both the existence and lack of structure in the distribution of primes among the natural numbers.

There are two facts about the distribution of prime numbers of which I hope to convince you so overwhelmingly that they will be permanently engraved in your hearts. The first is that, despite their simple definition and role as the building blocks of the natural numbers, the prime numbers grow like weeds among the natural numbers, seeming to obey no other law than that of chance, and nobody can predict where the next one will sprout. The second fact is even more astonishing, for it states just the opposite: that the prime numbers exhibit stunning regularity, that there are laws governing their behavior, and that they obey these laws with almost military precision.

Don Zagier (Source)

You can of course enumerate all primes using a deterministic algorithm that goes through the positive integers one by one and checks whether each is divisible only by one and itself. But this algorithm does not tell you in advance where the next prime will be. In order to know where the next prime is, you need to know where all the primes are! In other words, you need a deterministic description of the whole set of prime numbers at once. (Being able to list the primes does not help.) So far no such description has been found.

On the other hand, the prime numbers are not a completely random creature carved out of the natural numbers. For instance, we know that as you move farther and farther from zero, primes get sparser at a definite rate. (This is implied by the Prime Number Theorem.) Hence a pick among smaller positive integers is more likely to yield a prime.

(For an illustrative comparison, consider "the set of primes" and "the set of all integers divisible by 3". You can generate each of these two sets using a short algorithm. Hence, viewed as strings, they have similar algorithmic complexities. But the former set is generated at a slower rate. Therefore, if one measures relative complexity by comparing the speeds of the generating algorithms, primes look more complex.)
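
Here is a sketch of the trial-division enumeration mentioned above, together with a crude check of the thinning rate implied by the Prime Number Theorem, namely that the density of primes below n is roughly 1/ln(n):

```python
import math

def is_prime(n):
    """Trial division: n is prime if no integer in [2, sqrt(n)] divides it."""
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

# The algorithm happily lists the primes one by one...
print([n for n in range(2, 50) if is_prime(n)])

# ...but it offers no closed-form prediction of where the next one falls.
# Still, the primes thin out at a definite rate: below n their density is
# roughly 1/ln(n). (For contrast, "integers divisible by 3" keep a constant
# density of 1/3 no matter how far out you go.)
for n in (10**3, 10**4, 10**5):
    density = sum(is_prime(k) for k in range(2, n)) / n
    print(n, round(density, 4), round(1 / math.log(n), 4))
```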

Here is how the first 14,683 primes look on the Ulam spiral:

And here is how 14,683 randomly selected odd numbers look on the Ulam spiral (recall that all primes except 2 are odd):

I leave it to your judgement whether the obvious qualitative differences hint at the existence of a "structured randomness" underlying the first picture.
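
For readers who want to reproduce such pictures, here is one way of computing Ulam-spiral coordinates: place 1 at the origin, wind outward, and mark either the primes or an equally sized random sample of odd numbers. (The cutoff N and the spiral orientation are arbitrary choices for illustration; plotting is left to your favorite library.)

```python
import math
import random

def spiral_coords(n_max):
    """Ulam-spiral coordinates of 1..n_max: 1 at the origin, winding counterclockwise."""
    coords = {}
    x = y = 0
    dx, dy = 1, 0                     # start by moving right
    arm_length, steps_taken, turns = 1, 0, 0
    for n in range(1, n_max + 1):
        coords[n] = (x, y)
        x, y = x + dx, y + dy
        steps_taken += 1
        if steps_taken == arm_length:
            steps_taken = 0
            dx, dy = -dy, dx          # turn left
            turns += 1
            if turns % 2 == 0:        # arm length grows every second turn
                arm_length += 1
    return coords

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

N = 2000
coords = spiral_coords(N)
prime_points = [coords[n] for n in range(2, N + 1) if is_prime(n)]
odd_sample = random.sample(range(3, N + 1, 2), len(prime_points))
random_points = [coords[n] for n in odd_sample]
# Scatter-plot prime_points and random_points (e.g. with matplotlib) to compare
# the diagonal streaks of the first picture with the featureless second one.
```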


CHIMERA STATES

It is nice to have total synchrony. But is it not better to have synchrony and noise coexisting so that the latter can enrich the evolution of the former?

Experimental observation of such states in natural systems, neural or not, would also be extremely informative. As a matter of fact, although chimera states do not need extra structure to exist, they are not destroyed by small disorder either, which certainly strengthens their prospects for real systems. More important, additional structure can lead to a myriad of other possible behaviours, including quasiperiodic chimeras and chimeras that “breathe”, in the sense that coherence in the desynchronized population cycles up and down.

Motter - Spontaneous Synchrony Breaking

This discussion is relevant even for wealth-redistribution debates. Should we let random market forces determine the distributive outcome, or should we intervene and equalize all incomes?

Differences in wealth can seem unfair. But as we increase our understanding of complex systems, we are discovering that diversity and its associated tensions are an essential fuel of the life of these systems. A moderate income disparity (Gini between 0.25 and 0.4) encourages entrepreneurship in the economy—much lower appears to stifle dynamism, but much higher appears to engender a negativity that is not productive. A certain degree of randomness is another necessary ingredient for the vitality of a system. In many sectors, a successful enterprise requires dynamics of increasing returns as well as a good dose of luck, in addition to skill and aptitude. These vital ingredients of diversity and randomness can often seem at odds with ideals of ‘fairness’. On the flip side, too much diversity and randomness elicit calls for regulation to control the excesses.

Kitterer - Die Ausgestaltung der Mittelzuweisungen im Solidarpakt II

Here the "synchronised" region corresponds to the stable population region where there is essentially no movement between income classes. The "de-synchronised" region, on the other hand, corresponds to the unstable population region where individuals move up and down the income classes. The existence of the unstable regions requires a prior "symmetry breakdown". (This, in the previous case, was called "synchrony breakdown".) For instance, upon witnessing the recycling of individuals from rags to riches and from riches to rags, people in the stable region will be more willing to believe in the possibility of a change and more ready to exhibit assertive and innovative behaviour. This will, in turn, cause further "symmetry breakdown".


CHAOS vs RANDOMNESS

Of course, it is tough to prove that an experimental phenomenon is actually governed by a structured random process. From the observed signal alone, it may be impossible to disentangle the deterministic component from the stochastic one. (The words "stochastic" and "random" are synonyms.)

Moreover, deterministic systems can become very chaotic and generate signals that look stochastic. Also, even if the underlying dynamics is totally deterministic, the time series it generates may exhibit some noise due to deficiencies in the experimental set-up.

All methods for distinguishing deterministic and stochastic processes rely on the fact that a deterministic system always evolves in the same way from a given starting point. Thus, given a time series to test for determinism, one can:

1- Pick a test state;
2- Search the time series for a similar or 'nearby' state; and
3- Compare their respective time evolutions.

Define the error as the difference between the time evolution of the 'test' state and the time evolution of the nearby state. A deterministic system will have an error that either remains small (stable, regular solution) or increases exponentially with time (chaos). A stochastic system will have a randomly distributed error. (Source)
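
Here is a bare-bones sketch of this recipe for a one-dimensional time series, using the closest earlier value as the "nearby" state. (Real tests use delay embeddings and average over many test states; the logistic map and the uniform noise below are just illustrative stand-ins for a chaotic and a stochastic signal.)

```python
import numpy as np

def divergence_profile(series, test_idx, horizon=20):
    """1- take a test state, 2- find the most similar earlier state,
    3- track how the two time evolutions separate."""
    series = np.asarray(series, dtype=float)
    past = series[: test_idx - horizon]                      # candidate 'nearby' states
    nearby_idx = int(np.argmin(np.abs(past - series[test_idx])))
    return np.abs(series[test_idx : test_idx + horizon]
                  - series[nearby_idx : nearby_idx + horizon])

# Chaotic but deterministic signal: the logistic map at r = 4.
x = np.empty(1000)
x[0] = 0.4
for t in range(999):
    x[t + 1] = 4 * x[t] * (1 - x[t])

# Purely stochastic signal: i.i.d. uniform noise.
noise = np.random.default_rng(1).uniform(0, 1, 1000)

# For the deterministic series the error starts tiny and grows roughly exponentially;
# for the noise it is small only at the matched point and jumps around right away.
print("logistic:", divergence_profile(x, 900).round(3)[:6])
print("noise:   ", divergence_profile(noise, 900).round(3)[:6])
```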

Of course, the task gets a lot more complicated when there is interaction between the stochastic and deterministic components of the observed phenomenon:

When a non-linear deterministic system is attended by external fluctuations, its trajectories present serious and permanent distortions. Furthermore, the noise is amplified due to the inherent non-linearity and reveals totally new dynamical properties. Statistical tests attempting to separate noise from the deterministic skeleton or inversely isolate the deterministic part risk failure. Things become worse when the deterministic component is a non-linear feedback system. In presence of interactions between nonlinear deterministic components and noise, the resulting nonlinear series can display dynamics that traditional tests for nonlinearity are sometimes not able to capture. (Source)


KNIGHTIAN UNCERTAINTY

Even a purely random process in mathematics comes with structure. Although this is intuitively obvious, many students completely miss it.

Mathematics cannot deal with absolute uncertainty. For instance, in order to define a stochastic process, one needs to know not just the distribution of its possible values at each point in time but also the statistical relationships between these distributions across different time points. Most people on the street would not call such a process "random". Our intuitive notion of randomness is equivalent to "complete lack of structure". From this point of view, the only kind of randomness that mathematics can talk about is "structured randomness".
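
To see how much structure such a definition carries, compare the two Gaussian processes sketched below: both have exactly the same standard-normal distribution at every single time point, yet they are entirely different processes because their relationships across time points differ. (The covariance matrices are arbitrary choices for illustration.)

```python
import numpy as np

rng = np.random.default_rng(7)
T = 5                      # number of time points
mean = np.zeros(T)

# Process A: values at different times are independent.
cov_independent = np.eye(T)

# Process B: values at different times are strongly correlated.
cov_correlated = 0.9 * np.ones((T, T)) + 0.1 * np.eye(T)

a = rng.multivariate_normal(mean, cov_independent, size=10_000)
b = rng.multivariate_normal(mean, cov_correlated, size=10_000)

# Identical marginal behaviour at each time point (standard normal)...
print(a.std(axis=0).round(2), b.std(axis=0).round(2))

# ...but completely different relationships across time points.
print(round(float(np.corrcoef(a[:, 0], a[:, 1])[0, 1]), 2))  # close to 0.0
print(round(float(np.corrcoef(b[:, 0], b[:, 1])[0, 1]), 2))  # close to 0.9
```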

There are knowns, known unknowns and unknown unknowns. Mathematics can only deal with knowns and known unknowns.

What about unknown unknowns? In other words, what if the phenomenon at hand has no structure at all? Then we are clueless. Then we have "chaos" as Dante would have used the term. Scary, but surprisingly common! In fact, economists gave it a special name: Knightian Uncertainty.

Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated.... The essential fact is that 'risk' means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating.... It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all.

Knight - Risk, Uncertainty and Profit (Page 19)

Economists do not like to publish research papers on Knightian Uncertainty because the subject does not lend itself to much analysis.

Paul Samuelson once went so far as to argue that economics must surrender its pretensions to science if it cannot assume the economy is “ergodic”, which is a fancy way of saying that Fortune’s wheel will spin tomorrow much as it did today (and that tomorrow's turn of the wheel is independent of today's). To relax that assumption, Mr Samuelson has argued, is to take the subject “out of the realm of science into the realm of genuine history”.

The scientific pose has great appeal. But this crisis is reminding us again of its intellectual costs. Knightian uncertainty may be fiendishly hard to fathom, but ignoring it, as economists tend to do, makes other phenomena devilishly hard to explain. The thirst for liquidity—the sudden surge in the propensity to hoard—is one example. If risks are calculable, then investors will place their bets and roll the dice. Only if they are incalculable will they try to take their chips off the table altogether, in a desperate scramble for cash (or near-cash). As Keynes put it, “our desire to hold money as a store of wealth is a barometer of the degree of our distrust of our own calculations and conventions concerning the future.”

Olivier Blanchard - (Nearly) Nothing to Fear But Fear Itself


MATHEMATICS ITSELF

What about mathematics itself? In some sense, mathematics is the study of a large set of artificially introduced structured randomnesses. Definitions are made to delineate the objects of study, and these definitions often lie on a subtle middle ground between generality and specificity. You do not want your object to be too general; otherwise you will not be able to make many interesting deductions about it. You do not want your object to be too specific either; otherwise the deduced results will have very little significance in terms of applicability and universality.

Now let's replace the word "specific" with "overly-determined" and "general" with "overly-random".

If your definition is too general, the possibility of a "randomly picked" structure (from the class of all possible mathematical structures) fitting your description is very high. In other words, a highly randomized mechanism will pick up satisfactory objects with a high probability of success. If your definition is too specific, then such a randomized mechanism will fail miserably. In that case, you will need a procedure with a deterministic component that can seek out the few satisfactory structures among the many.

It is tough to make these ideas mathematically precise. For example, what does it mean to say that there are fewer commutative rings than rings? Counting becomes a tricky business when you are dealing with different magnitudes of infinity. Recall Ostrowski's Theorem: "Up to isomorphisms which preserve absolute values, real numbers and complex numbers are the only fields that are complete with respect to an archimedean absolute value." Hence, intuitively, it makes sense to say that there are more "fields" than "fields that are complete with respect to an archimedean absolute value". But again we are dealing with isomorphism classes here. In other words, we are facing collections that are too large to be considered sets! Even if we cap the collection above by restricting ourselves to sets of cardinality less than some K, we still cannot legitimately say that there are "more" fields.

One way to make our use of "more than" mathematically precise is as follows: if all instances of A are also instances of B, then we say there are more B's than A's. This allows us to circumvent the size issues. (This is actually the perspective adopted in Model Theory: a theory T is nested inside a theory S if every model of S is also a model of T.) This approach has a serious drawback though: we cannot compare objects that are not nested inside each other!

"One way to understand a complicated object is to study properties of a random thing which matches it in a few salient features,"
- Persi Diaconis

Diaconis said this in relation to physicists' use of random matrices as models of complex quantum systems. (By the way, the quantum phenomena in question are mysteriously related to the distribution of prime numbers.) Nevertheless, the remark suits our context as well. For instance, in order to understand the integers we study rings, integral domains, etc. The set of integers is an "overly-determined" object, while a ring is a "critically-random" object that captures some important features of the integers. The integers form a ring, but not every ring is the set of integers.