I leave it to your judgement whether the obvious qualitative differences hint at the existence of a "structured randomness" underlying the first picture.
CHIMERA STATES
It is nice to have total synchrony. But is it not better to have synchrony and noise coexisting so that the latter can enrich the evolution of the former?
Experimental observation of such states in natural systems, neural or not, would also be extremely informative. As a matter of fact, although chimera states do not need extra structure to exist, they are not destroyed by small disorder either, which certainly strengthens their prospects for real systems. More important, additional structure can lead to a myriad of other possible behaviours, including quasiperiodic chimeras and chimeras that “breathe”, in the sense that coherence in the desynchronized population cycles up and down.
Motter - Spontaneous Synchrony Breaking
This discussion is relevant even to wealth redistribution debates. Should we let random market forces determine the distributive outcome, or should we interfere and equalize all incomes?
Differences in wealth can seem unfair. But as we increase our understanding of complex systems, we are discovering that diversity and its associated tensions are an essential fuel of the life of these systems. A moderate income disparity (Gini between 0.25 and 0.4) encourages entrepreneurship in the economy—much lower appears to stifle dynamism, but much higher appears to engender a negativity that is not productive. A certain degree of randomness is another necessary ingredient for the vitality of a system. In many sectors, a successful enterprise requires dynamics of increasing returns as well as a good dose of luck, in addition to skill and aptitude. These vital ingredients of diversity and randomness can often seem at odds with ideals of ‘fairness’. On the flip side, too much diversity and randomness elicit calls for regulation to control the excesses.
Kitterer - Die Ausgestaltung der Mittelzuweisungen im Solidarpakt II
Here the "synchronised" region corresponds to the stable population region, where there is essentially no movement between income classes. The "de-synchronised" region, on the other hand, corresponds to the unstable population region, where individuals move up and down the income classes. The existence of the unstable regions requires a prior "symmetry breakdown". (In the previous case, this was called "synchrony breakdown".) For instance, upon witnessing the recycling of individuals from rags to riches and from riches to rags, people in the stable region will be more willing to believe in the possibility of change and more ready to exhibit assertive and innovative behaviour. This, in turn, will cause further "symmetry breakdown".
CHAOS vs RANDOMNESS
Of course it is tough to prove that an experimental phenomenon is actually governed by a structured random process. From the final signal alone, it may be impossible to disentangle the deterministic component from the stochastic one. (The words "stochastic" and "random" are synonyms.)
Moreover, deterministic systems can be strongly chaotic and can generate final signals that look stochastic. And even if the underlying dynamics is completely deterministic, the measured time series may exhibit some noise due to deficiencies in the experimental set-up.
All methods for distinguishing deterministic and stochastic processes rely on the fact that a deterministic system always evolves in the same way from a given starting point. Thus, given a time series to test for determinism, one can:
1- Pick a test state;
2- Search the time series for a similar or 'nearby' state; and
3- Compare their respective time evolutions.
Define the error as the difference between the time evolution of the 'test' state and the time evolution of the nearby state. A deterministic system will have an error that either remains small (stable, regular solution) or increases exponentially with time (chaos). A stochastic system will have a randomly distributed error. (Source)
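As a rough illustration, here is a minimal sketch of this recipe in Python. The logistic map, the uniform-noise comparison series, and all names and parameters are my own choices for the example, not part of the quoted source:

```python
import numpy as np

def error_growth(series, test_idx, horizon=20):
    """Steps 1-3 above: take the state at test_idx, find the most
    similar state elsewhere in the series, and track the difference
    between the two time evolutions over the next `horizon` steps."""
    n = len(series)
    # Candidate states: far enough from the test index to have a
    # different history, with room left for the comparison window.
    candidates = [i for i in range(n - horizon)
                  if abs(i - test_idx) > horizon]
    nearest = min(candidates, key=lambda i: abs(series[i] - series[test_idx]))
    return np.abs(series[test_idx:test_idx + horizon]
                  - series[nearest:nearest + horizon])

# Deterministic but chaotic: the logistic map x -> 4x(1 - x).
x = np.empty(10_000)
x[0] = 0.123
for t in range(len(x) - 1):
    x[t + 1] = 4 * x[t] * (1 - x[t])

# Purely stochastic: i.i.d. uniform noise on the same range.
noise = np.random.default_rng(0).uniform(0, 1, 10_000)

print(error_growth(x, 500))      # tiny at first, then roughly doubles each step
print(error_growth(noise, 500))  # tiny only at step zero, then large and patternless
```

For the chaotic series the error grows exponentially until it saturates (the logistic map's Lyapunov exponent is ln 2, hence the doubling); for the noise it carries no pattern at all, which is exactly the signature the quoted test looks for.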
Of course, the task gets a lot more complicated when there is interaction between the stochastic and deterministic components of the observed phenomenon:
When a non-linear deterministic system is attended by external fluctuations, its trajectories present serious and permanent distortions. Furthermore, the noise is amplified due to the inherent non-linearity and reveals totally new dynamical properties. Statistical tests attempting to separate noise from the deterministic skeleton or inversely isolate the deterministic part risk failure. Things become worse when the deterministic component is a non-linear feedback system. In presence of interactions between nonlinear deterministic components and noise, the resulting nonlinear series can display dynamics that traditional tests for nonlinearity are sometimes not able to capture. (Source)
KNIGHTIAN UNCERTAINTY
Even a purely random process in mathematics comes with structure. Although this is intuitively obvious once stated, many students completely miss it.
Mathematics cannot deal with absolute uncertainty. For instance, in order to define a stochastic process, one needs to specify not just the distribution of its possible values at each point in time, but also the joint distributions over every finite collection of time points. Most people on the street would not call such a process "random". Our intuitive notion of randomness amounts to "complete lack of structure". From this point of view, the only kind of randomness that mathematics can talk about is "structured randomness".
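To see the structure explicitly, here is the standard formulation (my gloss, via the Kolmogorov extension theorem, rather than anything spelled out above): a stochastic process $(X_t)_{t \in T}$ is specified by its finite-dimensional distributions

$$\mu_{t_1,\dots,t_n}(A_1 \times \cdots \times A_n) \;=\; \mathbb{P}\bigl(X_{t_1} \in A_1,\, \dots,\, X_{t_n} \in A_n\bigr),$$

one for every finite collection of time points $t_1 < \dots < t_n$, and any consistent family of such measures determines the process. That family of joint distributions is precisely the "structure" the process carries.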
There are knowns, known unknowns and unknown unknowns. Mathematics can only deal with knowns and known unknowns.
What about unknown unknowns? In other words, what if the phenomenon at hand has no structure at all? Then we are clueless. Then we have "chaos" as Dante would have used the term. Scary, but surprisingly common! In fact, economists gave it a special name: Knightian Uncertainty.
Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated.... The essential fact is that 'risk' means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating.... It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all.
Knight - Risk, Uncertainty and Profit (Page 19)
Economists do not like to publish research papers on Knightian Uncertainty because the subject does not lend itself to much analysis.
Paul Samuelson once went so far as to argue that economics must surrender its pretensions to science if it cannot assume the economy is “ergodic”, which is a fancy way of saying that Fortune’s wheel will spin tomorrow much as it did today (and that tomorrow's turn of the wheel is independent of today's). To relax that assumption, Mr Samuelson has argued, is to take the subject “out of the realm of science into the realm of genuine history”.
The scientific pose has great appeal. But this crisis is reminding us again of its intellectual costs. Knightian uncertainty may be fiendishly hard to fathom, but ignoring it, as economists tend to do, makes other phenomena devilishly hard to explain. The thirst for liquidity—the sudden surge in the propensity to hoard—is one example. If risks are calculable, then investors will place their bets and roll the dice. Only if they are incalculable will they try to take their chips off the table altogether, in a desperate scramble for cash (or near-cash). As Keynes put it, “our desire to hold money as a store of wealth is a barometer of the degree of our distrust of our own calculations and conventions concerning the future.”
Olivier Blanchard - (Nearly) Nothing to Fear But Fear Itself
MATHEMATICS ITSELF
What about mathematics itself? In some sense, mathematics is the study of a large collection of artificially introduced structured randomnesses. Definitions are made to delineate the objects of study, and these definitions often lie on a subtle middle ground between generality and specificity. You do not want your object to be too general; otherwise you will not be able to make many interesting deductions about it. You do not want your object to be too specific either; otherwise the deduced results will have little significance in terms of applicability and universality.
Now let's replace the word "specific" with "overly-determined" and the word "general" with "overly-random".
If your definition is too general, the probability that a "randomly picked" structure (from the class of all possible mathematical structures) fits your description is very high. In other words, a highly randomized mechanism will turn up satisfactory objects with a high rate of success. If your definition is too specific, such a randomized mechanism will fail miserably. In that case, you will need a procedure with a deterministic component that can seek out the few satisfactory structures among the many.
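Here is a toy numerical experiment along these lines; the choice of properties, the four-element carrier set, and all names are my own illustration, not a claim about any particular mathematical practice:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(42)
N = 4  # random binary operations ("multiplication tables") on a 4-element set

def loose(op):
    # A general requirement: op has at least one idempotent element.
    return any(op[x, x] == x for x in range(N))

def strict(op):
    # A specific requirement: op is associative, i.e. defines a semigroup.
    return all(op[op[a, b], c] == op[a, op[b, c]]
               for a, b, c in product(range(N), repeat=3))

trials = 20_000
tables = rng.integers(0, N, size=(trials, N, N))
print(sum(loose(t) for t in tables) / trials)   # about 0.68: blind sampling succeeds
print(sum(strict(t) for t in tables) / trials)  # about 0.0: blind sampling fails
```

Random sampling constantly finds objects satisfying the loose property, but essentially never stumbles on an associative table; for those you need a deterministic construction. This is the generality/specificity trade-off in miniature.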
It is tough to make these ideas mathematically precise. For example, what does it mean to say that there are fewer commutative rings than rings? Counting becomes a tricky business when you are dealing with different magnitudes of infinity. Recall Ostrowski's Theorem: "Up to isomorphisms which preserve absolute values, the real numbers and the complex numbers are the only fields that are complete with respect to an archimedean absolute value." Hence it intuitively makes sense to say that there are more "fields" than "fields that are complete with respect to an archimedean absolute value". But we are dealing with isomorphism classes here. In other words, we are facing collections that are too large to be considered sets! Even if we cap the collection by restricting ourselves to sets of cardinality less than some fixed K, we still cannot legitimately say that there are "more" fields.
One way to make our use of "more than" mathematically precise is as follows: if all instances of A are also instances of B, then we say there are more B's than A's. This allows us to circumvent the size issues. (This is actually the perspective adopted in Model Theory: Theory T is nested inside Theory S if every model of S is also a model of T.) This approach has a serious drawback, though: we cannot compare objects that are not nested inside each other!
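In symbols (my notation; the definition above is stated only in words): writing $\mathrm{Mod}(T)$ for the class of models of a theory $T$,

$$T \text{ is nested inside } S \iff \mathrm{Mod}(S) \subseteq \mathrm{Mod}(T).$$

For instance, every model of the theory of commutative rings is a model of the theory of rings, so in this sense there are "more" rings than commutative rings, and no cardinality comparison is ever needed.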
"One way to understand a complicated object is to study properties of a random thing which matches it in a few salient features,"
- Persi Diaconis
Diaconis said this in relation to physicists' use of random matrices as models of complex quantum systems. (By the way, the quantum phenomena referred to are mysteriously related to the distribution of the prime numbers.) Nevertheless, the remark suits our context as well. For instance, in order to understand the integers we study rings, integral domains, and so on. The set of integers is an "overly-determined" object, while a ring is a "critically-random" object that satisfies some important features of the integers. The integers form a ring, but not every ring is the set of integers.
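To make the random-matrix remark concrete, here is a minimal sketch (my own illustration; Diaconis's quote does not come with code): sample a large Hermitian matrix from the Gaussian Unitary Ensemble, the "random thing" whose eigenvalue spacings famously match the statistics of complex quantum spectra, and of the zeros of the Riemann zeta function.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# A GUE-style random Hermitian matrix: symmetrize a complex Gaussian matrix.
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (M + M.conj().T) / 2

eig = np.linalg.eigvalsh(A)          # real eigenvalues, in ascending order
bulk = eig[n // 4 : 3 * n // 4]      # stay in the bulk of the spectrum
s = np.diff(bulk)
s /= s.mean()                        # normalize spacings to mean 1

# Wigner's surmise for the GUE predicts the spacing density
# p(s) = (32/pi^2) s^2 exp(-4 s^2 / pi), whose second moment is 3*pi/8 ~ 1.18.
print(s.mean(), (s ** 2).mean())     # mean 1.0 by construction; second moment near 1.18
```

The point is Diaconis's: nobody claims a quantum system or the zeta zeros *are* a random matrix, but studying the random object that shares their salient features (here, the spacing statistics) teaches us about the complicated one.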