Ontological Parsimony

First Remark

Since primary school we have been told that there are sets and there are points. In mathematics there is actually no such separation: in axiomatic (Zermelo-Fraenkel) set theory there is only one type of entity, namely sets.

By s∈S mathematicians actually mean that "s" is a set that belongs to the set "S". The axiom of foundation outlaws the existence of any downward infinite membership chains. This eliminates pathological cases where a set sits inside a set sits inside a set sits inside a set... ad infinitum.
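In standard notation the axiom of foundation (also called the axiom of regularity) says that every nonempty set has an element disjoint from it. One common formulation (there are equivalent variants) is:

∀x (x ≠ ∅ → ∃y (y ∈ x ∧ y ∩ x = ∅))

In particular no set can be a member of itself, and no chain ... ∈ x₃ ∈ x₂ ∈ x₁ can descend forever, since the set {x₁, x₂, x₃, ...} would then have no member disjoint from it.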

Of course you did not have to worry about such technical issues during primary school. The elements of sets were points, which were drawn on the blackboard as simple dots. Sets could be subsets of other sets, but they were not supposed to be elements of other sets. Since points obviously did not contain anything, the formation of downward infinite membership chains was impossible.

A set is an almost propertiless entity. The only property it has is its ability to belong to another set. Hence, from the perspective of bundle theory, axiomatic set theory has one of the simplest ontological landscapes possible (i.e. a landscape with only one allowed property):

Bundle theory, originated by the 18th century Scottish philosopher David Hume, is the ontological theory about objecthood in which an object consists only of a collection (bundle) of properties, relations or tropes. According to bundle theory, an object consists of its properties and nothing more: thus neither can there be an object without properties nor can one even conceive of such an object; for example, bundle theory claims that thinking of an apple compels one also to think of its color, its shape, the fact that it is a kind of fruit, its cells, its taste, or at least one other of its properties. Thus, the theory asserts that the apple is no more than the collection of its properties.


Second Remark

In its attempt at axiomatization, set theory can never precisely capture what it is intended to capture. Even the most widely accepted axiomatization of set theory (namely Zermelo-Fraenkel set theory) lends itself to non-standard interpretations. This fact is frequently exploited by set theorists through a technique called forcing:

Many formal independence proofs consist in the construction of models which we recognize to be different from the intended notion. It is a fact of experience that one can be honest about such matters! When we are shown a "non-standard" model we can honestly say that it was not intended... If it so happens that the intended notion is not formally definable this may be a useful thing to know about the notion, but it does not cast doubt on its objectivity.

- Georg Kreisel (quoted in Tool and Object, p. 14)


(Here "intended notion" is the intuition-based, informal set theory.)

For example, showing that a statement P is undecidable within ZF can be done by constructing two models of ZF, one in which P holds and one in which P fails. (A model of ZF, by definition, satisfies the axioms of ZF. Hence, if P or its negation ~P could be derived from ZF, then one of these two models could not exist. In other words, the existence of the two models implies the undecidability of P within ZF.) These constructions sometimes turn out to be quite exotic, in no way resembling what set theory intuitively should be like.
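The best-known instance is the continuum hypothesis (CH): Gödel's constructible universe L is a model of ZF in which CH holds, while Cohen's forcing construction produces a model in which CH fails. Schematically, from M₁ ⊨ ZF + CH and M₂ ⊨ ZF + ¬CH it follows that ZF proves neither CH nor ¬CH.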

The same situation holds for the first-order axiomatization of arithmetic. This axiomatization (which consists of the Peano axioms with the induction axiom weakened to a first-order schema) lends itself to non-standard interpretations as well. In other words, there are models that satisfy the axioms but do not at all look like the natural numbers.
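A sketch of the usual compactness argument makes this concrete. Add a new constant symbol c to the language of arithmetic and consider the theory PA ∪ {c > 0, c > 1, c > 2, ...}. Every finite subset of this theory is satisfied by the standard natural numbers (interpret c as a large enough number), so by the compactness theorem the whole theory has a model. That model satisfies all the axioms of PA, yet it contains an element larger than every standard natural number: a non-standard number.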

[Hilbert] thinks of a formal theory as construed up to isomorphism of its interpretations. But what guarantees that a given theory like that of Grundlagen is indeed formal in this sense, i.e. that all its models are in fact isomorphic? Categoricity is commonly viewed as a desired property of formal theories. However, in the 20th century people have learnt to be tolerant to the lack of categoricity. For Zermelo-Fraenkel axiomatic set theory (ZF), Peano Arithmetic (PA) and some other theories commonly viewed as important turned out to be non-categorical. To preclude the right of these theories to be qualified as formal on this ground would apparently mean to go too far. To save the situation philosophers invented the notion of "intended model", that is, of a model chosen among others on an intuitive basis. Isn't this ironic that such a blunt appeal to intuition is made in the core of the formal axiomatic method? I agree with F. Davey who recently argued that "no-one has ever been able to explain exactly what they mean by intended model".

- Andrei Rodin, On Categorical Theory-Building


Occam's Razor

Definition of Occam's Razor: "When competing hypotheses are equal in other respects, the principle recommends selection of the hypothesis that introduces the fewest assumptions and postulates the fewest entities while still sufficiently answering the question." (Source)

Here is Bell speaking on the foundations of quantum mechanics:

"In physics the only observations we must consider are position observations, if only the positions of instrument pointers. It is a great merit of the de Broglie-Bohm picture to force us to consider this fact. If you make axioms, rather than definitions and theorems, about the "measurement" of anything else, then you commit redundancy and risk inconsistency." (Source)

Parsimony is an important guiding principle for science. It increases the clarity of presentation and decreases the chances of inconsistency. Nevertheless, there are strong grounds for not unconditionally embracing this principle:

"In the scientific method, parsimony is an epistemological, metaphysical or heuristic preference, not an irrefutable principle of logic, and certainly not a scientific result... As a logical principle, Occam's razor would demand that scientists accept the simplest possible theoretical explanation for existing data. However, science has shown repeatedly that future data often supports more complex theories than existing data. Science tends to prefer the simplest explanation that is consistent with the data available at a given time, but history shows that these simplest explanations often yield to complexities as new data become available. Science is open to the possibility that future experiments might support more complex theories than demanded by current data and is more interested in designing experiments to discriminate between competing theories than favoring one theory over another based merely on philosophical principles... It has been suggested that Occam’s razor is a widely accepted example of extraevidential consideration, even though it is entirely a metaphysical assumption. There is little empirical evidence that the world is actually simple or that simple accounts are more likely than complex ones to be true."(Source)

Occam's Razor has been very fruitful for the development of science. Nevertheless, scientists are cautious with it. What about mathematicians?

Set theorists have employed Occam's Razor zealously. (Recall the above remarks about their attempts to boil everything down to the smallest number of axioms, and their absolute commitment to ontological parsimony.) But while doing so, they have missed the main goal, which was to establish a conceptually enlightening foundation for mathematics. Instead they have ended up slicing mathematics into an unrecognisable mess. And in the hope of economizing on some irrelevant parameters, they have introduced unnatural concepts such as the "intended model" and the "axiom of foundation".

Mathematicians desire conceptual parsimony. Moreover, unlike physicists, they can afford to sacrifice ontological parsimony because they are bound only by the limits of their own imagination. And the human mind has proved itself capable of handling high degrees of ontological diversity without any problems.