Monday, November 21, 2022

Chapter 18
Chasing Schrödinger's cat

This chapter added in November 2022
Every now and then, I review an old book. This review, written in 2022, concerns a title from 2019, which by my standards is amazingly recent.

Einstein's Unfinished Revolution -- The Search for What Lies Beyond the Quantum (Penguin Random House 2019).

In Unfinished, the physicist Lee Smolin summarizes the fruit of his lifetime quest to undergird quantum theories with a "complete" physical theory that ties together the continuous world of general relativity with the jerky world of quantum mechanics.

Like two of his scientific beacons, Julian Barbour and Roger Penrose, Smolin is quite the iconoclast, having written incisive science critiques that are accessible to the intelligent lay person and which yet hold value for scientist colleagues. For example, in The Trouble with Physics, Smolin tackles the almost monolithic acceptance of string theory among theoretical physicists -- a skepticism shared by Barbour and Penrose. Other stimulating books are his Time Reborn -- From the Crisis in Physics to the Future of the Universe (Houghton Mifflin Harcourt 2013), Three Roads to Quantum Gravity (Basic Books 2001), and The Life of the Cosmos (Oxford 1997, revised edition 1999).

Unfinished's title refers to the fact that it was Albert Einstein, perhaps more so than Max Planck, who kick-started the quantum revolution with his 1905 paper on the photo-electric effect, which was well explained by taking Planck's notion of energy quantum (indivisible smallest unit) "literally." Twenty-five years later Einstein could not accept that the new quantum theory sufficed as a description of atomic and subatomic events because his philosophy of what is sometimes called "naive realism" was offended. He and Niels Bohr argued for decades about this topic.

Eventually, in the early 1980s, experiments by Alain Aspect established that the principle of locality (no "spooky action at a distance") fails: what is now called entanglement holds for certain pairs of particles, whose measured correlations no purely local account can reproduce.

Smolin, I think, does a great service when he takes time to go through several of the "interpretations" of quantum mechanics. At least at first glance, the interpretations tend to be interchangeable: one explanation seems as good as another because of the difficulty of testing them. I'm at least somewhat familiar with several of the interpretations he discusses, and yet I learned something new from each discussion. The acuity of his understanding is unmissable.

Smolin, though sympathetic to Einstein, faces the fact that ultra-naive realism is violated by entanglement. Yet he cannot abide the Bohr school (or Copenhagen Interpretation) of "anti-realists," and puts himself squarely within the camp of what I call "anti-idealists" -- those who reject John Wheeler's idea that the human brain/mind is looped into the universe in a "spooky" way. Einstein once panned this idealist thinking with the question, "Do you really think the moon isn't there when you aren't looking?"

In fact, Erwin Schrödinger dreamt up his notorious cat thought experiment with just that criticism in mind. According to Bohr, the macro-sized measuring device, in the process of intercepting and amplifying a signal so that a human can detect it, in effect "selects" quantum information that previously had been there simultaneously with other possibly correct information. Well then, said Schrödinger, instead of a tick of a Geiger counter registering the detection, why not let a cat's state of being alive or dead serve as the detection event? That is, if the counter detects an emission from a radioactive isotope within 30 seconds, say, the counter's signal is linked to a poison gas container, which opens if the counter goes off.

But this contraption is concealed by a soundproof box. Human observation does not occur until someone opens the box. Does not the Bohr view imply that the cat was both alive and dead until the box was opened? Remember, according to quantum theory, all that can be said is that the isotope's half-life predicts that a particle could be emitted within the 30 seconds with some probability. On the standard reading of the quantum formalism, the particle has both been emitted and not been emitted -- a superposition -- until it is detected.

Smolin has endeavored to sever the observer from the process, as we used to do in good old Newtonian mechanics. He does so by making time a fundamental reality, with space as a derived delusion(?). The cosmos is made up of Leibnizian-type nads (from Leibniz's monads) which relate to each other in a non-spatial way. Yet Smolin concedes his conceptualization is probably wrong, on the grounds that most new ideas are. Plus, I would say he's very persuasive in his devil's advocate role when he questions various anti-idealist notions. For example, on David Bohm's pilot wave idea, Smolin points out that it is disturbing that the wave acts on the particle, but not the converse. That immediately raises suspicion. Where is the reaction one expects from energy conservation?

The fact that, in the quantum regime, energy conservation is a statistical matter does not help much, as not even a statistical approach answers that brow-furrower.

Richard Feynman, Smolin says, told him on more than one occasion that his approaches were "not crazy enough" to have much chance of being right. And Smolin agrees that Feynman was right -- about his early work. Well, I cannot hold a candle to either Feynman or Smolin, and yet I venture to add my two-cents' worth here.

The trouble with anti-idealism is that its practitioners prefer an engineering view of physics. I don't mean that as some sort of insult. What I mean is that they want, as much as possible, clean calculations. They want Newton-type simplifications that will serve, and that seem to do the job in virtually all cases. And that's a very good aim. But that goal isn't all there is to science.

What is desired by the anti-idealist (who wishes to cut out or down the observer) is linearity. Our routine methods of calculation tend to be linear. In particular, we don't like the notion that the observer becomes so entangled with the machine that one cannot tell where one ends and the other begins. Thus many a scientist wants the physical world "out there" to act independently of the observer.

Yes, of course it is well known that each person's brain has a great deal to do with formation of "subjective reality," but "objective reality" is held to exist as a matter of faith, a faith that would be buttressed if a Smolin-type theory could be tested (which he thinks is possible). Yet I would point out that most of physics is best described by nonlinear differential equations, which in general give us feedback loops, both negative and positive. Mathematical chaos gives examples of positive feedback loops. The asymmetric three-body problem yields mostly chaotic solutions.
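To see how quickly nonlinear feedback defeats tidy prediction, consider the logistic map -- a standard toy example of my choosing, not one Smolin uses:

# Logistic map x -> r*x*(1-x): a one-line nonlinear positive feedback loop.
# Two starting points agreeing to nine decimal places soon diverge entirely.
def orbit(x, r=4.0, steps=40):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print(orbit(0.300000000), orbit(0.300000001))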

Einstein's general theory of relativity has space, time and gravity all interacting in a nonlinear way, though of course linear approximations are available. Even his earlier special theory had problems of nonlinearity. Whether an observer gets electrocuted or not may depend on his angular velocity with respect to an electromagnetic field. Then there is the case of the oft-misunderstood twins paradox.

My point is: why should the situation that obtains between the observer and the physical world be miraculously linear? It would seem that some sort of idealist solution is far more probable. Yes, you say, but decoherence saves Schrödinger's cat from having a history that is partly due to my or your mind. Yet why should there be a law that says alternate "real" histories are impossible? After all,

Does anyone really know what time is?

Monday, October 25, 2021

Chapter 17
Four colors suffice (proposed short proof)

Terms [1]:
A region is a submap, as in R ⊆ M.
A country, as in C ∈ M and possibly C ∈ Ri, is an atomic element of a map. The underline indicates a whole map and not a proper submap.
A graph represents a map with nodes as countries and links as borders.

Graph forms:
The unique graph of 2 regions with a common border is two nodes connected by one link.
The unique graph of 3 mutually bordering regions is a triangle.
The unique graph of 4 mutually bordering regions is a triangle subdivided into 3 triangles.
Basic forms
Notation:
kM2 is a map of 2 regions with a common border and its associated graph is called kG2. The k represents the kth level of expansion (see below).

kM3 and kM4 follow suit.

kMn = ∅ for n > 4

Preliminary remarks:
We are uninterested in any hanging chain, which is defined as a region composed of M2's, possibly linked to a kM3 or kM4 region.

It is trivial that a proof for a map without the chain suffices for one with it.

We reject pretzel holes as illegal. One would probably not be inclined to draw such a map, but one might perchance wonder about graphs which lack interior links, as follows:
Illegal constructions
It is understood that a maximally complex map of 4^n countries may be subdivided into an initial R4 in more than one way. More than one carving up may also be possible for lower level R4's. These possibilities will make no difference to our proof.

Proof:
We wish to work in a fractal-like fashion, subdividing our map into a Level 1 R4. We then expand each node, and subdivide again, so that at Level 2 we have 4^2, or 16, nodes.

If we have proved our case for any maximally complex map, we have proved our case in general. That is, any non-trivial map must be a submap of some 4^n map. If a paint scheme is proved for the map, then it is proved for any legal submap, since every remaining node can keep its original color.

If a map has some M3's, each is represented by a triangle with no middle node. Hence, we can simply erase a node in a G4. The case of an M2 does not occur since it implies either an external chain or an illegal pretzel hole.

Consider the graph of an R4, where each node represents a region. We call this Level 1.
A graph of a map of 4^n countries that has been initially subdivided into a 1R4
For a maximally complex map of 16 countries, we expand the graph thus
Level 2 graph: Each shaded triangle represents a 2R4 graph.
The dotted lines tell us the relevant nodes can be connected by one or two links. The symmetry tells us we don't need to check any other interior linkage possibilities. By proving for the case of all dotted line links, we prove for any legal case of erased dotted links.

A paint scheme, with colors A,B,C,D, is shown.
Painted Level 2 graph
We now step down to Level 3, focusing on one corner triangle from Level 2. We retain the paint scheme for the three corner nodes that have been "shifted" (mentally) from Level 2. That these colors are held constant is important.

The Level 2 interior node that had been painted D is now redrawn as a 3M4. The color D is "pushed down" to the center node of each of the relevant triangles (those nodes are not pictured here).
One of the Level 2 triangles expanded into a Level 3 triangle. Note that the colorization remains the same between levels for the rim nodes.
By holding the corner nodes constant -- as shown -- we can repeat the colorization algorithm down through all levels, ad infinitum. In our algorithm, we always push D to the center nodes of the bottom level triangles.
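As a sanity check on the bookkeeping, here is a minimal sketch (my illustration, not part of the proof) of what it means for a paint scheme to be proper -- no two linked nodes share a color:

def is_proper(links, color):
    # a scheme is proper if no two linked nodes (bordering countries) share a color
    return all(color[a] != color[b] for a, b in links)

# Level 1: an R4 graph is four mutually linked nodes
r4_links = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(is_proper(r4_links, {0: 'A', 1: 'B', 2: 'C', 3: 'D'}))  # True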

Done.
[1] Not all this nomenclature is necessary for this paper, but it is helpful in keeping concepts distinct.

Sunday, March 7, 2021

Chapter 15
Time thought experiments


This review was written more than a decade ago and I daresay
that some of my thinking on this topic has changed.

Goedel's theorem and a time travel paradox
In How to Build a Time Machine (Viking 2001)††, the physicist Paul Davies gives the 'most baffling of all time paradoxes.' Writes Davies:
A professor builds a time machine in 2005 and decides to go forward ... to 2010. When he arrives, he seeks out the university library and browses through the current journals. In the mathematics section he notices a splendid new theorem and jots down the details. Then he returns to 2005, summons a clever student, and outlines the theorem. The student goes away, tidies up the argument, writes a paper, and publishes it in a mathematics journal. It was, of course, in this very journal that the professor read the paper in 2010.
Davies finds that, from a physics standpoint, such a 'self-consistent causal loop' is possible, but, 'where exactly did the theorem come from?... it's as if the information about the theorem just came out of thin air.'

Davies says many worlds proponent David Deutsch, author of The Fabric of Reality‡ and a time travel 'expert,' finds this paradox exceptionally disturbing, since information appears from nowhere, in apparent violation of the principle of entropy.

This paradox seems well suited to Goedel's main incompleteness theorem, which says that a sufficiently rich formal system, if consistent, must be incomplete.

Suppose we assume that there is a formal system T -- a theory of physics -- in which a sentence S can be constructed describing the mentioned time travel paradox.

If S strikes us as paradoxical, then we may regard S as the Goedel sentence of T. Assuming that T is a consistent theory, we would then require that some extension of T -- call it T' -- be constructed. An extension might, for example, say that the theorem's origin is relative to the observer and include a censorship, as occurs in other light-related phenomena. That is, the professor might be required to forget where he got the ideas to feed his student.

But, even if S is made consistent, there must then be some other sentence S', which is not derivable from T'.

Of course, if T incorporates the many worlds view, S would likely be consistent and derivable from T. However, assuming T is a sufficiently rich mathematical formalism, there must still be some other sentence V that may be viewed as paradoxical (inconsistent) if T is viewed as airtight.

How old is a black hole?
Certainly less than the age of the cosmos, you say.

The black hole relativistic time problem illustrates that the age of the cosmos is determined by the yardstick used.

Suppose we posit a pulsar pulsing at rate T, at distance D from the event horizon of a black hole. Our clock is timed to strike at T/2, so that pulse A has occurred at T=0. We now move the pulsar closer to the event horizon, again with our clock striking at what we'll call T'/2. Now, because of the gravitational effect on observed time, the time between pulses is longer. That is, T' > T, and hence T'=0 is farther in the past than T=0.

Of course, as we push the pulsar closer to the event horizon, the relative time T' becomes asymptotic to infinity (eternity). So, supposing the universe was born 15 billion years ago in the big bang, we can push our pulsar's pulse A back in time beyond 15 billion years ago by pushing the pulsar closer to the event horizon.
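A rough numerical sketch of the effect being invoked, assuming the textbook Schwarzschild dilation factor for a clock hovering at radius r (here in units of the horizon radius):

import math

def far_period(r):
    # period of one pulse as seen far away, per unit of proper period at radius r
    return 1.0 / math.sqrt(1.0 - 1.0 / r)

for r in [10, 2, 1.1, 1.01, 1.001]:
    print(r, far_period(r))
# the observed period grows without bound as r approaches the horizon (r = 1)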

No matter how old we make the universe, we may always obtain a pulse A that is older than the cosmos.

Yes, you say, but a real pulsar would be ripped to shreds and such a happening is not observable. Nevertheless, the general theory of relativity requires that we grant that time calculations can yield such contradictions.

Anthropic issues
A sense of awe often accompanies the observation: 'The conditions for human (or any) life are vastly improbable in the cosmic scheme of things.'

This leads some to assert that the many worlds scenario answers that striking improbability, since in most other universes, life never arose and never will.

I point out that the capacity of the human mind to examine the cosmos is perhaps 2.5 x 10^4 years old, against a cosmic time scale of 1.5 x 10^10 years. In other words, we have a ratio of 2.5(10^4)/1.5(10^10), or about 1.7/10^6.
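The arithmetic, spelled out:

# humanity's span of scientific awareness over the age of the cosmos
print(2.5e4 / 1.5e10)   # about 1.7 x 10^-6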

In other words, humanity is an almost invisible drop in the vast sea of cosmic events.

Yet here we are! Isn't that amazing?! It seems as though the cosmos conspired to make our little culture just for us, so we could contemplate its vast mysteries.

However, there is the problem of the constants of nature. Even slight differences in these constants would, it seems, lead to universes where complexity just doesn't happen. Suppose that these constants depend on initial cosmic conditions which have a built-in random variability. In that case, the existence of a universe with just the right constants for life (in particular, humanity) to evolve is nothing short of miraculously improbable. Some hope a grand unified theory will resolve the issue. Others suggest that there is a host of bubble universes, most of which are not conducive to complexity, and hence the issue of improbability is removed (after all, we wouldn't be in one of the barren cosmoses). For more on this issue, see the physicist-writers John Barrow, Frank Tipler and Paul Davies.† †‡

At any rate, it doesn't seem likely that this drop will last long, in terms of cosmic scales, and the same holds for other such tiny drops elsewhere in the cosmos.

Even granting faster-than-light 'tachyon radio,' the probability is very low that an alien civilization exists within communications range of our ephemeral race. That is, the chance of two such drops existing 'simultaneously' is rather low, despite the fond hopes of the SETI crowd.

On the other hand, Tipler favors the idea that once intelligent life has evolved, it will find the means to continue on forever.‡‡

Anyway, anthropomorphism does seem to enter into the picture when we consider quantum phenomena: a person's physical reality is influenced by his or her choices.

† Barrow, John D.; Tipler, Frank J. The Anthropic Cosmological Principle, 1986 (revised 1988), Oxford University Press. ISBN 978-0-19-282147-8. LCCN 87028148. Not an easy read for a non-scientist but an important trailblazer book.
‡ Deutsch, David. The Fabric of Reality -- The Science of Parallel Universes and Its Implications, 1997, Viking. ISBN 978-0713990614.
†† Davies, Paul. How to Build a Time Machine, 2002, Penguin Books. ISBN 0-14-100534-3.
†‡ Davies, Paul. The Goldilocks Enigma, also entitled Cosmic Jackpot, 2007, Houghton Mifflin Harcourt. ISBN 0-14-102326-0.
‡‡ Also see
  • Ward, Peter D.; Brownlee, Donald. Rare Earth, 2003, Copernicus.

  • Rees, Martin. Just Six Numbers -- The Deep Forces That Shape the Universe, 2001, Basic Books.

  • Tegmark, Max. Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, 2014, Knopf.

Saturday, March 6, 2021

Chapter 14
Plato and Cantor v. Wittgenstein and Brouwer

Prove all things. Hold fast to that which is good.
                 --I Thes 5:21

This essay was begun in January 2002. Correction added Aug. 13, 2004, to include an inadvertently omitted "undecidable of the third kind." As the bulk of this essay was written nearly two decades ago, it does not necessarily represent my current thinking.
Without going into an extensive examination of phenomenology and the psychology of learning, perception and cognition, let us consider the mind of a child.

Think of Mommy controlling a pile of lollipops and crayons, some of which are red. In this game, the child is encouraged to pick out the red objects and transfer them to 'his' pile.

The child employs a mental act of separation (some might call this 'intuition') to select out an item, in this case by direct awareness of the properties of redness and of ease of holding with his hands. This primal separation ability is necessary for the intuition of replication. Crayon and lollipop are 'the same' on account of redness. In turn, this intuition of replication, or iteration, requires a time sense, whereby if the child hears 'more' he associates the word with an expectation of a craving being satisfied ('more milk').

The child becomes able to associate name-numbers with iteration, such that 'one thing more for me' becomes 'one thing,' which in time is abstracted to 'one.' A sequence of pulses is not truly iterative, because there is no procedure for enumeration. The enumeration procedure is essentially a successor function, with names (integers) associated with each act of selection by replication intuition. Likewise, we must have amorphous 'piles' before we can have sets. As adults we know that the 'mine' pile and the 'Mommy' pile have specific, finite numbers of elements. But we cannot discern the logico-mathematical objects of set and element without first having a concept of counting.

That is, in the minds of small children and of adults of primitive cultures, integers are associated with intuitively replicable material objects, such as apples and oranges. But the names are so useful that it is possible to mentally drop the associated objects in a process of abstraction. That is, we might consider an integer to be quite similar in spirit to an 'operator.' Whatever objects are associated with operators, a common set of rules of manipulation applies to the operators alone.

Platonism vs. intuitionism
Cantor's acceptance of 'actual infinities' seems to me to require a platonic concept of ideals: forms or formalisms that count as existing a priori.

The intuitionists, led by Brouwer and partially supported by Wittgenstein (and Kronecker before them), would object to a set or, possibly, a number that 'cannot be constructed.' Related to this division is the dispute as to whether a theorem or mathematical form is discovered, as the platonists see it, or invented, as the intuitionists see it.

I don't intend to inspect every wrinkle of these controversies but rather to focus on the concept of existence of mathematical statements and forms.

Forthwith, let's dismiss the concept of 'potential infinity' that was in vogue in the 19th century as a means of describing a successor operation. 'Potential' invokes the thought of 'empowered to achieve an end.' To say that 'potential infinity' is conceptually acceptable is to say that 'actual infinity' is also permissible.

Let us accept the Zermelo-Fraenkel successor set axiom as underpinning proof by induction and as underpinning open-ended successor functions, such as the function f(n) = n+1, which describes the natural numbers. This axiom is often known as the infinity axiom, but we are not, without further thought, entitled to take that leap. The successor axiom says that a recursion function needn't have a specific stop order. We may visualize a computer that spits out a stream of discrete outputs nonstop. (An issue here is that in thinking of a nonstop successor algorithm, we assume units of time being 1:1 with N, the set of natural numbers. Alas, our mental picture is inadequate to overcome the interdependence of primitive concepts.)
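That mental picture can be sketched directly (my illustration, not anything in ZF): a successor routine with no stop order, from which we may nonetheless draw only finitely many outputs at a time:

import itertools

def successors(n=0):
    while True:        # no specific stop order, as the axiom permits
        yield n
        n += 1         # the successor operation

print(list(itertools.islice(successors(), 6)))  # [0, 1, 2, 3, 4, 5]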

So at this point the successor axiom permits us to build ever-larger finite entities but does not permit us to assume some 'limiting value' associated with a particular successor function. Yet such limits are highly desirable.

Let us consider irrational reals. In the case of square roots, a geometric form -- the hypotenuse of a right triangle -- can be measured by a ruler (we neglect the issue of accuracy of the ruler, an issue that also applies to rationals) in less than a minute. Most would agree that the distance expressed by a square root exists and can be plotted on a number line, justifying the naming of that distance by a symbol such as x^0.5.

However, other irrationals, such as 2^0.2, can be only ever-better 'approximated' as a rational by some successor function, such as Newton's method or an 'infinite' series. Because the nested interval in which such an irrational is found grows smaller and smaller, we might through careless thinking suppose that we can justify some limiting distance from origin because we believe we are getting closer and closer to an interval of zero length. But our successor function requires eternity to exactly locate that point. So in human terms, such a distance is unmeasurable and might be said by some to be nonexistent. Still, the difference between two nonstop successor functions, unequal in every finite output, may still be held to grow ever smaller, helping to justify existence of such a point.
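To make this concrete, here is Newton's method hunting 2^0.2 as the root of x^5 - 2 (a sketch; any standard series would serve as well). Every output is rational; no finite step is exact:

def newton_step(x):
    # standard Newton update for f(x) = x^5 - 2
    return x - (x**5 - 2.0) / (5.0 * x**4)

x = 1.0
for n in range(6):
    x = newton_step(x)
    print(n + 1, x)
# approaches 1.1486983549..., but only ever-better approximations appear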

Now, should we regard this distance/number as a fait accompli or should we regard it as impossible to achieve?

Consider the circle. Is the circle an ideal thought form that axiomatically pre-exists geometry, or is it an artifact of human ingenuity which in fact doesn't exist because a 'true circle' requires a nonstop algorithm -- perhaps the positing of a set of n-gons of ever more facets? (Then of course the straight line, the point and the plane must be accepted a priori.)

Daniel J. Velleman (Philosophical Review, 1993) proposed that constructivism be 'liberalized' to accept countable infinities (but not uncountable ones) on the grounds that 'performing infinitely many computations is not logically impossible, but only medically impossible.'

Yet, an intuitionist or constructionist might disagree that such a performance is logically possible, but rather argue that the term 'countable infinity' is simply a phrase for describing extension by induction. That is,

If X is infinite and countable, then x ∈ X <--> x+1 ∈ X.

Still, we are implicitly assuming that time is already divided into a countably infinite number of unit 1 intervals. That is, we face a circularity problem.

Nevertheless, the inductive model of X does not require that X ever be complete. That is, we can write

∃n ∈ N ∀m ∈ N ∀t ∈ T ( f(t_m) --> (x ∈ X <--> x+1 ∈ X))

We would say that T is an ideal in P-space and not a result of a performable algorithm.

It seems quite evident that the pure constructivist program falters on the issue of time.

The issue is interesting because the successor axiom brings us to the issue of paradoxes (or antinomies), in particular those of Russell and Cantor. Though such paradoxes may be ruled axiomatically out of order, such an approach leaves a mild sense of disquiet, though I fear we must, if pressed, always resort to axioms.

At any rate, the value of a fundamental contradiction is that it demonstrates that a system of rules of thought based on form alone is insufficient to express the 'stuff' of being. And, of course, such a contradiction may pose serious questions as to the usefulness of a theory; I have in mind Cantor's paradox to the effect that the cardinality of U, the set of all sets, is unstable.

Following the Brouwerian path, Wittgenstein, who disliked such self-referencing anomalies, tried to dispose of them through a philosophical appeal to constructionist ideas. Cantor's champion, Hilbert, tried to limit the use of 'ideals,' but was nevertheless pushed to defend the notion of infinite totalities, at least implicitly. Without infinite totalities, or actual infinities, Cantor's paradise would fall.

The dispute between, essentially, platonists and constructionists is not resolvable without further elucidation, and is unlikely ever to be fully resolvable.

I suggest introduction of two axiomatic, or, primitive, concepts: a realm of thought assigned a timelike property, which, for short, we might dub T-space, for time-controlled, or Turing, space; a realm of thought with the property of timelessness, which we might dub P-space, for Platonic space. These spaces, or realms, are not topologically definable.

Now we are in a position to say that rules of mathematical thought that exist in T-space do not exist in P-space. There are no set-theoretic rules or relations in P-space, because there are no timelike operations there.

We are permitted to collect all P-space objects into a set, but P-space itself is axiomatically not a mathematical set. The set of P-space objects is however an ideal and a resident of P-space.

Now, a P-space ideal may be exported to T-space and used in operations. Even so, if a P-space ideal is related to a T-space recursion function, the recursive's successor rule may not be applied to the ideal (no self-referencing permitted).

For example, a limiting value -- no finite nth step of an algorithm can go above it, or below it -- is permitted to exist in P-space. Likewise, the 'construction' of a circle occurs in T-space. The limiting form, a pure circle, is assigned to P-space.

At this juncture, it is necessary to point out that some constructions are purely logical, while others require repetitive computation. A recursion function, such as an algorithm to obtain pi, cannot yield an output value without the previous output value as an input value. That is, computation follows F_n ∘ F_(n-1) ∘ F_(n-2) ∘ ... ∘ F_0, where F_0 is the initial step of a composite function. Here construction occurs by 'building' one brick at a time. In the case of, for example, [lim n--> inf.] n/(n+1), a recursive computation does not occur. However, an inductive logical operation does occur. That is, we mean that n/(n+1) < (n+1)/(n+2) < 1 for any finite n. Does such a logical relation imply 'construction'? We may say that it can be thought of as a secondary form of construction, since values of n are constructed by f(n) = n + 1.
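A small sketch of the contrast (my example): a pi series in which output n requires output n-1, beside n/(n+1), which needs no previous output:

def pi_partial(n):
    # Leibniz series: each partial sum is built from the previous one
    total = 0.0
    for k in range(n):
        total += 4.0 * (-1) ** k / (2 * k + 1)
    return total

def ratio(n):
    return n / (n + 1)     # evaluated directly; no recursion required

print(pi_partial(100000))  # crawls toward pi one brick at a time
print(ratio(10**9))        # produced in one stroke for any n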

Still, whether we have direct recursive construction or only indirect recursion, we place such mathematical operations in T-space. Because T-space is considered to be timelike, we avoid the issue of which comes first, the algorithm or the ideal.

Relations between T-space and P-space
An actual infinity can be defined as nondenumerable if it cannot be produced by a nonstop n-step algorithm. The algorithm's logical form is inductive: If property p holds for step n, then property p holds for step n+1. In the case of a denumerable infinity, it is always possible to relate this platonic-space ideal to a Turing-space induction condition. However, simple induction of course is insufficient to justify a nondenumerable infinity. Cantor's diagonal proof of the nondenumerability of the reals derives a contradiction from the supposition that the reals can be enumerated by simple induction. Here we have a situation where the set of irrationals exists in P-space but the rule of inference in T-space is not induction alone.

So at this point we assert that if a relation between a P-space ideal and a T-space procedure cannot be justified to the satisfaction of the mathematical community, then we would say that the ideal is not recognized as a mathematical object, even if it be in some way numerical. For example, an infinite digit string which is a priori random would seem to have no direct relation to a T-space procedure, though perhaps an indirect relation might be found. However, if we set up a no-halt order procedure for pseudorandomly assigning at step n a digit to the nth digit space, the P-space ideal of an infinite pseudorandom digit string would be held to exist as a mathematical object in P-space, being justified by an inductive claim: no matter how great n, there is no step at which the entire digit string inclusive of the nth digit can be known in advance.
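Such a no-halt procedure is easy to sketch, with a seeded pseudorandom generator standing in for the assignment rule:

import random

def digit_stream(seed=0):
    # at step n, pseudorandomly assign the nth digit; no halt order is given
    rng = random.Random(seed)
    while True:
        yield rng.randint(0, 9)

stream = digit_stream()
print([next(stream) for _ in range(10)])  # the first ten digits, and so on forever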

In the case of a strict induction model for a geometric ideal, such as a curve, we can partly justify analytic methods here but the issue of the real continuum must also be addressed (see below). That is, we can say that if an n-gon with all facet endpoints equidistant from a centerpoint can be drawn, then a like (n+1)-gon can also be drawn. We relate this induction model to the ideal of a true circle by saying that 0 is the 'limiting value' of n-gon facet length.

Likewise, we can authorize the 'area under a curve' using a numerical 'approximation' induction model.

A significant analytic issue here is raised by the induction model of obtaining arc length as a sum of approximating line segments. As n is increased, the facet length decreases, so that at the limit the facets are points of 0 length. Yet each point on the arc is 1:1 with a point on the axis, and those points also have 0 length. Still, the infinite sum of the points of the arc may be unequal to the infinite sum of the points on the axis interval. Does this mean arc zeroes are unequal to interval zeroes? Anyway, isn't 0 x infinity equal to 0?

We can always write off such a puzzlement as 'counterintuitive' and leave it at that. But I think it might help to say that the ideal associated with the T-space arc formula A is not identical with the ideal of the T-space arc formula B. We cannot in this case 'compare zeroes.' But we can say that ideal A is a quantum-like ideal where the zero is related to a sub-ideal which we call an infinitesimal quantity. And infinitesimal quantities may be unequal.

Perhaps you accept that the epsilon-delta proofs of analysis have killed off the dread infinitesimal. By that you mean that the induction method obtains a numerical limit but that pure geometric forms are not in fact mathematical objects. The number exists but the curve is not 'constructible.' The 'actual infinity' that makes 'all' points of a curve 1:1 with 'all' points of a line does not exist in this scenario.

Yet if ideals are sometimes necessary in mathematics, why arbitrarily rule out a particular ideal, such as an infinitesimal?

I do not wish to assert that fundamental issues are now, voila!, all resolved. I simply say that the concepts of T-space and P-space may make us more comfortable from the standpoint of consistency. Yet, more reflection is needed as to what these concepts mean with respect to Goedel's incompleteness theorems.

Goedel's incompleteness theorems say that for a consistent formal system F based, for example, on Peano arithmetic, there is always a true statement P_F that cannot be proved in F. If we extend F (call it F_1) by adding P_F as a nonlogical axiom, then there is a statement P_(F_1) that is true but not provable in F_1.

We can define a set of systems such that F_(n+1) is the formal system obtained by adding P_(F_n) as a nonlogical axiom to F_n.

So we have a T-space construction routine, or recursion algorithm, for compiling formal systems F_n --> F_(n+1), such that P_(F_(n+1)) is true but not provable in F_(n+1).

If we define a P-space ideal [lim n--> inf.] F_n, we see that Goedel's result does not apply, since constructive activity is not permitted in P-space, in which case the Goedel sentence P_(F_n) is not defined.

On a more fundamental level, we may wish to address the issue of belief that a theorem is true, based on our particular algebra, as against the theorem being a priori true, regardless of what one believes. Consider what Wittgenstein saw as 'Moore's paradox,' which he obtained by coupling the statements 'There is a fire in the room' and 'I believe there is no fire in the room.' If the first statement is a priori true, then, according to some, we would face a fundamental paradox.

You respond perhaps that one does not say 'There is a fire in the room' without either believing or not believing the assertion, whether or not there is an a priori truth to support it. That is, the truth value of a 'fact' is meaningless without a mind to review it. A cognitive act is not precluded by the notion that 'experience tells one' that previous sensory impressions (beliefs) about fire leads one to anticipate (believe) that one's current sensory impression about fire is valid.

So then, does a mathematical ideal require belief (perhaps justified by T-space inference) in order to exist? We come down to the definition of 'exist.' Certainly such an ideal cannot be apprehended without cognition. If, by cognition, we require a sense of time, then we would say that ideals are 'pointed to' from T-space thought patterns but also might exist independently of human minds in P-space, though of course the realms of mentation designated platonic space and turing space presuppose existence of some mind.

Of course, we must beware considering T-space to be a domain and P-space a range. These spaces are a priori mental conditions that cannot be strictly defined as sets or as topological objects.

Coping with paradoxes
Consider Cantor's paradox. The definition of power set permits us to compute the quantity of all elements of a finite power set. In every case, there are 2^n elements. But does an actual infinity, U, the set of all sets, exist? Since U is a set, shouldn't it have a corresponding actually infinite power set? What of the contradiction (with U' meaning power set) expressed:
U ⊂ U' ⊂ U'' ⊂ U''' ...?
Our response is that U, as a P-space ideal, may not have its 'shadow' generation rule applied to it. We can also accept U', as a P-space ideal, which also cannot have its shadow generation rule applied to it. Though we might export U or U' to T-space for some logico-mathematical operation, we cannot do the operation U ⊂ U', which requires application of set-building rules on U, a banned form of self-referencing.

In general, we prohibit a successor rule from being applied non-vacuously to an actually infinite ideal. For example, [lim n--> inf.](n) + 1 = [lim n--> inf.](n) is simply the vacuous application of a successor rule.

Similarly for Russell's paradox: R, the set of 'all' (here assuming 'all' signifies an infinitude) sets that contain themselves as members, and S, which is R's complement, exist as P-space ideals. If exported to T-space a successor rule cannot be applied. So the question of whether R ∈ R is prohibited. However the T-space operation R ∪ S = U is permitted.

An infinite (or open-ended) set 'generated' by a successor rule requires a concept of time, which includes the concept of 'rate of change' (even if the rate is an unobtrusive 1). If we talk about a completed denumerably infinite set, we are saying that 0 is the limiting value of the generation algorithm's rate of change.

Let A be a finite set and P(A) be the power set of A. We now specify

P(A)-->P(P(A)), which we may express P[0](A) --> P[1](A).

So, in general, we have P[n](A).
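For a small finite A the iteration is concrete -- the sizes run 2^2 = 4, then 2^4 = 16, then 2^16 = 65,536 (a sketch of mine; the paradox below concerns the infinite case):

from itertools import combinations

def power_set(s):
    # all subsets of s, each as a frozenset
    items = list(s)
    return {frozenset(c) for r in range(len(items) + 1) for c in combinations(items, r)}

level = frozenset({1, 2})       # A, with |A| = 2
for k in range(3):
    level = frozenset(power_set(level))
    print(k + 1, len(level))    # 4, 16, 65536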

Now to indicate the power set of the set of all power sets, we write

lim n--> inf. P[n](A)

The usual way to dispose of this paradox is to say that though a collection is an extension of the set concept, a collection is not necessarily a set. Hence the collection of all sets would not itself be a set -- a theorem stemming from the ZF axioms.

However, here we address the paradox by saying that, if the denumerably infinite set is construed as completed, then the generation algorithm's rate of change is 0, as in

lim n--> inf. P[n+1](A) = lim n--> inf. P[n](A).

In other words, lim n--> inf. P[n](A) exists in P-space and a 'self-referencing' T-space algorithm is prohibited.

The principle of the excluded middle
The principle of the excluded middle -- which is often read to mean a logico-mathematical statement is either true or false, with no third possibility -- was strongly challenged by Brouwer, who argued that the principle is unreliable for infinities. Our rule of prohibiting 'self-referencing' operations on ideals helps address that concern. The reliability of the principle of the excluded middle is a concern in, for example, the Goldbach conjecture.

Let us define the Goldbach conjecture inductively, reading P(2x) as '2x is expressible as a sum of two primes,' as

i) Q = (P(2x) --> P(2(x+1)))

A disproof requires

ii) ~Q = ~(P(2x) --> P(2(x+1)))

There is also the possibility that neither i) nor ii) is decidable [using '+' for the exclusive 'or']:

iii) ~(Q + ~Q)

Here we see a point where platonists and intuitionists clash. The platonists, rejecting iii) as a way of writing 'Q is undecidable' would claim that merely because we cannot know whether Q or ~Q is true does not mean that it is false that either has a truth value. The intuitionists would argue that Q's alleged truth value is of no mathematical interest.
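For concreteness, P(2x) is mechanically checkable in any finite case; what is at issue is the unbounded claim (a sketch of mine):

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def P(even_n):
    # P(2x): even_n is expressible as a sum of two primes
    return any(is_prime(a) and is_prime(even_n - a) for a in range(2, even_n // 2 + 1))

print(all(P(n) for n in range(4, 2000, 2)))  # True for every case checked so far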

If we accept iii), we must require that De Morgan's law not apply to the exclusive 'or,' even though truth tables for ~P v Q and ~P + Q are identical.

De Morgan's law transforms iii) into ~Q & Q, which is false by contradiction.

However,
P v Q = ((P & Q) + (P + Q))

But if P = ~Q, we would have

~Q v Q = ((~Q & Q) + (~Q + Q))
Yet the contradiction ~Q & Q is disallowed.

A similar philosophical perplexity arises from the question of whether Euler's constant is rational or irrational. The constant γ is considered to be a number on the basis of at least one induction model. To wit:

[lim n--> inf.] (1 + 1/2 + ... + 1/n - Ln n) = γ

where γ is an ideal constant.
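Numerically, the induction model behaves as advertised (a quick check, nothing more):

import math

for n in [10, 1000, 100000]:
    print(n, sum(1.0 / k for k in range(1, n + 1)) - math.log(n))
# tends toward 0.5772156649... (Euler's constant)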

It is quite plausible that gamma's rationality is undecidable, that there is insufficient data to determine rationality. So the statement 'gamma is rational' may not have a knowable truth value. Does it have an a priori truth value? Many mathematicians would assert that if a truth value is unknowable, the issue of a priori truth value is irrelevant.

Still, undecidability is most satisfactory if proved. Our position would be that, in the case of gamma, the irrationality conjecture would be proved undecidable if rationality could never be decided without application of a successor rule on gamma.

In the case of the continuum hypothesis, we see a case where a logico-mathematical statement has no truth value, validating the warning against unrestricted use of the principle of the excluded middle. Goedel and Cohen have collectively shown that Cantorian and ZF set theory contain insufficient information for a yes or no answer to the conjecture, which says that there is no Cantorian cardinal number between cardN (or Aleph_null, if you like) and cardP(N) (another Aleph).

The implicit flaw in the continuum conjecture is the expectation that the conjecture is either true or false. If you draw a playing card face down from a well-shuffled deck and do not turn it over, the proposition 'the card is a face card' is either true or false -- even if you do not examine the obverse before shuffling the card back into the deck. Though the truth value remains forever undecidable, it is presumed to have an a priori truth value. I call such a proposition an undecidable statement of the first kind.

The continuum conjecture is then an undecidable statement of the second kind -- undecidable because the statement has no truth value in some logic system, whether that system be sharply or fuzzily defined.

[Thanks to Paul Kassebaum for drawing my attention to a difficulty with my categorization of undecidables. It seems that I inadvertently omitted the category of undecidable statements of the third kind, which would cover questions that are notionally answerable but which are computationally too difficult. For example, it is computationally impossible to even name most numbers, let alone compute with them. However, Paul had a good point in noting that computational difficulty seems to fit my "first kind" category, in that, from a Platonist perspective, both categories have a priori answers that are inaccessible.

In addition, a "fourth kind" seems in order: the obvious one stemming from Goedel's incompleteness theorems: a sufficiently rich complete system contains at least one undecidable statement.]

There is of course the issue of the provability of the assertion that the playing card is either a face card or not; taking a cue from the Copenhagen interpretation of quantum mechanics, we cannot be sure that the two realities are not combined into a superposed state, with neither reality in existence until an observation is made. Though such an interpretation is normally applied to the nanometer world, the thought experiment about Schrödinger's cat shows that quantum weirdness can be scaled up to the macro-world. We cannot be sure that 'reality' does not work that way. (See 'The resurrection of Schrödinger's cat' at the link above.)

I have been unable to think of a logico-mathematical statement that is an undecidable of the first kind and I conjecture that such a statement cannot be proposed.

Of course, propositions of the second kind are common in mathematics, as in: 'The nth integer is prime.'

Goedel and Cohen have proved the continuum hypothesis to be such a 'meaningless' statement; similarly our scheme makes the paradoxes of Russell and Cantor equivalent to an undecidable of the second kind.

In our model, we would say that cardN and cardP(N) are P-space ideals but that a cardX such that cardN < cardX < cardP(N) is not a mathematical object in P-space because no inference rule exists relating a T-space procedure to a P-space ideal.

In a 1930 paper, Heyting (appearing in 'From Brouwer to Hilbert,' compiled by Paolo Mancosu, Oxford, 1998), says the intuitionists replace the concept 'p is false' with 'p implies a contradiction.' So then, ~p is a 'new proposition expressing the expectation of being able to reduce p to a contradiction' and '|- ~p will mean "it is known how to reduce ~p to a contradiction".' Hence comes the possibility that neither |- p nor |- ~p is decidable.

Heyting notes that |- ~~p means 'one knows how to reduce to a contradiction the supposition that p implies a contradiction,' and that |- ~~p can occur without |- ~p 'being fulfilled,' thus voiding double-negation and the principle of the excluded middle.

Through tables of such inferences, Heyting derives the logico-mathematical inference states of proved, contradictory, unsolvable, unprovable, not unprovable, not contradictory and not decided.

On the continuum
The obvious way to define the reals is to posit 'any' infinite digit string and couple it to every integer. Of course, Cantor's diagonal argument proves by contradiction that the reals cannot be enumerated. Since the rationals can be counted, it is the irrationals that cannot be.

It is curious that if f is a function that yields a unique real, the family of such functions, Uf, is considered denumerable. That is, we might try to list such writable functions by i ∈ I. We could then write an antidiagonal function g(i) = f_i(i) + 1. But logicians dispose of this type of paradox by requiring that f be written in a language L that imposes a finite set of operations on a finite set of symbols. It is found that the function g cannot be written without resort to an extended language L'.

(It is however possible to establish a nondenumerable subset of irrationals that is 1:1 with a subset of writable functions f if we permit the extended language L'. See 'Disjoint nondenumerable sets of irrationals' above.) So then we find that Cantor's diagonal proof reduces to an existence theorem for a subset of reals undefinable in some language L. To put it another way, such a real is 'unreachable.' We cannot order such an r by the inequality p/q < r < s/t (with p,q,s,t ∈ Z) because we cannot ascertain p/q or s/t.
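The antidiagonal move itself is mechanical once a listing is assumed (a toy sketch; the listing here is hypothetical):

def antidiagonal(f):
    # f(i, j) = jth digit of the ith listed real; g differs from real i at digit i
    def g(i):
        return (f(i, i) + 1) % 10
    return g

# toy listing: the ith real has every digit equal to i mod 10
g = antidiagonal(lambda i, j: i % 10)
print([g(i) for i in range(5)])  # [1, 2, 3, 4, 5] -- disagrees with each listed real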

In my view, such unreachable reals are ideals. They exist in relation to the ideal of the totality of reals. But their shadows do not exist in T-space. So they have low relevance to mathematics. In fact, we cannot even say that the subset of such reals contains individual elements, raising a question as to whether such a subset exists. Again, the subset is a P-space ideal, and the T-space method of defining this subset by individual elements does not apply.

So then we have that cardR (whatever aleph that is after aleph-null) becomes a shorthand way of saying that there exists a set of irrationals whose elements are undefinable in L.

The concept of nondenumerability is useful in that it conveys that the set of irrationals is much 'denser' than the set of rationals. This gives rise to the thought of ordering of infinities with transfinite numbers. But 'X is nondenumerable' means X is ~cardN. The continuum conjecture could be expressed cardN < cardM < cardR. But it can be better expressed cardN < cardM < ~cardN. This last expression succinctly illustrates the result of the labors of Goedel and Cohen: that the conjecture is 'independent' of set theoretic logic.

Arithmetic recursives
Consider

∑_(i=0) i

We might call this a paradigm iterative function, where the domain and range intersect except at possibly initial and final (or limit) values. Such a function is considered 'non-chaotic' because the sequence is considered informative. That is, the naming routine of ∑ i is the same as the naming routine for i. The digit-place system is considered informative because we can order numbers y < x < z in accord with this naming routine. That is, we know that 52 > 5 because of the rules of the digit-place system.

As shown below, a recursive function may be construed as non-chaotic if the recursive f is writable as some simple well-known function, such as n or n!

An arithmetic recursive can be written:

f(n) = h(n)f(n-1) + g(n)

We note that h(n) and-or g(n) can also be recursive, for a composite of the type: h(n) = k(n)h(n-1) + l(n)

and that it is also possible to have such expressions as

f(n) = h(n)f(n-1) + f(n-2)

where f(n-2) kicks in after a kth step of f(n).
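These recursives are easy to experiment with; the sketch below (mine) simply iterates the defining relation and reproduces the first example in the list that follows, where h(n) = n and g(n) = 0 give the factorials:

def arith_recursive(h, g, f0, n):
    # f(n) = h(n) * f(n-1) + g(n), starting from f(0) = f0
    f = f0
    for k in range(1, n + 1):
        f = h(k) * f + g(k)
    return f

print([arith_recursive(lambda n: n, lambda n: 0, 1, k) for k in range(6)])
# [1, 1, 2, 6, 24, 120] -- the factorials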

A few recursive sequences:
h(n) = n, g(n) = 0
f(0) = 1, f(n) = n!
In general, if g(n) = 0, then f(n) = ∏h(n).
h(n) = (n+1)/n, g(n)=n
f(0) = 1, f(n) = 4, 9, 16... = n^2; f(0) = 0, f(n) = 3/2, 3, 19/4...

h(n) = 1/n, g(n) = 1
f(0) = 1, f(n) = 1, 3/2, 11/8...

h(n) = 1/n, g(n) = n
f(0) = 1, f(n) = 2, 3, 4... = n + 1

h(n) = 1/n, g(n) =1/ n
f(0) = 1, f(n) = 2, 5/2, 7/6, 13/24...

h(n) = 1, g(n) = 1/n
f(0) = 0, f(n) = 1, 3/2, 11/6 ...

f(0) = 1, f(n) = 1, 2, 5/2...

h(n) = n, g(n) = n
f(0) = 1, f(n) = 2, 6, 21...

h(n) = -n, g(n) = -n
f(0) = 0, f(n) = 0, -3, 8, -45 ...

h(n) = -n, g(n) = n

f(0) = 0, f(n) = 1, 0, 3, -8, 45...

h(n) = -n, g(n) = -1
f(0) = 0, f(n) = -1, 1, -4, 15, -76...

if g(n) = 1, we get f(n) = 1, -1, 4, -15, 76...

Basic manipulations of such recursives:
Let f_k express h(n)f(n-1) + g(n) where f(0) = k and k is any real initial value.

If g(n) = 0, then f_k/f_j = (k·∏h(n))/(j·∏h(n)) = k/j.

f_k - f_j = k·∏h(n) - j·∏h(n) = (k - j)·∏h(n)

This last expression yields a function

m_(k-j) = h(n)f(n-1) + 0·g(n)

Denoting the discrete first derivative of f as f', we have, if h = 1:

f'(n) = f(n) - f(n-1) = g(n).

If h does not equal 1 at all values, we can write f' as an inequality, as in

h'(n) ≤ f'(n) ≤ g'(n), or g'(n) ≤ f'(n) ≤ h'(n)

Example:

f(n) = (n^2 + 1)f(n-1) + n^3

g'(n) is polynomial but k·∏(n^2 + 1) grows exponentially. Hence after some finite value of n, we have, assuming the initial value k is a positive integer:

3n^2 < f'(n) < k[∏(n^2 + 1) - ∏((n-1)^2 + 1)]

Example:

f(n) = (n^2 + 1)f(n-1) + ∏(n^3 + 1)

Because g(n)'s exponential rate of change is higher than f(n)'s, we have, after some finite n:

h'(n) < f'(n) < g'(n)

Such inequalities are found in recursive ratios f_a/f_b, where f_a ≠ f_b.

For example, the irrational z(-2) -- that is, ζ(2) = ∑ n^(-2) -- can be written:

f_a(n)/f_b(n) = [n^2·f_a(n-1) + ((n-1)!)^2] / (n!)^2

At step n+1, the ratio has a numerator that contains an integer greater than the numerator integer for step n; likewise for the denominator.

We see that ever-greater integers are required. Assuming the limit of f_a/f_b is constant, when this ratio at step n is converted into a decimal string, the string lengthens either periodically or aperiodically.

So we might say that [lim n--> inf.] f_a is an 'infinite integer' -- or a pseudo-integer. An irrational can be then described as a ratio whereby two pseudo-integers are relatively prime. Or, we might say that there exists a proof that if the ratio is relatively prime at step n, it must be relatively prime at step n+1.

Obviously [?], the set of pseudo-integers is 1:1 with the reals, a pseudo-integer being an infinite digit string sans decimal point.

If we require that a real be defined by a writable function based on the induction requirement, then the set of reals is countable. But if we divorce the function from the general induction requirement, then the set of reals is nondenumerable, as discussed here.
Do your best to present yourself to God as one approved, a workman who has no need to be ashamed, rightly handling the word of truth.

--2 Tim 2:15

Saturday, February 27, 2021

Chapter 13
In search of a blind watchmaker

A discussion of
The Blind Watchmaker:
Why the Evidence of Evolution Reveals a Universe without Design
by the evolutionary biologist Richard Dawkins.
First posted October 2010. Reposted in July 2017,
with several paragraphs deleted and other, minor, changes.

Surely it is quite unfair to review a popular science book published years ago. Writers are wont to have their views evolve over time [1]. Yet in the case of Richard Dawkins's The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (W.W. Norton 1986), a discussion of the mathematical concepts seems warranted, because books by this eminent biologist have been so influential and the "blind watchmaker" paradigm is accepted by a great many people, including a number of scientists.

Dawkins's continuing importance can be gauged by the fact that his most recent book, The God Delusion (Houghton Mifflin 2006), was a best seller. In fact, Watchmaker, also a best seller, was re-issued in 2006.

I do not wish to disparage anyone's religious or irreligious beliefs, but I do think it important to point out that non-mathematical readers should beware the idea that Dawkins has made a strong case that the "evidence of evolution reveals a universe without design."

There is little doubt that some of Dawkins's conjectures and ideas in Watchmaker are quite reasonable. However, many readers are likely to think that he has made a mathematical case that justifies the theory(ies) of evolution, in particular the "modern synthesis" that combines the concepts of passive natural selection and genetic mutation.

Dawkins wrote his apologia back in the eighties when computers were becoming more powerful and accessible, and when PCs were beginning to capture the public fancy. So it is understandable that, in this period of burgeoning interest in computer-driven chaos, fractals and cellular automata, he might have been quite enthusiastic about his algorithmic discoveries.

However, interesting computer programs may not be quite as enlightening as at first they seem.

Cumulative selection
Let us take Dawkins's argument about "cumulative selection," in which he uses computer programs as analogs of evolution. In the case of the phrase, "METHINKS IT IS LIKE A WEASEL," the probability -- using 26 capital letters and a space -- of coming up with such a sequence randomly is 27^-28 (the astonishingly remote 8.3 x 10^-41). However, that is also the probability for any random string of that length, he notes; and we might add that, for most probability distributions, any distinct probability approaches 0 when n is large.

Such a string would be fantastically unlikely to occur in "single step evolution," he writes. Instead, Dawkins employs cumulative selection, which begins with a random 28-character string and then "breeds from" this phrase. "It duplicates it repeatedly, but with a certain chance of random error -- 'mutation' -- in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL."

Three experiments evolved the precise sentence in 43, 64 and 41 steps, he wrote.
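Dawkins does not print his code, but the scheme is easy to reconstruct in outline; in the sketch below the population size (100) and per-character mutation rate (0.05) are my guesses, not his figures:

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def resemblance(s):
    # count of characters that already match the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # duplicate with a certain chance of random error in the copying
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

phrase = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while phrase != TARGET:
    generation += 1
    # breed 100 progeny; keep the one that most resembles the target
    phrase = max((mutate(phrase) for _ in range(100)), key=resemblance)
print(generation, phrase)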

Dawkins's basic point is that an extraordinarily unlikely string is not so unlikely via "cumulative selection."

Once he has the readers' attention, he concedes that his views of how natural selection works preclude use of a long-range target. Such a target would fulfill the dread "final cause" of Aristotle, which implies purpose. But then Dawkins has his nifty "biomorph" computer visualizations (to be discussed below).

Yet it should be obvious that Dawkins's "methinks" argument applies specifically to evolution once the mechanisms of evolution are at hand. So the fact that he has been able to design a program which behaves like a neural network really doesn't say much about anything. He has achieved a proof of principle that was not all that interesting, although I suppose it would answer a strict creationist, which was perhaps his basic aim.

But which types of string are closer to the mean? Which ones occur most often? If we were to subdivide chemical constructs into various sets, the most complex ones -- which as far as we know are lifeforms -- would be farthest from the mean. (Dawkins, in his desire to appeal to the lay reader, avoids statistics theory other than by supplying an occasional quote from R.A. Fisher.)[2]

Dawkins then goes on to talk about his "biomorph" program, in which his algorithm recursively alters the pixel set, aided by his occasional selecting out of unwanted forms. He found that some algorithms eventually evolved insect-like forms, and thought this a better analogy to evolution, there having been no long-term goal. However, the fact that "visually interesting" forms show up with certain algorithms again says little. In fact, the remoteness of the probability of insect-like forms evolving was disclosed when he spent much labor trying to repeat the experiment because he had lost the exact initial conditions and parameters for his algorithm. (And, as a matter of fact, he had become an intelligent designer with a goal of finding a particular set of results.)

Again, what Dawkins has really done is use a computer to give his claims some razzle dazzle. But on inspection, the math is not terribly significant.

It is evident, however, that he hoped to counter Fred Hoyle's point that the probability of life organizing itself spontaneously was equivalent to that of a tornado blowing through a junkyard and assembling a fully functioning 747 jetliner from the scraps. Hoyle made this point not only with respect to the origin of life, but also with respect to evolution by natural selection.

So before discussing the origin issue, let us turn to the modern synthesis.

The modern synthesis
I have not read the work of R.A. Fisher and others who established the modern synthesis merging natural selection with genetic mutation, and so my comments should be read in this light. [Since this was written I have examined the work of Fisher and of a number of statisticians and biologists, and I have read carefully a modern genetics text.]

Dawkins argues that, although most mutations are either neutral or harmful, there are enough progeny per generation to ensure that an adaptive mutation proliferates. And it is certainly true that, if we look at artificial selection -- as with dog breeding -- a desirable trait can proliferate in very short time periods, and there is no particular reason to doubt that, if a population of dogs remained isolated on some island for tens of thousands of years, it would diverge into a new species, distinct from the many wolf sub-species.

But Dawkins is of the opinion that neutral mutations that persist because they do no harm are likely to be responsible for increased complexity. After all, relatively simple lifeforms are enormously successful at persisting.

And, as Stephen Wolfram points out (A New Kind of Science, Wolfram Media 2002), any realistic population size at a particular generation is extremely unlikely to produce a useful mutation because the ratio of useful mutations to possible ones is some very low number. So Wolfram also believes neutral mutations drive complexity.

We have here two issues:
1. If complexity is indeed a result of neutral mutations alone, increases in complexity aren't driven by selection and don't tend to proliferate.
2. Why is any species at all extant? It is generally assumed that natural selection winnows the field down to a lucky few, but does this idea suffice if the filtering is passive?

Though Dawkins is correct when he says that a particular mutation may be rather probable by being conditioned by the state of the organism (previous mutation), we must consider the entire chain of mutations represented by a species.

If we consider each species as representing a chain of mutations from the primeval organism, then we have a chain of conditional probability. A few probabilities may be high, but most are extremely low. Conditional probabilities can be graphed as trees of branching probabilities, so that a chain of mutation would be represented by one of these paths. We simply multiply each branch probability to get the total probability per path.

As a simple example, a 100-step conditional probability path with 10 probabilities of 0.9, 60 of 0.7 and 30 of 0.5 yields an overall probability of 0.9^10 x 0.7^60 x 0.5^30, or about 1.65 x 10^-19.
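That figure is easy to check; in Python (the language used for the other sketches in this chapter):

p = 0.9**10 * 0.7**60 * 0.5**30
print(p)   # about 1.65e-19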

In other words, the more mutations and ancestral species attributed to an extant species, the less likely it is to exist via passive natural selection. The actual numbers are so remote as to make natural selection by passive filtering virtually impossible, though perhaps we might conjecture some nonlinear effect going on among species that tends to overcome this problem.

Dawkins's algorithm demonstrating cumulative evolution fails to account for this difficulty. Though he realizes a better computer program would have modeled lifeform competition and adaptation to environmental factors, Dawkins says such a feat was beyond his capacities. However, had he programmed in low probabilities for "positive mutations," cumulative evolution would have been very hard to demonstrate.

Our second problem is what led Hoyle to revive the panspermia conjecture, in which life and proto-lifeforms are thought to travel through space and spark earth's biosphere. His thinking was that spaceborne lifeforms rain down through the atmosphere and give new jolts to the degrading information structures of earth life. (The panspermia notion has received much serious attention in recent years, though Hoyle's conjectures remain outside the mainstream.)

From what I can gather, one of Dawkins's aims was to counter Hoyle's sharp criticisms. But Dawkins's vigorous defense of passive natural selection does not seem to square with the probabilities, a point made decades previously by J.B.S. Haldane.

Without entering into the intelligent design argument, we can suggest that the implausible probabilities might be addressed by a neo-Lamarckian mechanism of negative feedback adaptations. Perhaps a stress signal on a particular organ is received by a parent and the signal transmitted to the next generation. But the offspring's genes are only acted upon if the other parent transmits the signal. In other words, the offspring embryo would not strengthen an organ unless a particular stress signal reached a threshold.

If that be so, passive natural selection would still play a role, particularly with respect to body parts that lose their role as essential for survival.

Dawkins said Lamarckianism had been roundly disproved, but since the time he wrote the book, molecular biology has shown the possibility of reversal of genetic information (retroviruses and reverse transcription). However, my real point here is not about Lamarckianism but about Dawkins's misleading mathematics and reasoning.

Joshua Mitteldorf, an evolutionary biologist with a physics background and a Dawkins critic, points out that an idea proposed more than 30 years ago by David Layzer is just recently beginning to gain ground as a response to probability issues. Roughly I would style Layzer's proposal a form of neo-Lamarckianism [3].

Dawkins concedes that the primeval cell presents a difficult problem, the problem of the arch. If one is building an arch, one cannot build it incrementally stone by stone, because at some point a keystone must be inserted, and this requires that the proto-arch be supported until then. The complete arch cannot evolve incrementally. This of course is the essential point made by the few scientists who support intelligent design.

Dawkins essentially has no answer. He says that a previous lifeform, possibly silicon-based, could have acted as "scaffolding" for current lifeforms, the scaffolding having since vanished. Clearly, this simply pushes the problem back. Is he saying that the problem of the arch wouldn't apply to the previous incarnation of "life" (or something lifelike)?

Some might argue that there is a possible answer in the concept of phase shift, in which, at a threshold energy, a disorderly system suddenly becomes more orderly. However, this idea is left unaddressed in Watchmaker. I would suggest that we would need a sequence of phase shifts that would have a very low overall probability, though I hasten to add that I have insufficient data for a well-informed assessment.

Cosmic probabilities
Is the probability of life in the cosmos very high, as some think? Dawkins argues that it can't be all that high, at least for intelligent life, otherwise we would have picked up signals. I'm not sure this is valid reasoning, but I do accept his notion that if there are a billion life-prone planets in the cosmos and the probability of life emerging is a billion to one, then the odds favor its having originated somewhere in the cosmos (though, as the sketch below shows, "favor" means something nearer two chances in three than virtual certainty).
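A quick check of that arithmetic, in Python, under the simplifying assumption that each planet is an independent trial:

n = 10**9           # life-prone planets (assumed for illustration)
p = 1.0 / 10**9     # billion-to-one chance of life per planet (assumed)

# Chance that life arises on at least one planet:
print(1 - (1 - p)**n)   # about 0.632, i.e. 1 - 1/e

With n trials each of probability 1/n, the chance of at least one success tends to 1 - 1/e, roughly 63 percent: likely, but short of certain.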

Dawkins seems not to have accounted for the fact that much of the cosmos is forever beyond the range of any possible detection, or for the fact that time gets to be a tricky issue on cosmic scales. But let us, for the sake of argument, grant that the population of planets extends to any time and anywhere, meaning it is possible life came and went elsewhere, or hasn't arisen yet but will, elsewhere.

Such a situation might answer the point made by Peter Ward and Donald Brownlee in Rare Earth: Why Complex Life Is Uncommon in the Universe (Springer 2000) that the geophysics undergirding the biosphere represents a highly complex system (and the authors make efforts to quantify the level of complexity), meaning that the probability of another such system is extremely remote. (Though the book was written before numerous discoveries concerning extrasolar planets, thus far their essential point has not been disproved. And non-carbon-based life is not terribly likely, in that carbon's valences permit high levels of complexity in its compounds.)

Now some may respond that it seems terrifically implausible that our planet just happens to be the one where the, say, one-in-a-billion event occurred. However, the fact that we are here to ask the question is perhaps sufficient answer to that worry. If it had to happen somewhere, here is as good a place as any. A more serious concern is the probability that intelligent life arises in the cosmos.

The formation of multicellular organisms is perhaps the essential "phase shift" required, in that central processors are needed to organize their activities. But what is the probability of this level of complexity? Obviously, in our case, the probability is one, but, otherwise, the numbers are unavailable, mostly because of the lack of a mathematically precise definition of "level of complexity" as applied to lifeforms.

Nevertheless, probabilities tend to point in the direction of the cosmically absurd: there aren't anywhere near enough atoms -- let alone planets -- to make such probabilities workable. Supposing complexity to result from neutral mutations, the probability of multicellular life would be far, far lower than for unicellular forms whose speciation is driven by natural selection. Also, what is the survival advantage of self-awareness, which most would consider an essential component of human-like intelligence?

Hoyle's most recent idea was that probabilities were increased by proto-life in comets that eventually reached earth. But, despite enormous efforts to resolve the arch problem (or the "jumbo jet problem"), in my estimate he did not do so.

Interestingly, Dawkins argues that people are attracted to the idea of intelligent design because modern engineers continually improve machinery designs, giving a seemingly striking analogy to evolution. Something that he doesn't seem to really appreciate is that every lifeform may be characterized as a negative-feedback controlled machine, which converts energy into work and obeys the second law of thermodynamics. That's quite an arch!

The problem of sentience
Watchmaker does not examine the issue of emergence of human intelligence, other than as a matter of level of complexity.

Hoyle noted in The Intelligent Universe (Holt, Rinehart and Winston 1984) that over a century ago, Alfred Russel Wallace was perplexed by the observation that "the outstanding talents of man... simply cannot be explained in terms of natural selection."

Hoyle quotes the Japanese biologist S. Ohno:
Did the genome (genetic material) of our cave-dwelling predecessors contain a set or sets of genes which enable modern man to compose music of infinite complexity and write novels with profound meaning? One is compelled to give an affirmative answer... It looks as though the early Homo was already provided with the intellectual potential which was in great excess of what was needed to cope with the environment of his time.

Hoyle proposes in Intelligent that viruses are responsible for evolution, accounting for mounting complexity over time. However, this seems hard to square with the point just made that such complexity doesn't seem to occur as a result of passive natural winnowing and so there would be no selective "force" favoring its proliferation.

At any rate, I suppose that we may assume that Dawkins in Watchmaker saw the complexity inherent in human intelligence as most likely to be a consequence of neutral mutations.

An issue not addressed by Dawkins (or Hoyle for that matter) is the question of self-awareness. Usually the mechanists see self-awareness as an epiphenomenon of a highly complex program (a notion Roger Penrose struggled to come to terms with in The Emperor's New Mind (Oxford 1989) and Shadows of the Mind (Oxford 1994)).

But let us think of robots. Isn't it possible in principle to design robots that multiply by replication and maintain homeostasis until they replicate? Isn't it possible in principle to build in programs meant to increase the probability of successful replication as environmental factors shift?

In fact, isn't it possible in principle to design a robot that emulates human behaviors quite well? (Certain babysitter robots are even now posing ethics concerns as to an infant's bonding with them.)

I don't suggest that some biologists haven't proposed interesting ideas for answering such questions. My point is that Watchmaker omits much, making the computer razzle dazzle that much more irrelevant.

Conclusion
In his autobiographical What Mad Pursuit (Basic Books 1988), written when he was about 70, Nobelist Francis Crick expresses enthusiasm for Dawkins's argument against intelligent design, citing with admiration the "methinks" program. Crick, who trained as a physicist and was also a panspermia advocate, doesn't seem to have noticed the difference in issues here. If we are talking about an analog of the origin of life (one-step arrival at the "methinks" sentence), then we must go with a distinct probability of 8.3 x 10^-41. If we are talking about an analog of some evolutionary algorithm, then we can be convinced that complex results can occur with application of simple iterative rules (though, again, the probabilities don't favor passive natural selection).

One can only suppose that Crick, so anxious to uphold his lifelong vision of atheism, leaped on Dawkins's argument without sufficient criticality. On the other hand, one must accept that there is a possibility his analytic powers had waned.

At any rate, it seems fair to say that the theory of evolution is far from being a clear-cut theory, in the manner of Einstein's theory of relativity. There are a number of difficulties and a great deal of disagreement as to how the evolutionary process works. This doesn't mean there is no such process, but it does mean one should listen to mechanists like Dawkins with care.


1. In a 1996 introduction to Watchmaker, Dawkins wrote that "I can find no major thesis in these chapters that I would withdraw, nothing to justify the catharsis of a good recant."

2. In previous drafts, I permitted myself to get bogged down in irrelevant, and obscure, probability discussions. Plainly, I like a challenge; yet it's all too true that a writer who is his own sole editor has a fool for a client.

3. David Layzer, "Genetic Variation and Progressive Evolution," The American Naturalist, Vol. 115, No. 6 (June 1980), pp. 809-826. Published by The University of Chicago Press for The American Society of Naturalists.

Chapter 16
Brahman as Unknown God


What follows is a discussion of some material found in A Short History of Philosophy by Robert C. Solomon and Kathleen M. Higgins (Oxford 1996).
Though their book is somewhat uneven, the interludes of analytical brilliance make a read-through worthwhile. I was intrigued by what seems to be a succinct summary of the essence of much of Indian philosophy, which prompted some thought.
This chapter also appears in another of Conant's e-books, Dharma Crumbs.

As paralleled in Heraclitus and some of the other pre-Socratic philosophers, in Vedanta, Brahman is the ground, the value, and the essence of everything. This ultimate unity is therefore a coincidence of opposites – hot and cold, dry and wet, consciousness and world – which is incomprehensible to us. Brahman is "beyond all names and forms," and, like Yahweh, Brahman is a name for the unnameable, a reference to what cannot be understood or analyzed. Brahman is always "not this, not that."

But, we are assured, Brahman can be experienced, in meditation and mysticism, with Brahman being ultimately identical to one's true self or atman. It is thus the awareness of Brahman, most importantly, that is every person's supreme personal good. One of the obstacles to this good, especially among the learned, is the illusion of understanding.

The apostle Paul would very likely have identified Brahman as the "unknown God" – the utterly mysterious mind behind and within all existence.

Yet, he would say, we cannot connect to this mind without the intermediary Jesus, the Savior. The Unknown God decided to reveal his great love of humanity by this means. That mind is far beyond our rational capacities, whether the mind is called Brahman or Yahweh.
Yahweh (=Jehovah),
which means,
He is,
hence suggesting,
I am.
Thence
Jesus (=Joshua=Yeshua),
which means,
I am salvation
or
I save.

Jesus is the human face of the Unknown God, or Brahman. As the Son, Jesus is the projection of God into the world of humans.

Ultimately, Brahman is in fact one's true self, we are told. This idea runs parallel to Jesus, quoting scripture, saying "you are gods" (hence strongly implying "you are God") and to saying that he would bring to his right mind, or wake up, the person who turns to him. Those who turn to him are, says Paul, junior partners in Christ, welded into a spiritual oneness. All share the Holy Spirit, an inexhaustible fount of wisdom and cheer. In other words, they share in God's mind. So if believers have God's Spirit, they begin to awaken – sometimes very slowly – to their true, higher selves. They are returning to the state of perfect oneness from which their angels – atmans – have fallen.

Also, we are told that in Vedanta recognizing oneself as atman is at the same time recognizing one's true self as Brahman. "An individual person is really just one aspect, one of infinitely many transient manifestations, of the One." Even so, there is plenty of room for interpretation as to whether Brahman is to be considered as the One who created those manifestations, or is identical to them, or is incomprehensibly different from them.

So, I suppose many Buddhists and some Vedantists turn away from the concept of vast, unfathomable mind. Yet are they not reaching toward superior mind or consciousness for their destinies? Why should such greatly enlightened minds be the pinnacle of the cosmos? It seems to me this would mean that something less than the cosmos would still be superior to it, even if, perhaps, only temporarily. (Do you hear an echo of the ontological proof of God's existence in that argument?)

Yet, we must be careful here because of the apparent difference, for Buddhists, between mind and consciousness. In Buddhist parlance, the idea is to empty one's mind or self, attaining the state of the anatman, which essentially means no-mind. That is to say, the Buddhists equate the human mind with the self, which needs to go away in order for the person to reach a state of bliss. From my perspective, both the Vedantist and Buddhist ideas are summed up by the New Testament injunction that one must die to self, to lose one's carnal mind (stop being a meat-head).

It should be noted that the authors say that Buddhists, in general, view Brahman and atman as illusions. Yet, if there is no ground of being, what is it that Buddhists are attempting to reach? How can any kind of eventuality exist without a ground of being?

Now, the Buddhist aim of enlightenment, either in this life and this body, or in a future life and body, yields this puzzle:

What is it that will suffer or experience bliss in the future? If the basic Buddhist theory holds that the objects of all desires are transitory, that the mind and soul are both temporarily existent illusions, that nothing lasts forever, then why desire a state of non-mind bliss, that supposedly implies an end to suffering? "You" won't be "there" to enjoy nothing anyway. Similarly, why worry about karma (you reap what you sow) in a subsequent life if it isn't really you proceeding to that next life?

So then, a Buddhist would desire to share in the bliss of Nirvana. He or she does yearn for some continuity of existence between his or her present state and the future. Of course, Buddhists will attribute such a contradiction to the inadequacy, when it comes to sublime mysteries, of human logic and language.

(We acknowledge that the Northern – Mahayana – school favors that devotees strive to become bodhisattvas, or enlightened beings, who delay attainment of Nirvana in order to help others become free of the bondage of suffering, whereas the Southern – Theravada – school favors Nirvana first, followed by the helping of others. In either case, our puzzle remains.)
In response, James Conant, a Buddhist, quotes Chögyam Trungpa:
The bad news is you’re falling through the air, nothing to hang on to, no parachute. The good news is, there’s no ground.

We can draw a parallel here based on these scriptures:

Psalm 46:10
Be still, and know that I am God: I will be exalted among the heathen, I will be exalted in the earth.
Being still here, I suggest, implies a deep, meditative awareness, letting our transitory thoughts and desires subside so as to permit the "ground of being" to be heard.

1 Kings 19:9-12
9 And [Elijah] came thither unto a cave, and lodged there; and, behold, the word of the Lord came to him, and he said unto him, What doest thou here, Elijah?
10 And he said, I have been very jealous for the Lord God of hosts: for the children of Israel have forsaken thy covenant, thrown down thine altars, and slain thy prophets with the sword; and I, even I only, am left; and they seek my life, to take it away.
11 And he said, Go forth, and stand upon the mount before the Lord. And, behold, the Lord passed by, and a great and strong wind rent the mountains, and brake in pieces the rocks before the Lord; but the Lord was not in the wind: and after the wind an earthquake; but the Lord was not in the earthquake:
12 And after the earthquake a fire; but the Lord was not in the fire: and after the fire a still small voice.
At the core of existence is God. He is not "in" the phenomena, even though he causes them. (I note that there is a distinction between the "word of the Lord" that asked Elijah why he was hiding in a cave and the "still small voice." I suggest that Elijah was led to commune with God at a deeper level, at the "ground of being" if you like.)

Mark 4:37-41
37 And there arose a great storm of wind, and the waves beat into the ship, so that it was now full.
38 And he was in the hinder part of the ship, asleep on a pillow: and they awake him, and say to him, Master, care you not that we perish?
39 And he arose, and rebuked the wind, and said to the sea, Peace, be still. And the wind ceased, and there was a great calm.
40 And he said to them, Why are you so fearful? how is it that you have no faith?
41 And they feared exceedingly, and said one to another, What manner of man is this, that even the wind and the sea obey him?
The world's phenomena, that we take to be so real, are subject to the human mind when it is in accord with God's mind.

A key difference between the Christian and Eastern outlooks is the assurance that Jesus will assist the believer to die to self (granting the fact that it doesn't always appear that very many believers actually do so).

Matthew 16:25
For whosoever will save his life shall lose it: and whosoever will lose his life for my sake shall find it.
For a list of other supporting scriptures, please see:
https://zion78.blogspot.com/2018/02/we-must-die-to-self.html

The spiritual seekers of ancient India had had some important revelations. Yet, in Christian eyes, they were yearning for the big revelation that did not occur until the resurrection of Jesus.

We observe that Jesus himself pulled in those of low estate, who were acutely conscious of their need and not so inclined to intellectualize themselves out of drinking in the water of life. The "poor in spirit" (meek) are the ones positioned to break through the barrier of self-justifying delusion. Even today, as through the centuries, very strong belief flourishes best among the poor and lowly.

Matthew 11:28-29
28 Come unto me, all ye that labour and are heavy laden, and I will give you rest.
29 Take my yoke upon you, and learn of me; for I am meek and lowly in heart: and ye shall find rest unto your souls.

Trungpa's image brings to mind
  • Free fall in orbit or outer space
  • Life in the amniotic sac
  • The "unbearable lightness of being."

Chapter 12
Proto-integers and (very) naive classes


Below is another bit of philosophy of mathematics. I am well aware that readership will be quite limited.

Deriving the four arithmetic operations
without appeal to standard sets


Aim
We use a Peano-like approach to build "proto-integers" from "proto-sets." These proto-integers are then used to justify numerical quantifiers that can be applied to sets forthwith, alongside the standard ∀ and ∃ (and perhaps ∃!), while accepting the "not" sign, ~ .

As we shall see, however, it is not required that we call our objects "sets" -- although psychologically it is hard to distinguish them from some sort of set or other, as the next paragraph indicates. Though it is a useful distinction elsewhere, no attempt is made to distinguish the word "set" from "class."

Once we axiomatize our system, we find implied a priori proto-sets. These are well justified by Quine's argument concerning how children learn abstraction and communication. That is, a word represents an idea that does not precisely match every imagined instance. So it becomes necessary to say that a word represents an idea associated with other ideas or images. These secondary ideas are known as properties or attributes. So a word represents an abstracted idea, shorn of the potential distinctive properties.

From this basic law of thought, the idea of collection, class or set must follow. So we are entitled to accept such primitive sets as self-evident. Beyond this we may not go without formulating a proper set theory with an associated system of logic. But what we may do is apply the numerical quantifiers to these primitive sets right away. We don't need to establish the foundations of arithmetic in terms of a proper class theory or to define numbers with formal sets, as in the von Neumann derivation of integers.

Our method does in fact derive the basic arithmetical operations, but this is a frill that does not affect the basic aim of our approach. We indulge in anachronisms when the method is applied to "advanced" systems for which we have not troubled to lay the groundwork.

So once we have these integer quantifiers, we may then go on to establish some formal class theory or other, such as ZFC or NBG. If we like, we can at some point dispense with the proto-integers and accept, say, the von Neumann set theoretic definition of integer.

There is nothing terribly novel in this method. The point is to show that we can accompany basic set theory with exact integer quantifiers right away.

Method

Some Euclidean axioms that we appropriate
1. A line on the plane may be intersected by two other lines A and B such that the distance from A to B is definite (measurable in principle with some yardstick), which we shall call magnitude AB.

2. Magnitude, or distance, AB can be exactly duplicated with another intersecting line C such that AB and CA have no distance between their nearer interior end points.

3. Some line A may be intersected by a line B such that A and B are perpendicular.
From these, we are able to construct and imply two parallel lines A and B, each of which is intersected by lines spaced a "unit" length apart, and we can arrange that the perpendicular distance from A to B is unity. In other words, we have a finite strip of squares all lined up. We have given no injunction against adding more squares along the horizontals.

We define "0" magnitude as the distance covered by the intersection of two lines.

Now, for graphic purposes, we shall imagine an S perfectly inscribed within a square. The "S" is for our convenience; it is the square upon which we rely.

Now as we consider a strip of squares (say beginning on the right with the eye going leftward), we observe that there is a vertical line at the beginning on the right. Beyond that there is no square. We shall symbolize that condition with a "0" .

At this point we interject that the term "adjacent square" means that squares have a common side and that we are pointing to, or designating, a specific square.

The word "consecutive" implies that there is some strip of squares such that if we examine any specific square we find that it is always adjacent to some other(s). To be more precise, we use the concepts of "left" and "right," which, like "top" and "bottom," are not defined. If at the leftmost or rightmost side of a strip of squares (or "top or bottom"), we find there is no adjacent square, we may use that extreme square as a "beginning." We then sheer off only that square.

We then repeat the process. There is now a new "beginning" square that meets the original conditions. This is then shorn off. This algorithm may be performed repeatedly. This process establishes the notions of "consecutive" and "consecutive order." If there is no "halt" order implied, then the process is open-ended. We cannot say that an infinity is really implied, as we have not got to more advanced class theory (which we are not going to do).

We can say that a halt order is implied whenever we have decided to name a strip, which becomes obvious from what follows.
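To make the shearing algorithm concrete, here is a small sketch in Python, representing a strip as a string of S's terminated by a 0 -- my notational choice, anticipating the symbolism adopted below:

def shear(strip):
    # Remove the extreme ("beginning") square from a strip such as "SSS0".
    if strip == "0":
        raise ValueError("no square to shear off")
    return strip[1:]

strip = "SSSS0"
while strip != "0":
    print(strip)          # SSSS0, then SSS0, then SS0, then S0
    strip = shear(strip)

Repeating the shear, as shown, is just the "consecutive order" of the text.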

So then, all this permits us to use the unoriginal symbolism

S0

Under Peano's rule, 0 has no predecessor and every S has a predecessor.

From this we obtain 0 --> no square to the right.

S0 is the successor of 0 and is named "1."

SS0 is called the successor of "1" and is named "2."

From here we may justify "any" constructible integer without resort to mathematical induction, an axiom of infinity or an infinite axiom scheme. We do not take the word "any" as it is used with the "all" quantifier. Rather, what we mean is that if a number is constructible by the open-ended successor algorithm, it can also be used for counting purposes.

Now we derive the arithmetical operations.

Addition

Example

"1 + 2 = 3" is justified thus:

We write S0 + SS0, retaining the plus sign as convenient.

This tells us to eliminate or ignore the interior '0' and slide the left-hand S (or S's, as the case may be) to the right, thus giving the figure

SSS0.

We can, if desired, be fussy and not talk about sliding S's but about requiring that the strip S0 must extend leftward from the leftward vertical side of strip SS0, on the ground that 0 implies no distance between the two strips.
"Two" is not defined here as a number, but as a necessary essential idea that we use to mean a specified object of attention and an other specified object of attention. I grant that the article "an" already implies "oneness" and the word "other" already implies "twoness." Yet these are "proto-ideas" and not necessarily numbers. BUT, since we have actually defined numbers by our successor algorithm, we are now free to apply them to our arithmetic operations.

Subtraction

Example

5 - 2 = 3

Subtraction is handled by first forming a third horizontal parallel line that is also a unit distance apart from the nearest other parallel. Thus we have two rows of squares that can be designated by S place-holders.

We write

SSSSS0
000SS0

where we have designated with 0's those squares on the second strip that are to the left of the bottom successor strip and directly under squares of the top successor strip. Hence, we require that only vertical strips with no 0's be erased and collapsed (again, we could be more finicky in defining "erase and collapse" but won't be bothered).

The result is

SSS[SS]0 = SSS0

Note that we may reverse the procedure to obtain negative numbers.

A negative number is defined for K - J with K < J. The less-than relation is determined if we have two strips, as above, in which one strip contains leftward 0's. The strip with the leftward 0's is "less than" the strip without leftward 0's.

So then,

1 - 2 = -1

results from

0S0
SS0

Similarly we cross out the vertical S's, preserving the top strip, which gives

0S0

Though that last expression is OK, for symmetry, we should drop the right-hand 0. In that case 0S is -1, 0SS is -2, and so on.

We require (this must be an axiom) that

Axiom: 0S + S0 = 00 = 0

In which case, we may reduce matching opposed numbers K + -K to the form

0S
S0

which is 0.

For example, 0SS + SS0 gives
0SSS
SSS0

and by erasing the columns of S's (no 0's), we obtain the 0 identity.
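A sketch of the subtraction rule in the same vein. Rather than drawing the two rows, the code cancels the matched columns of S's by counting them; a leading 0 marks a negative, per the convention above:

def sub(a, b):
    # Cancel the columns that carry an S in both strips (done here by
    # counting); "0S", "0SS", ... name the negatives.
    k, j = a.count("S"), b.count("S")
    if k >= j:
        return "S" * (k - j) + "0"   # e.g. 5 - 2 -> "SSS0"
    return "0" + "S" * (j - k)       # e.g. 1 - 2 -> "0S"

print(sub("SSSSS0", "SS0"))   # SSS0
print(sub("S0", "SS0"))       # 0S, i.e. -1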

Multiplication

Example

3 x 2 = 6

We decide that we will associate the left of the multiplication sign with horizontal rows and the right with vertical columns.

Thus,
SS0
SS0
SS0
000

We match each horizontal "2" with a strip under it, until we have reached the vertical number "3." As a nicety, I have required a bottom row of 0's, to assure that the columnar number is defined. We then slide each row onto the top framework of squares, thus:

SS0SS0SS0

However, interior 0's imply that there is no distance between sub-strips. Hence we erase them and of course get

SSSSSS0

which we have decided to name "6."
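The multiplication rule, sketched likewise: lay down one copy of the right-hand strip per S in the left-hand strip, then erase the interior 0's.

def mul(a, b):
    # One row of b for each S in a; sliding the rows together erases
    # the interior 0's, leaving a single strip.
    rows = [b] * a.count("S")          # e.g. ["SS0", "SS0", "SS0"]
    return "".join(r[:-1] for r in rows) + "0"

print(mul("SSS0", "SS0"))   # SSSSSS0, i.e. 3 x 2 = 6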

Division

Exact division

i) If two strips are identical we say that only one name is to be assigned -- say, "K." That is, they both take the same name. Thus if two strips completely match (no difference in magnitude), then the number K is said to divide K exactly.

ii) Let a shorter strip be placed under another strip.
SSSS0
00SS0

The shorter strip is said to divide exactly the longer if the shorter strip is replicated and placed leftward under the longer strip and, after erasure of interior 0's, the two row strips are identical.
SSSS0
SSSS0


But before we may do that, we must ascertain what number the exact division yields. In other words, we have proved that 2 divides 4 exactly. But we have not shown that 4/2 = 2. This requires another step, which harks back to the multiplication procedure.

Each sub-strip in row 2 must be placed in a new row, using the former row 2 as the present top row, and, for clarity, we add a row of 0's. From the above example:
SS0
SS0
000

We may now read down the left-hand column to obtain the desired quotient.

SSSSSSSSS0

divided by the strip SSS0 yields
SSS0
SSS0
SSS0
0000

By this, we have proved that the number known as 9 is exactly divided by the number called 3 into 3 strips, all with the name 3.
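Exact division, sketched: strip copies of the divisor off the dividend and count them (the count is what reading down the left-hand column accomplishes):

def divide(a, b):
    # Remove one copy of divisor b from dividend a per round; if the
    # removal comes out even, the number of rounds is the quotient.
    k, j = a.count("S"), b.count("S")
    if j == 0 or k % j != 0:
        raise ValueError("division is not exact")
    return "S" * (k // j) + "0"

print(divide("SSSSSSSSS0", "SSS0"))   # SSS0, i.e. 9 / 3 = 3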

Rationals

Rationals are defined by putting one successor number atop another and calling it a ratio.

We do not permit (axiom) division by 0 or, that is, for a ratio's denominator to have no predecessor.
0SS0
SSS0

is 2/3 and likewise,
SSS0
0SS0

is 3/2.

and
0S0
SS0

yields - 1/2

while
0S0
0S0

yields + 1

and
00S0
0SS0

yields + 1/2

We do not enter into the subject of equivalence classes, which at this point would be a highly anachronistic topic.
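A minimal sketch of the ratio representation (the sign convention and the equivalence question are left aside, as above):

def ratio(num, den):
    # A ratio is simply one strip atop another; a denominator with no
    # predecessor (the bare "0" strip) is forbidden by axiom.
    if den.count("S") == 0:
        raise ValueError("denominator has no predecessor")
    return (num, den)

print(ratio("SS0", "SSS0"))   # ('SS0', 'SSS0'), i.e. 2/3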

So then we can say, for example, (2x ∈ X)(x,a), which reads there are at least 2 x in X which have the property a. Of course, that does not mean we are not obliged to build up the sets and propositions in some coherent fashion.

By establishing proto-integers through the use of some routine axioms, we are able to give exact quantifiers for any sets we intend to build. As a bonus, we have established the basic arithmetic operations without resort to formal set theory.

The definition of successor/predecessor relation is easily derived from the discussion above of "consecutive" and "next."

For our purposes, a proto-set, or "set," is a successor number the elements of which are predecessors (so this is similar to the von Neumann method, in which a successor is defined as x U {x}). Our method, I would say, is a tad more primitive.

Our "elements" may be visualized by placing each immediate predecessor on an adjacent horizontal strip, as shown:
S S S 0
S S 0
S 0
0

Or we may have an equivalent graph
S S S 0
0 S S 0
0 0 S 0
0 0 0 0

which is handy because we now have a matrix with its row and column vectors -- though of course we are not anywhere near that level of abstraction in our specific business.

So, if we like, we may denote each strip with the name "set." The bottom strip we call the "0 set" which means that it is the class with no predecessor. That it is equivalent to the empty set of standard class theories is evident because it implies no predecessor, which is to say there can be no "element" beneath it. Also, note that Russell's paradox does not arise in this primitive system, because a strip number's "element" (we are free to avoid that loaded word if we choose) strips are always below it.

Now note that the top number has an S on the extreme left -- rather than a 0 -- such that S3 ⇒ S2 ⇒ S1 ⇒ 0 (where the subscript numbers are only for our immediate convenience and have no intrinsic meaning; we could as well use prime marks or arbitrary names, as in Sdog ⇒ Sstarship).


In any case we may, only if we so desire, name the entire graphic above as a "set" or "proto-set." Similarly, we may so name each sub-graphic that occurs when we erase a top strip. Obviously, this parallels the usual set succession rules.

Though our naming these graphics "sets" is somewhat user-friendly, it is plainly unnecessary. We could as well name them with some random string of characters, say "xxjtdbn." The entire graph has the general name "xxjtdbn." Under it is another xxjtdbn, which differs from the other and so must take another name as a mark of distinction. In fact, every permissible graph must take some distinct name.

So for the entire graph we have "xxjtdbn." For the "next" sub-graph, we have "agbfsaf." For the one below that, we have "dtdmitg." And for the "0" strip we have "zbhikeb."

We are expected to know that each name applies to a specific strip and thus is either a successor or a predecessor or both. So then, we don't really need to employ the abstract concept of "set" (though we are employing abstract Euclidean axioms and a couple of other axioms).

Now if we write, for example,

(2x ∈ X)P

we seem to be saying that the more "advanced" set definitions are in force for "X" and so "2x ∈ X" is not a legitimate quantifier of the assertion (=proposition) P.

It is true that we are not done with our quantifier design. We are saying that "2" is a name given our graphic that is also known as "agbfsaf." We accept that there may be objects of some sort that go by the generic name "x." We must be able to establish a 1-1 correspondence between our agbfsaf graphic and any x's.

That is, we must be able to draw a single line (at least notionally) between every strip of agbfsaf, except the 0 strip, and one and only one of the x's. If that 1-1 correspondence is exact -- no unconnected ends -- then we may write (2!x ∈ X)P, which tells us that there are exactly 2 x's that apply to P "truthfully."
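The correspondence test is easy to render concretely. A sketch, with a hypothetical collection of x's standing in for X:

def at_least(numeral, xs):
    # Can each S of the numeral strip be paired with a distinct x?
    return len(xs) >= numeral.count("S")

def exactly(numeral, xs):
    # A 1-1 correspondence with no unconnected ends on either side.
    return len(xs) == numeral.count("S")

X = ["a", "b"]                # hypothetical objects going by the name x
print(at_least("SS0", X))     # True: (2x ∈ X)P can hold
print(exactly("SS0", X))      # True: (2!x ∈ X)P can hold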

We have implied in our notation that there is a class X of which the x's are members. This isn't quite necessary. We may just say that X is a name for various x's. We might even say that we have a simple pairing system (aka "binary relation," though we must beware this terminology as probably anachronistic) pairing x with X, meaning that x may vary but that X may not.

Now suppose we wish to talk about equality, as in P means "x = x."

We may write (∀ x)(x = x), (∃ x)(x = x) and ~(∀ x)(x = x) in standard form.

We interrupt here to deny that the ∀ quantifier must be taken to require a set. Instead of using the concept "set," we say that there exists some formation of triples y,p,Y, such that y varies but p and Y do not. Y is associated with distinct y's.

Also, we are prissy about the words "all," "any" and "each." If a set or class is thought to be strictly implied by the word "all," then we disavow that word. Rather the ∀ quantifier is to be read as meaning that "any" y may be paired with (p,Y). That is, we say that we may select a y arbitrarily and must find that it has the name Y and the property p under consideration.[1]

Though the word "each" (="every") may connote "consecutive order" in the succession operation described above, it is neater for our purposes to use only the word "any" in association with the all quantifier.

Now if we mean by x one of the graphic "numbers," each graph shows a 1-1 correspondence with a copy of itself and so we can establish the first two statements as true and the last as false. If we mean that x represents arbitrary ZFC or NBG class theoretic numbers, we can still use the correspondence test (though we need not deal with indefinite unbounded algorithms -- loosely dubbed "infinity").

Further, we may use the correspondence test for, say, the graphic named "2." If we find that we can draw single lines connecting strips of "2" with objects known as x's among NBG or ZFC "numbers," then we can say 2x(x = x). That means that graphic "2" is 1-1 with some part of ZFC or NBG. Or, we would normally say, "there are at least 2 x's in NBG or ZFC that are equal to themselves (where 'self' = 'duplicate')," as opposed to 2!x(x = x), which would normally be verbalized as "there are exactly 2 x's in NBG or ZFC that are equal to themselves (or their duplicates)." The last statement can only be true if we specify what x's we are talking about.

I concede that in these last few paragraphs I mix apples and oranges. Why would we need the graphic "numbers" if we already have ZFC or NBG? But, we do have proof of principle -- that much can be done with these successor graphics, or what some might term pseudo-sets.
1. In his book Logic for Mathematicians (Chelsea Publishing 1978; McGraw-Hill 1953), J. Barkley Rosser cautions against the word any: "Sometimes any means each and sometimes it means some. Thus, sometimes 'for any x...' means (x) and sometimes 'for any x...' means (Ex)."
¶ [Note: (x) is an old-fashioned way to denote ∀x and (Ex) is Rosser's way of denoting ∃x.]
¶ After giving an ambiguous example, Rosser says, "If one wishes to be sure that one will be understood, one should never use any in a place where either each or some can be used. For instance, 'I will not violate any law.' The statement 'I will not violate each law' has quite a different meaning, and the statement 'I will not violate some law' might be interpreted to mean that there is a particular law which I am determined not to violate."
¶ "Nonetheless," says Rosser, "many writers use any in places where each or some would be preferable."

Version 5 after 4 very rough drafts
