Friday, February 26, 2021

Chapter 7
The cosmos cannot
be fully modeled
as a Turing machine


¶ This essay, which had been deleted by Angelfire, was recovered via the Wayback Machine.
Please note: The word 'or' is usually used in the following discussion in the inclusive sense.

Note A, added March 3, 2021
The unspoken assumption behind Principia Mathematica, Lambda calculus, Goedelian recursion theory, Boolean circuit logic and Turing machinery is either
¶ Naive linear time and naive linear space, or
¶ Newton's absolute time and absolute space.
Adopting an Einsteinian view of spacetime or a Hawkingian notion of time becoming asymptotic to a line intersecting an unreachable singularity certainly knocks out any Turing-type model.

(We note that Goedel came to view time as a nonexistent illusion in Parmenides's sense. In fact, Goedel was a modern version of Zeno of Elea.)
Note B, added March 4, 2021

On the cosmic scale, the time issue calls into question the concept of thermodynamic entropy.1a At bottom, standard entropy assumes Newtonian mechanics, which requires one or the other of the assumptions about time given above. If we modify the physics behind entropy with quantum mechanics, we enter the quagmire of quantum weirdness, which certainly undermines both naive and Newtonian concepts of time.


Many subscribe to the view that the cosmos is essentially a big machine which can be analyzed and understood in terms of other machines.

A well-known machine is the general Turing machine, which is a logic system that can be modified to obtain any discrete-input computation. Richard Feynman, the brilliant physicist, is said to have been fascinated by the question of whether the cosmos is a computer -- originally saying no but later insisting the opposite. As a quantum physicist, Feynman would have realized that the question was difficult. If the cosmos is a computer, it certainly must be a quantum computer. But what does that certainty mean? Feynman, one assumes, would also have concluded that the cosmos cannot be modeled as a classical computer, or Turing machine.1

Let's entertain the idea that the cosmos can be represented as a Turing machine or Turing computation. This notion is equivalent to the idea that neo-classical science (including relativity theory) can explain the cosmos. That is, we could conceive of every "neo-classical action" in the cosmos to date -- using absolute cosmic time, if such exists -- as being represented by a huge logic circuit, which in turn can be reduced to some instance (computation) of a Turing algorithm. God wouldn't be playing dice. A logic circuit always follows if-then rules, which we interpret as causation. But, as we know, at the quantum level, if-then rules only work (with respect to the observer) within constraints, so we might very well argue that QM rules out the cosmos being a "classical" computer.

On the other hand, some would respond by arguing that quantum fuzziness is so minuscule on a macroscopic (human) scale that the cosmos can be quite well represented as a classical machine. That is, the fuzziness cancels out on average. They might also note that quantum fluctuations in electrons do not have any significant effect on the accuracy of computers -- though this may not be true as computer parts head toward the nanometer scale. (My personal position is that there are numerous examples of the scaling up or amplification of quantum effects. "Schrodinger's cat" is the archetypal example.)

Of course, another issue is that the cosmos should itself have a wave function that is a superposition of all possible states -- until observed by someone (who?). (I will not proceed any further on the measurement problem of quantum physics, despite its many fascinating aspects.)

Before going any further on the subject at hand, we note that a Turing machine is finite (although the set of such machines is denumerably infinite). So if one takes the position that the cosmos -- or specifically, the cosmic initial conditions (or "singularity") -- are effectively infinite, then no Turing algorithm can model the cosmos. So let us consider a mechanical computer-robot, A, whose program is a general Turing machine. A is given a program that instructs the robotic part of A to select a specific Turing machine, and to select the finite set of initial values (perhaps the "constants of nature"), that models the cosmos.

What algorithm is used to instruct A to choose a specific cosmos-outcome algorithm and computation? This is a typical chicken-or-the-egg self-referencing question and as such is related to Turing's halting problem, Godel's incompleteness theorem and Russell's paradox.

If there is an algorithm B to select an algorithm A, what algorithm selected B? -- leading us to an infinite regression. Well, suppose that A has the specific cosmic algorithm, with a set of discrete initial input numbers, a priori? That algorithm, call it Tc, and its instance (the finite set of initial input numbers and the computation, which we regard as still running), imply the general Turing algorithm Tg. We know this from the fact that, by assumption, a formalistic description of Alan Turing and his mathematical logic result were implied by Tc. On the other hand, we know that every computable result is programmable by modifying Tg. All computable results can be cast in the form of "if-then" logic circuits, as is evident from Turing's result.

So we have

Tc <--> Tg

Though this result isn't clearly paradoxical, it is a bit disquieting in that we have no way of explaining why Turing's result didn't "cause" the universe. That is, why didn't it happen that Tg implied Turing who (which) in turn implied the Big Bang? Indeed, wouldn't it be just as probable that the universe kicked off as Alan Turing's result, with the Big Bang to follow? (This is not a philosophical question so much as a question of logic.)

Be that as it may, the point is that we have not succeeded in fully modeling the universe as a Turing machine.

The issue in a nutshell: how did the cosmos instruct itself to unfold? Since the universe contains everything, it must contain the instructions for its unfoldment. Hence, we have the Tc instructing its program to be formed.

Another way to say this: If the universe can be modeled as a Turing computation, can it also be modeled as a program? If it can be modeled as a program, can it then be modeled as a robot forming a program and then carrying it out?

In fact, by Godel's incompleteness theorem, we know that the issue of Tc "choosing" itself to run implies that the Tc is a model (mathematically formal theory) that is inconsistent or incomplete. This assertion follows from the fact that the Tc requires a set of axioms in order to exist (and hence "run"). That is, there must be a set of instructions that orders the layout of the logic circuit. However, by Godel's result, the Turing machine is unable to determine a truth value for some statements relating to the axioms without extending the theory ("rewiring the logic circuit") to include a new axiom.

This holds even if Tc = Tg (though such an equality implies a continuity between the program and the computation which perforce bars an accurate model using any Turing machines).

So then, any model of the cosmos as a Boolean logic circuit is inconsistent or incomplete. In other words, a Turing machine cannot fully describe the cosmos.

If by "Theory of Everything" is meant a formal logico-mathematical system built from a finite set of axioms [though, in fact, Zermelo-Frankel set theory includes an infinite subset of axioms], then that TOE is either incomplete or inconsistent. Previously, one might have argued that no one has formally established that a TOE is necessarily rich enough for Godel's incompleteness theorem to be known to apply. Or, as is common, the self-referencing issue is brushed aside as a minor technicality.

Of course, the Church thesis essentially tells us that any logico-mathematical system can be represented as a Turing machine or set of machines and that any logico-mathematical value that can be expressed from such a system can be expressed as a Turing machine output. (Again, Godel puts limits on what a Turing machine can do.)

So, if we accept the Church thesis -- as most logicians do -- then our result says that there is always "something" about the cosmos that Boolean logic -- and hence the standard "scientific method" -- cannot explain.

Even if we try representing "parallel" universes as a denumerable family of computations of one or more Turing algorithms, with the computational instance varying by input values, we face the issue of what would be used to model the master programmer.

Similarly, one might imagine a larger "container" universe in which a full model of "our" universe is embedded. Then it might seem that "our" universe could be modeled in principle, even if not modeled by a machine or computation modeled in "our" universe. Of course, then we apply our argument to the container universe, reminding us of the necessity of an infinity of extensions of every sufficiently rich theory in order to incorporate the next stage of axioms and also reminding us that in order to avoid the paradox inherent in the set of all universes, we would have to resort to a Zermelo-Fraenkel-type axiomatic ban on such a set. Now we arrive at another point: If the universe is modeled as a quantum computation, would not such a framework possibly resolve our difficulty?

If we use a quantum computer and computation to model the universe, we will not be able to use a formal logical system to answer all questions about it, including what we loosely call the "frame" question -- unless we come up with new methods and standards of mathematical proof that go beyond traditional Boolean analysis.

Let us examine the hope expressed in Stephen Wolfram's A New Kind of Science that the cosmos can be summarized in some basic rule of the type found in his cellular automata graphs.

We have no reason to dispute Wolfram's claim that his cellular automata rules can be tweaked to mimic any Turing machine. (And it is of considerable interest that he finds specific CA/TM that can be used for a universal machine.)

So if the cosmos can be modeled as a Turing machine then it can be modeled as a cellular automaton. However, a CA always has a first row, where the algorithm starts. So the algorithm's design -- the Turing machine -- must be axiomatic. In that case, the TM has not modeled the design of the TM nor the specific initial conditions, which are both parts of a universe (with that word used in the sense of totality of material existence).

We could of course think of a CA in which the first row is attached to the last row and a cylinder formed. There would be no specific start row. Still, we would need a CA whereby the rule applied with arbitrary row n as a start yields the same total output as the rule applied at arbitrary row m. This might resolve the time problem, but it is yet to be demonstrated that such a CA -- with an extraordinarily complex output -- exists. (Forgive the qualitative term extraordinarily complex. I hope to address this matter elsewhere soon.)

However, even with time out of the way, we still have the problem of the specific rule to be used. What mechanism selects that? Obviously it cannot be something from within the universe. (Shades of Russell's paradox.)
Footnotes
1a. For more on this, see The Janus Point -- A New Theory of Time by Julian Barbour (Basic Books 2020).
1. Informally, one can think of a general Turing machine as a set of logic gates that can compose any Boolean network. That is, we have a set of gates such as "not", "and," "or," "exclusive or," "copy," and so forth. If-then is set up as "not-P or Q," where P and Q themselves are networks constructed from such gates. A specific Turing machine will then yield the same computation as a specific logic circuit composed of the sequence of gates.

By this, we can number any computable output by its gates. Assuming we have fewer than 10 gate types (which is more than necessary), we can assign a base-10 digit to each gate. In that case, the code number of the circuit is simply the digit string representing the sequence of gates.
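As a toy illustration of this numbering (my own sketch; the gate-to-digit table below is arbitrary and not part of the argument):

    # Number a circuit by the digit string of its gate sequence.
    # The gate-to-digit assignment here is arbitrary.
    GATE_DIGITS = {"not": 0, "and": 1, "or": 2, "xor": 3, "copy": 4}

    def circuit_code(gates):
        # Return the digit string that codes a sequence of gates.
        return "".join(str(GATE_DIGITS[g]) for g in gates)

    # "If P then Q" rendered as (not P) or Q:
    print(circuit_code(["not", "or"]))   # prints "02"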

Note that circuit A and circuit B may yield the same computation. Still, there is a countable infinity of such programs, though, if we use any real for an input value, we would have an uncountable infinity of outputs. But this cannot be, because an algorithm for producing a real number in a finite number of steps can only produce a rational approximation of an irrational. Hence, there is only a countable number of outputs.
¶ Dave Selke, an electrical engineer with a computer background, has made a number of interesting comments concerning this page, spurring me to redo the argument in another form. The new essay is entitled On Hilbert's sixth problem.
¶ Thanks to Josh Mitteldorf, a mathematician and physicist, for his incisive and helpful comments. Based upon a previous draft, Dr. Mitteldorf said he believes I have shown that, if the universe is finite, it cannot be modeled by a subset of itself but he expressed wariness over the merit of this point.

Discussion of Wolfram cellular automata was added to Draft 3. The notes at the beginning of the chapter were added in Draft 4.

Chapter 6
Einstein, Sommerfeld and the twin paradox


This essay is in no way intended to impugn the important contributions of Einstein or other physicists. Everyone makes errors.
The paradox
Einstein's groundbreaking 1905 relativity paper, "On the electrodynamics of moving bodies," contained a fundamental inconsistency which was not addressed until 10 years later, with the publication of his paper on gravitation.

Many have written on this inconsistency, known as the "twin paradox" or the "clock paradox" and more than a few have not understood that the "paradox" does not refer to the strangeness of time dilation but to a logical inconsistency in what is now known as the special (for "special case") theory of relativity.

Among those missing the point: Max Born in his book on special relativity1, George Gamow in an essay and Roger Penrose in Road to Reality2, and, most recently, Leonard Susskind in The Black Hole War.3

Among those who have correctly understood the paradox are topologist Jeff Weeks (see link above) and science writer Stan Gibilisco4, who noted that the general theory of relativity resolves the problem.

As far back as the 1960s, the British physicist Herbert Dingle5 called the inconsistency a "regrettable error" and was deluged with "disproofs" of his assertion from the physics community. (It should be noted that Dingle's 1949 attempt at relativistic physics left Einstein bemused.6) Yet every "disproof" of the paradox that I have seen uses acceleration, an issue not addressed by Einstein until the general theory of relativity. It was Einstein who set himself up for the paradox by favoring the idea that only purely relative motions are meaningful, writing that various examples "suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest." [Electrodynamics translated by Perrett and Jeffery and appearing in a Dover (1952) reprint]. In that paper, he also takes pains to note that the term "stationary system" is a verbal convenience only.7

But later in Elect., Einstein offered the scenario of two initially synchronized clocks at rest with respect to each other. One clock then travels around a closed loop, and its time is dilated with respect to the at-rest clock when they meet again. In Einstein's words: "If we assume that the result proved for a polygonal line is also valid for a continuously curved line, we arrive at this result: If one of two synchronous clocks at A is moved in a journey lasting t seconds, then by the clock which has remained at rest the traveled clock on its arrival at A will be (1/2)tv^2/c^2 slow."
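(As a quick check of the quoted formula -- my own sketch, not anything in Einstein's paper -- the exact lag t(1 - (1 - v^2/c^2)^(1/2)) and the approximation (1/2)tv^2/c^2 agree closely at speeds small compared with c:)

    # Compare the exact special-relativistic clock lag with Einstein's
    # low-velocity approximation (1/2) t v^2 / c^2.
    from math import sqrt

    c = 299_792_458.0            # m/s
    t = 3600.0                   # a one-hour journey, chosen arbitrarily
    for v in (1e3, 1e5, 1e7):    # m/s
        exact = t * (1 - sqrt(1 - (v / c) ** 2))
        approx = 0.5 * t * v ** 2 / c ** 2
        print(f"v = {v:.0e} m/s  exact lag = {exact:.3e} s  approx = {approx:.3e} s")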

Clearly, if there is no preferred frame of reference, a contradiction arises: when the clocks meet again, which clock has recorded fewer ticks?

Both in the closed loop scenario and in the polygon-path scenario, Einstein avoids the issue of acceleration. Hence, he does not explain that there is a property of "real" acceleration that is not symmetrical or purely relative and that consequently a preferred frame of reference is implied, at least locally.

The paradox stems from the fact that one cannot say which velocity is higher without a "background" reference frame. In Newtonian terms, the same issue arises: if one body is accelerating away from the other, how do we know which body experiences the "real" force? No answer is possible without more information, implying a background frame.

In comments published in 1910, the physicist Arnold Sommerfeld, a proponent of relativity theory, "covers" for the new paradigm by noting that Einstein didn't really mean that time dilation was associated with purely relative motion, but rather with accelerated motion; and that hence relativity was in that case not contradictory.

Sommerfeld wrote: "On this [a time integral and inequality] depends the retardation of the moving clock compared with the clock at rest. The assertion is based, as Einstein has pointed out, on the unprovable assumption that the clock in motion actually indicates its own proper time; i.e. that it always gives the time corresponding to the state of velocity, regarded as constant, at any instant. The moving clock must naturally have been moved with acceleration (with changes of speed or direction) in order to be compared with the stationary clock at world-point P. The retardation of the moving clock does not therefore actually indicate 'motion,' but 'accelerated motion.' Hence this does not contradict the principle of relativity." [Notes appended to Space and Time, a 1908 address by Hermann Minkowski, Dover 1952, Note 4.]

However, Einstein's 1905 paper does not tackle the issue of acceleration and more to the point, does not explain why purely relative acceleration would be insufficient to meet the facts. The principle of relativity applies only to "uniform translatory motion" (Elect. 1905).

Neither does Sommerfeld's note address the issue of purely relative acceleration versus "true" acceleration, perhaps implicitly accepting Newton's view (below).

And, a review of various papers by Einstein seems to indicate that he did not deal with this inconsistency head-on, though in a lecture-hall discussion ca. 1912, Einstein said that the [special] theory of relativity is silent on how a clock behaves if forced to change direction but argues that if a polygonal path is large enough, accelerative effects diminish and (linear) time dilation still holds.

On the other hand, of course, he was not oblivious to the issue of acceleration. In 1910, he wrote that the principle of relativity meant that the laws of physics are independent of the state of motion, but that the motion is non-accelerated. "We assume that the motion of acceleration has an objective meaning," he said. [The Principle of Relativity and its Consequences in Modern Physics, a 1910 paper reproduced in Collected Papers of Albert Einstein, Hebrew University, Princeton University Press].

In that same paper Einstein emphasizes that the principle of relativity does not cover acceleration. "The laws governing natural phenomena are independent of the state of motion of the coordinate system to which the phenomena are observed, provided this system is not in accelerated motion."

Clearly, however, he is somewhat ambiguous about small accelerations and radial acceleration, as we see from the lecture-hall remark and from a remark in Foundation of the General Theory of Relativity (1915) about a "familiar result" of special relativity whereby a clock on a rotating disk's rim ticks slower than a clock at the origin.

General relativity's partial solution
Finally, in his 1915 paper on general relativity, Einstein addressed the issue of acceleration, citing what he called "the principle of equivalence." That principle (actually, introduced prior to 1915) said that there was no real difference between kinematic acceleration and gravitational acceleration. Scientifically, they should be treated as if they are the same.

So then, Einstein notes in Foundation, if we have system K and another system K' accelerating with respect to K, clearly, from a "Galilean" perspective, we could say that K was accelerating with respect to K'. But, is this really so?

Einstein argues that if K is at rest relative to K', which is accelerated, the observer on K cannot claim that he is being accelerated -- even though, in purely relative terms, such a claim is valid. The reason for this rejection of Galilean relativity: We may equally well interpret K' to be kinematically unaccelerated though the "space-time territory in question is under the sway of a gravitational field, which generates the accelerated motion of the bodies" in the K' system.

This claim is based on the principle of equivalence which might be considered a modification of his previously posited principle of relativity. By the relativity principle, Einstein meant that the laws of physics can be cast in invariant form so that they apply equivalently in any uniformly moving frame of reference. (For example, |v_b - v_a| is the invariant quantity that describes an equivalence class of linear velocities.)

By the phrase "equivalence," Einstein is relating impulsive acceleration (for example, a projectile's x vector) to its gravitational acceleration (its y vector). Of course, Newton's mechanics already said that the equation F = mg is a special case of F = ma but Einstein meant something more: that local spacetime curvature is specific for "real" accelerations -- whether impulsive or gravitational.

Einstein's "equivalence" insight was his recognition that one could express acceleration, whether gravitational or impulsive, as a curvature in the spacetime continuum (a concept introduced by Minkowski). This means, he said, that the Newtonian superposition of separate vectors was not valid and was to be replaced by a unitary curvature. (Though the calculus of spacetime requires specific tools, the concept isn't so hard to grasp. Think of a Mercator map: the projection of a sphere onto a plane. Analogously, general relativity projects a 4-dimensional spacetime onto a Euclidean three-dimensional space.)

However, is this "world-line" answer the end of the problem of the asymmetry of accelerated motion?

The Einstein of 1915 implies that if two objects have two different velocities, we must regard one as having an absolutely higher velocity than the other because one object has been "really" accelerated.

Yet one might conjecture that if two objects move with different velocities wherein neither has a prior acceleration, then the spacetime curvature would be identical for each object and the objects' clocks would not get out of step. But such a conjecture would violate the limiting case of special relativity (and hence general relativity); specifically, such a conjecture would be inconsistent with the constancy of the vacuum velocity of light in any reference frame.

So then, general relativity requires that velocity differences are, in a sense, absolute. Yet in his original static and eternal cosmic model of 1917, there was no reason to assume that two velocities of two objects necessarily implied the acceleration of one object. Einstein introduced the model, with the cosmological constant appended in order to contend with the fact that his 1915 formulation of GR apparently failed to account for the observed mass distribution of the cosmos.

Despite the popularity of the Big Bang model, a number of cosmic models hold the option that some velocity differences needn't imply an acceleration, strictly relative or "real."

Einstein's appeal to spacetime curvature to address the frame of reference issue is similar to Newton's assertion that an accelerated body requires either an impulse imputed to it or the gravitational force. There is an inherent local physical asymmetry. Purely relative motion will not do.

Einstein also brings up the problem of absolute relative motion in the sense of Newton's bucket. Einstein uses two fluid bodies in space, one spherical, S1 and another an ellipsoid of revolution, S2. From the perspective of "Galilean relativity," one can as easily say that either body is at rest with respect to the other.

But, the radial acceleration of S2 results in a noticeable difference: an equatorial bulge. Hence, says Einstein, it follows that the difference in motion must have a cause outside the system of the two bodies.

Of course Newton in Principia Mathematica first raised this point, noting that the surface of water in a rapidly spinning bucket becomes concave. This, he said, demonstrated that force must be impressed on a body in order for there to be a change in its state of motion. Newton also mentioned the issue of the fixed stars as possibly of use for a background reference frame, though he does not seem to have insisted on that point. He did however find that absolute space would serve as a background reference frame.

It is interesting to note here that Einstein's limit c can be used as an alternative to the equatorial bulge argument. If we suppose that a particular star is sufficiently distant, then the linear velocity implied by its apparent daily revolution about the earth will exceed the velocity of light. Such a circumstance being forbidden, we are forced to conclude that the earth is spinning, rather than the star revolving around the earth. We see that, in this sense, the limit c can be used to imply a specific frame of reference. At this point, however, I cannot say that such a circumstance suffices to resolve the clock paradox of special relativity.
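To put a rough number on that argument (my own illustration; the sidereal day and the astronomical unit are assumed constants, not figures from the text): if the heavens really revolved about a fixed earth once per sidereal day T, a body at distance r would need tangential speed 2πr/T, which exceeds c once r > cT/2π -- only about 27 astronomical units.

    # Distance beyond which apparent daily revolution about the earth would exceed c.
    from math import pi

    c = 299_792_458.0      # m/s
    T = 86_164.1           # sidereal day in seconds (assumed)
    AU = 1.496e11          # astronomical unit in meters (assumed)

    r_limit = c * T / (2 * pi)
    print(r_limit, "m =", r_limit / AU, "AU")   # about 4.1e12 m, roughly 27 AU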

Interestingly, the problem of Newton's bucket is quite similar to the clock paradox of special relativity. In both scenarios, we note that if two motions are strictly relative, what accounts for a property associated with one motion and not the other? In both cases, we are urged to focus on the "real" acceleration.

Newton's need for a background frame to cope with "real" acceleration predates the 19th century refinement of the concept of energy as an ineffable, essentially abstract "substance" which passes from one event to the next. That concept was implicit in Newton's Principia but not explicit and hence Newton did not appeal to the "energy" of the object in motion to deal with the problem. That is, we can say that we can distinguish between two systems by examining their parts. A system accelerated to a non-relativistic speed nevertheless discloses its motion by the fact that the parts change speed at different times as a set of "energy transactions" occur. For example, when you step on the accelerator, the car seat moves forward before you do; you catch up to the car "because" the car seat imparts "kinetic energy" to you.

But if you are too far away to distinguish individual parts or a change in the object's shape, such as from equatorial bulge, your only hope for determining "true" acceleration is by knowing which object received energy prior to the two showing a relative change in velocity.

Has the clock paradox gone away?
Now does GR resolve the clock paradox?

GR resolves the paradox non-globally, in that Einstein now holds that some accelerations are not strictly relative, but functions of a set of curvatures. Hence one can posit the loop scenario given in Electrodynamics and say that only one body can have a higher absolute angular velocity with respect to the other because only one must have experienced an acceleration that distorts spacetime differently from the other.

To be consistent, GR must reflect this asymmetry. That is, suppose we have two spaceships separating along a straight line whereby the distance between them increases at a constant rate. If ship A's TV monitor says B's clock is ticking slower than A's and ship B's TV monitor says A's clock is ticking slower than B's, there must be an objective difference, nevertheless.

The above scenario is incomplete because the "real" acceleration prior to the opening of the scene is not given. Yet, GR does not tell us why a "real" acceleration must have occurred if two bodies are moving at different velocities.

So yes, GR partly resolves the clock paradox and, by viewing the 1905 equations for uniform motion as a special case of the 1915 equations, retroactively removes the paradox from SR, although it appears that Einstein avoided pointing this out in 1915 or thereafter.

However, GR does not specify a global topology (cosmic model) of spacetime, though Einstein struggled with this issue. The various solutions to GR's field equations showed that no specific cosmic model followed from GR. The clock paradox shows up in the Weeks model of the cosmos, with local space being euclidean (or rather Minkowskian). As far as this writer knows, such closed space geodesics cannot be ruled out on GR grounds alone.

Jeff Weeks, in his book The Shape of Space, points out that though physicists commonly think of three cosmic models as suitable for GR, in fact there are three classes of 3-manifolds that are both homogeneous and isotropic (cosmic information is evenly mixed and looks about the same in any direction). Whether spacetime is mathematically elliptic, hyperbolic or Euclidean, there are many possible global topologies for the cosmos, Weeks says.

One model, described by Weeks in the article linked above, permits a traveler to continue straight in a closed universe until she arrives at the point of origin. Again, to avoid contradiction, we are required to accept a priori that an acceleration that alters a world line has occurred.

Other models have the cosmic time axis following hyperbolic or elliptical geometry. Originally, one suspects, Einstein may have been skeptical of such an axis, in that Einstein's abolishment of simultaneity effectively abolished the Newtonian fiction of absolute time. But physicist Paul Davies, in his book About Time, argued that there is a Big Bang oriented cosmic time that can be approximated quite closely.

Kurt Goedel's rotating universe model left room for closed time loops, such that an astronaut who continued on a protracted space flight could fly into his past. This result prompted Goedel to question the reality of time in general relativity. Having investigated various solutions of GR equations, Goedel argued that a median of proper times of moving objects, which James Jeans had thought to serve as a cosmic absolute time, was not guaranteed in all models and hence should be questioned in general.

Certainly we can agree that Goedel's result shows that relativity is incomplete in its analysis of time.

Mach's principles
Einstein was influenced by the philosophical writings of the German physicist Ernst Mach, whom he cites in Foundations.

According to Einstein (1915) Mach's "epistemological principle" says that observable facts must ultimately appear as causes and effects. Mach believed that the brain organizes sensory data into knowledge and that hence data of scientific value should stem from observable, measurable phenomena. This philosophical viewpoint was evident in 1905 when Einstein ruthlessly ejected the Maxwell-Lorentzian ether from physics.

Mach's "epistomological principle" led Mach to reject Newtonian absolute time and absolute space as unverifiable and made Einstein realize that the Newtonian edifice wasn't sacrosanct. However, in 1905 Einstein hadn't replaced the edifice with something called a "spacetime continuum." Curiously, later in his career Einstein impishly but honestly identified this entity as "the ether."

By rejecting absolute space and time, Mach also rejected the usual way of identifying acceleration in what is known as Mach's principle:

Version A. Inertia of a ponderable object results from a relationship of that object with all other objects in the universe.

Version B. The earth's equatorial bulge is not a result of absolute rotation (radial acceleration) but is relative to the distant giant mass of the universe.

For a few years after publication of Foundations, Einstein favored Mach's principle, even using it as a basis of his "cosmological constant" paper, which was his first attempt to fit GR to a cosmic model, but was eventually convinced by the astronomer Willem de Sitter (see Janssen above) to abandon the principle. In 1932 Einstein adopted the Einstein-de Sitter model that posits a cosmos with a global curvature that asymptotically zeroes out over eternity. The model also can be construed to imply a Big Bang, with its ordered set of accelerations.

A bit of fine-tuning
We can fine-tune the paradox by considering the velocity of the center of mass of the twin system. That velocity is m1v/(m1 + m2).

So the CM velocity is larger when the moving mass is the greater, and smaller in the converse case. Letting x be a real greater than 1, we have two masses xm and m. The algebra reveals the factor x/(x+1) > 1/(x+1). The CM velocity for earth moving at 0.6c with respect to a 77kg astronaut is very close to 0.6c. For the converse, however, that velocity is about 2.3 femtometers per second (2.3 x 10^-15 m/s).
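(A minimal sketch of that arithmetic, assuming an earth mass of about 5.97 x 10^24 kg, a figure not given above:)

    # Center-of-mass velocity v_cm = m1*v / (m1 + m2) for the twin system.
    c = 299_792_458.0      # m/s
    m_earth = 5.97e24      # kg (assumed)
    m_astro = 77.0         # kg
    v = 0.6 * c

    v_cm_earth_moving = m_earth * v / (m_earth + m_astro)
    v_cm_astro_moving = m_astro * v / (m_earth + m_astro)

    print(v_cm_earth_moving / c)   # about 0.6, i.e. essentially 0.6c
    print(v_cm_astro_moving)       # about 2.3e-15 m/s, i.e. 2.3 femtometers per second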

If we like, we can use the equation

E = mc^2 (1 - v^2/c^2)^(-1/2)

to obtain the energies of each twin system.

If the earth is in motion and the astronaut at rest, my calculator won't handle the quantity for the energy. If the astronaut is in motion with the earth at rest, then E = 5.38 x 10^41 J.
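(One way to read "the energies of each twin system" -- my interpretation, again with the assumed earth mass -- is to total the rest energy of the body taken to be at rest and the relativistic energy of the moving one:)

    # Total energy of the twin system under each choice of "moving" body.
    from math import sqrt

    c = 299_792_458.0
    m_earth, m_astro = 5.97e24, 77.0      # kg (earth mass assumed)
    v = 0.6 * c
    gamma = 1 / sqrt(1 - (v / c) ** 2)    # 1.25 at 0.6c

    E_astro_moving = m_earth * c**2 + gamma * m_astro * c**2
    E_earth_moving = gamma * m_earth * c**2 + m_astro * c**2

    print(f"{E_astro_moving:.3e} J")      # about 5.4e41 J
    print(f"{E_earth_moving:.3e} J")      # about 6.7e41 J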

But the paradox is restored as soon as we set m1 equal to m2.

Notes on the principle of equivalence
Now an aside on the principle of equivalence. Can it be said that gravitational acceleration is equivalent to kinematic acceleration? Gravitational accelerations are all associated with the gravitational constant G and are of the form g = Gm/r^2. Yet it is easy to write expressions for accelerations that cannot be members of the gravitational set. If a is not constant, we fulfill the criterion. If the acceleration varies as 1/r^x with x ≠ 2, there is an infinity of accelerations that aren't members of the gravitational set.

At any rate, Einstein's principle of equivalence made a logical connection between a ponderable object's inertial mass and its gravitational mass. Newton had not shown a reason that they should be exactly equal, an assumption validated by acute experiments. (A minor technicality: Einstein and others have wondered why these masses should be exactly equal, but, properly, they meant why should they be exactly proportional? Equality is guaranteed by Newton's choice of a gravitational constant. But certainly, m_in = k m_gr, with k equaling one because of Newton's choice.)

Also, GR's field equations rest on the premise (Foundation) that for an infinitesimal region of spacetime, the Minkowskian coordinates of special relativity hold. However, this 1915 assumption is open to challenge on the basis of the Heisenberg uncertainty principle (ca. 1925), which sets a finite limit on the precision of a measurement of a particle's space coordinate given its time coordinate.

Einstein's Kaluza-Klein excursion
In Subtle is the Lord, Pais tells of a period in which Einstein took Klein's idea for a five-dimensional spacetime and reworked it. After a great deal of effort, Einstein offered a paper that presented Klein's ideas as his own, on the basis that he had found a way to rationalize obtaining the five-dimensional effect while sticking to the conventional perceptual view of space and time denoted 3D+T (making one wonder what he thought of his own four-dimensional spacetime scheme).

A perplexed Abraham Pais notes that a colleague dismissed Einstein's work as unoriginal, and Einstein then quickly dropped it.7 But reformulation of the ideas of others is exactly what Einstein did in 1905 with the special theory. He presented the mathematical and physical ideas of Lorentz, FitzGerald and Poincare, whom he very likely read, and refashioned them in a manner that he thought coherent, most famously by rejecting the notion of ether as unnecessary.

Yet it took decades for Einstein to publicly acknowledge the contribution of Poincare, and even then, he let the priority matter remain fuzzy. Poincare's work was published in French in 1904, but went unnoticed by the powerful German-speaking scientific community. Since the young patent clerk was a French-speaking resident of Switzerland, it seems rather plausible that he read Poincare's paper.

But, as Pais pointed out, it was Einstein's interpretation that made him the genius of relativity. And yet, that interpretation was either flawed, or incomplete, as we know from the twin paradox.

Footnotes
Apologies for footnotes being out of order. Haven't time to fix.
1. Einstein's Theory of Relativity by Max Born (Dover 1962).
2. Road to Reality by Roger Penrose (Random House 2006).
3. The Black Hole War by Leonard Susskind (Little Brown 2009).
4. Understanding Einstein's Theories of Relativity by Stan Gibilisco (Dover reprint of the 1983 edition).
7. In his biography of Einstein, Subtle is the Lord (Oxford 1983), physicist Abraham Pais mentions the "clock paradox" in the 1905 Electrodynamics paper but then summarily has Einstein resolve the contradiction in a paper presented to the Prussian Academy of Physics after the correct GR paper of 1915, with Einstein arguing that acceleration ends the paradox, which Pais calls a "misnomer." I don't recall the Prussian Academy paper, but it should be said that Einstein strongly implied the solution to the contradiction in his 1915 GR paper. Later in his book, Pais asserts that sometime after the GR paper, Einstein dispatched a paper on what Pais now calls the "twins paradox" but Pais uncharacteristically gives no citation.
5. Though Dingle seems to have done some astronomical work, he was not -- as a previous draft of this page said -- an astronomer, according to Harry H. Ricker III. Dingle was a professor of physics and natural philosophy at Imperial College before becoming a professor of history and the philosophy of science at City College, London, Ricker said. "Most properly he should be called a physicist and natural philosopher since his objections to relativity arose from his views and interpretations regarding the philosophy of science."
6. Dingle's paper Scientific and Philosophical Implications of the Special Theory of Relativity appeared in 1949 in Albert Einstein: Philosopher-Scientist, edited by Paul Arthur Schilpp. Dingle used this forum to propound a novel extension of special relativity which contained serious logical flaws. Einstein, in a note of response, said Dingle's paper made no sense to him.
8. See for example Max Von Laue's paper in Albert Einstein: Philosopher-Scientist edited by Paul Arthur Schilpp (1949).
This paper was updated on Dec. 10, 2009

Chapter 5
Do dice play God?

A discussion of

Irreligion

by John Allen Paulos.
John Allen Paulos has done a service by compiling the various purported proofs of the existence of a (monotheistic) God and then shooting them down in his book Irreligion: a mathematician explains why the arguments for God just don't add up.

Paulos, a retired Temple University mathematician who writes regularly for the press, would be the first to admit that he has not disproved the existence of God. But, he is quite skeptical of such existence, and I suppose much of the impetus for his book comes from the intelligent design versus accidental evolution controversy [1].

Really, this essay isn't exactly kosher, because I am going to cede most of the ground. My thinking is that if one could use logico-mathematical methods to prove God's existence, this would be tantamount to being able to see God, or to plumb the depths of God. Supposing there is such a God, is he likely to permit his creatures, without special permission, to go so deep?

This essay might also be thought rather unfair because Paulos is writing for the general reader and thus walks a fine line on how much mathematics to use. Still, he is expert at describing the general import of certain mathematical ideas, such as Gregory Chaitin's retooling of Kurt Goedel's undecidability theorem and its application to arguments about what a human can grasp about a "higher power."

Many of Paulos's counterarguments essentially arise from a Laplacian philosophy wherein Newtonian mechanics and statistical randomness rule all and are all. The world of phenomena, of appearances, is everything. There is nothing beyond. As long as we agree with those assumptions, we're liable to agree with Paulos. 

Just because...
Yet a caveat: though mathematics is remarkably effective at describing physical relations, mathematical abstractions are not themselves the essence of being (though even on this point there is a Platonic dispute), but are typically devices used for prediction. The deepest essence of being may well be beyond mathematical or scientific description -- perhaps, in fact, beyond human ken (as Paulos implies, albeit mechanistically, when discussing Chaitin and Goedel) [2].

Paulos's response to the First Cause problem is to question whether postulating a highly complex Creator provides a real solution. All we have done is push back the problem, he is saying. But here we must wonder whether phenomenal, Laplacian reality is all there is. Why shouldn't there be something deeper that doesn't conform to the notion of God as gigantic robot?

But of course it is the concept of randomness that is the nub of Paulos's book, and this concept is at root philosophical, and a rather thorny bit of philosophy it is at that. The topic of randomness certainly has some wrinkles that are worth examining with respect to the intelligent design controversy.

One of Paulos's main points is that merely because some postulated event has a terribly small probability doesn't mean that event hasn't or can't happen. There is a terribly small probability that you will be struck by lightning this year. But every year, someone is nevertheless stricken. Why not you?

In fact, zero probability doesn't mean impossible. Many probability distributions closely follow the normal curve, under which the probability of any single exact value is zero, and yet one assumes that some particular value can be realized (perhaps by resort to the Axiom of Choice). Paulos applies this point to the probabilities for the origin of life, which the astrophysicist Fred Hoyle once likened to the chance of a tornado whipping through a junkyard and leaving a fully assembled jumbo jet in its wake. (Nick Lane in Life Ascending: The Ten Great Inventions of Evolution (W.W. Norton 2009) relates some interesting speculations about life self-organizing around undersea hydrothermal vents. So perhaps the probabilities aren't so remote after all, but, really, we don't know.)

Shake it up, baby
What is the probability of a specific permutation of heads and tails in say 20 fair coin tosses? This is usually given as 0.5^20, or about one chance in a million. What is the probability of 18 heads followed by 2 tails? The same, according to one outlook.

Now that probability holds if we take all permutations, shake them up in a hat and then draw one. All permutations in that case are equiprobable [4]. Intuitively, however, it is hard to accept that 18 heads followed by 2 tails is just as probable as any other ordering. In fact, there are various statistical methods for challenging that idea [5].

One, which is quite useful, is the runs test, which determines the probability that a particular sequence falls within the random area of the related normal curve. A runs test of 18H followed by 2T gives a z score of 3.71, which isn't ridiculously high, but implies that the ordering did not occur randomly with a confidence of 0.999.

Now compare that score with this permutation: HH TTT H TT H TT HH T HH TTT H. A runs test z score gives 0.046, which is very near the normal mean. To recap: the probability of drawing a number with 18 ones (or heads) followed by 2 zeros (or tails) from a hat full of all 20-digit strings is on the order of 10^-6. The probability that that sequence is random is on the order of 10^-4. For comparison, we can be highly confident the second sequence is, absent further information, random. (I actually took it from irrational root digit strings.)
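(For the record, here is a minimal sketch of the runs test being used -- the standard Wald-Wolfowitz z statistic; the figures quoted above correspond to the absolute values of such scores, small rounding differences aside:)

    # Wald-Wolfowitz runs test: z = (R - mu) / sigma for a two-symbol sequence.
    from math import sqrt

    def runs_z(seq):
        n1, n2 = seq.count("H"), seq.count("T")
        runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
        mu = 2 * n1 * n2 / (n1 + n2) + 1
        var = 2 * n1 * n2 * (2 * n1 * n2 - n1 - n2) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
        return (runs - mu) / sqrt(var)

    print(runs_z("H" * 18 + "T" * 2))        # about -3.70
    print(runs_z("HHTTTHTTHTTHHTHHTTTH"))    # about 0.046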

Again, those permutations with high runs test z scores are considered to be almost certainly non-random [3].

At the risk of flogging a dead horse, let us review Paulos's example of a very well-shuffled deck of ordinary playing cards. The probability of any particular permutation is about one in 10^68, as he rightly notes. But suppose we mark each card's face with a number, ordering the deck from 1 to 52. When the well-shuffled deck is turned over one card at a time, we find that the cards come out in exact sequential order. Yes, that might be random luck. Yet the runs test z score is a very large 7.563, which implies effectively 0 probability of randomness as compared to a typical sequence. (We would feel certain that the deck had been ordered by intelligent design.)

Does not compute
The intelligent design proponents, in my view, are trying to get at this particular point. That is, some probabilities fall, even with a lot of time, into the nonrandom area. I can't say whether they are correct about that view when it comes to the origin of life. But I would comment that when probabilities fall far out in a tail, statisticians will say that the probability of non-random influence is significantly high. They will say this if they are seeking either mechanical bias or human influence. But if human influence is out of the question, and we are not talking about mechanical bias, then some scientists dismiss the non-randomness argument simply because they don't like it.

Another issue raised by Paulos is the fact that some of Stephen Wolfram's cellular automata yield "complex" outputs. (I am currently going through Wolfram's A New Kind of Science (Wolfram Media 2002) carefully, and there are many issues worth discussing, which I'll do, hopefully, at a later date.)

Like mathematician Eric Schechter (see link below), Paulos sees cellular automaton complexity as giving plausibility to the notion that life could have resulted when some molecules knocked together in a certain way. Wolfram's Rule 110 is equivalent to a Universal Turing Machine and this shows that a simple algorithm could yield any computer program, Paulos points out.
Paulos might have added that there is a countable infinity of computer programs. Each such program is computed according to the initial conditions of the Rule 110 automaton. Those conditions are the length of the starter cell block and the colors (black or white) of each cell.
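(For concreteness, a minimal sketch of the Rule 110 update itself -- my own few lines, using a wrap-around row rather than Wolfram's unbounded one:)

    # Rule 110: each cell's next value is determined by the 3-cell neighborhood
    # (left, self, right), read off as a bit of the number 110.
    RULE = 110

    def step(cells):
        n = len(cells)
        nxt = []
        for i in range(n):
            left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            idx = (left << 2) | (center << 1) | right   # neighborhood as a 3-bit index
            nxt.append((RULE >> idx) & 1)
        return nxt

    row = [0] * 60 + [1]          # initial condition: a single black cell
    for _ in range(30):
        print("".join(".#"[c] for c in row))
        row = step(row)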

So, a relevant issue is, if one feeds a randomly selected initial state into a UTM, what is the probability it will spit out a highly ordered (or complex or non-random) string versus a random string? In other words, what is the probability such a string would emulate some Turing machine? Runs test scores would show the obvious: so-called complex strings will fall way out under a normal curve tail.

Grammar tool
I have run across quite a few ways of gauging complexity, but, barring an exact molecular approach, it seems to me the concept of a grammatical string is relevant.

Any cell, including the first, may be described as a machine. It transforms energy and does work (as in W = (1/2)mv^2). Hence it may be described with a series of logic gates. These logic gates can be combined in many ways, but most permutations won't work (the jumbo jet effect).

For example, if we have 8 symbols and a string of length 20, we have 8^20 (on the order of 10^18) possible arrangements. But how likely is it that a random arrangement will be grammatical?

Let's consider a toy grammar with the symbols a,b,c. Our only grammatical rule is that b may not immediately follow a.

So for the first three symbols, abc and cab are illegal and the other four possibilities are legal. This gives a (1/3) probability of error on the first step. In this case, the probability of error at every third step is not independent of the previous probability, as can be seen from the permutations:
 abc  bca  acb  bac  cba  cab
That is, for example, bca followed by bac gives an illegal ordering. So the probability of error increases with n.

However, suppose we hold the probability of error at (1/3). In that case the probability of a legal string where n = 30 is less than (2/3)^10 = 1.73%. Even if the string can tolerate noise, the error probabilities rise rapidly. Suppose a string of 80 can tolerate 20 percent of its digits wrong. In that case we make our n = 21.333. That is, the probability of success is (2/3)^21.333 = 0.000175.
And this is a toy model. The actual probabilities for long grammatical strings are found far out under a normal curve tail. 
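(A brute-force check of the toy model -- my own sketch; it computes the exact fraction of strings avoiding "ab," whereas the (2/3)^(n/3) figure above is a simplified estimate, so the numbers differ somewhat:)

    # Fraction of strings over {a, b, c} of length n in which 'b' never
    # immediately follows 'a'.
    from itertools import product

    def legal_fraction(n):
        total = legal = 0
        for s in product("abc", repeat=n):
            total += 1
            legal += "ab" not in "".join(s)
        return legal / total

    for n in (3, 6, 9, 12):
        print(n, round(legal_fraction(n), 4))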

This is to inform you
A point that arises in such discussions concerns entropy (the tendency toward decrease of order) and the related idea of information, which is sometimes thought of as the surprisal value of a digit string. Sometimes a pattern such as HHHH... is considered to have low information because we can easily calculate the nth value (assuming we are using some algorithm to obtain the string). So the Chaitin-Kolmogorov complexity is low, or that is, the information is low. On the other hand a string that by some measure is effectively random is considered here to be highly informative because the observer has almost no chance of knowing the string in detail in advance.

However, we can also take the opposite tack. Using runs testing, most digit strings (multi-value strings can often be transformed, for test purposes, to bi-value strings) are found under the bulge in the runs test bell curve and represent probable randomness. So it is unsurprising to encounter such a string. It is far more surprising to come across a string with far "too few" or far "too many" runs. These highly ordered strings would then be considered to have high information value.

This distinction may help address Wolfram's attempt to cope with "highly complex" automata. By these, he means those with irregular, random-like structures running through periodic "backgrounds." If a sufficiently long runs test were done on such automata, we would obtain, I suggest, z scores in the high but not outlandish range. The z score would give a gauge of complexity.

We might distinguish complicatedness from complexity by saying that a random-like permutation of our grammatical symbols is merely complicated, but a grammatical permutation, possibly adjusted for noise, is complex. (We see, by the way, that grammatical strings require conditional probabilities.) 

A jungle out there
Paulos's defense of the theory of evolution is precise as far as it goes but does not acknowledge the various controversies on speciation among biologists, paleontologists and others.

Let us look at one of his counterarguments:

The creationist argument goes roughly as follows: "A very long sequence of individually improbable mutations must occur in order for a species or a biological process to evolve. If we assume these are independent events, then the probability that all of them will occur in the right order is the product of their respective probabilities" and hence a speciation probability is minuscule. "This line of argument," says Paulos, "is deeply flawed."

He writes: "Note that there are always a fantastically huge number of evolutionary paths that might be taken by an organism (or a process), but there is only one that actually will be taken. So, if, after the fact, we observe the particular evolutionary path actually taken and then calculate the a priori probability of its having been taken, we will get the miniscule probability that creationists mistakenly attach to the process as a whole."

Though we have dealt with this argument in terms of probability of the original biological cell, we must also consider its application to evolution via mutation. We can consider mutations to follow conditional probabilities. And though a particular mutation may be rather probable by being conditioned by the state of the organism (previous mutation and current environment), we must consider the entire chain of mutations represented by an extant species.

If we consider each species as representing a chain of mutations from the primeval organism, then we have for each a chain of conditional probability. A few probabilities may be high, but most are extremely low. Conditional probabilities can be graphed as trees of branching probabilities, so that a chain of mutation would be represented by one of these paths. We simply multiply each branch probability to get the total probability per path.

As a simple example, a 100-step conditional probability path with 10 probabilities of 0.9 and 60 with 0.7 and 30 with 0.5 yields an overall probability of 1.65 x 10^-19. In other words, the more mutations and ancestral species attributed to an extant species, the less likely that species is to exist via passive natural selection. The actual numbers are so remote as to make natural selection by passive filtering virtually impossible, though perhaps we might conjecture some nonlinear effect going on among species that tends to overcome this problem.
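(Checking the arithmetic of that 100-step example:)

    # Overall probability along one conditional path:
    # 10 branches at 0.9, 60 at 0.7, 30 at 0.5.
    p = 0.9**10 * 0.7**60 * 0.5**30
    print(p)    # about 1.65e-19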

Think of it this way: During an organism's lifetime, there is a fantastically large number of possible mutations. What is the probability that the organism will happen upon one that is beneficial? That event would, if we are talking only about passive natural selection, be found under a probability distribution tail (whether normal, Poisson or other). The probability of even a few useful mutations occurring over 3.5 billion years isn't all that great (though I don't know a good estimate).

A botific vision
Let us, for example, consider Wolfram's cellular automata, which he puts into four qualitative classes of complexity. One of Wolfram's findings is that adding complexity to an already complex system does little or nothing to increase the complexity, though randomized initial conditions might speed the trend toward a random-like output (a fact which, we acknowledge, could be relevant to evolution theory).

Now suppose we take some cellular automata and, at every nth or so step, halt the program and revise the initial conditions slightly or greatly, based on a cell block between cell n and cell n+m. What is the likelihood of increasing complexity to the extent that a Turing machine is devised? Or suppose an automaton is already a Turing machine. What is the probability that it remains one or that a more complex-output Turing machine results from the mutation?

I haven't calculated the probabilities, but I would suppose they are all out under a tail.

In countering the idea that "self-organization" is unlikely, Paulos has elsewhere underscored the importance of Ramsey theory, which has an important role in network theory. Actually, with sufficient n, "highly organized" networks are very likely [6]. Whether this implies sufficient resources for the self-organization of a machine is another matter. True, a high n seems to guarantee such a possibility. But the n may be too high to be reasonable.

Darwin on the Lam?
However, it seems passive natural selection has an active accomplice in the extraordinarily subtle genetic machinery. It seems that some form of neo-Lamarckianism is necessary, or at any rate a negative feedback system which tends to damp out minor harmful mutations without ending the lineage altogether (catastrophic mutations usually go nowhere, the offspring most often not getting a chance to mate). 

Matchmaking
It must be acknowledged that in microbiological matters, probabilities need not always follow a routine independence multiplication rule. In cases where random matching is important, we have the number 0.63 turning up quite often.

For example, if one has n addressed envelopes and n correspondingly addressed letters are randomly shuffled and then put in the envelopes, what is the probability that at least one letter arrives at the correct destination? The surprising answer is that it is the alternating sum 1 - 1/2! + 1/3! - ... carried out to the 1/n! term. For n greater than 10 the probability converges near 63%.

That is, we don't calculate, say, 11^-11 (about 3.5 x 10^-12), but instead find that our series approximates very closely 1 - e^-1 = 0.63.
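(A minimal sketch of that series, compared with 1 - 1/e:)

    # Probability that at least one of n letters lands in its correct envelope:
    # the inclusion-exclusion sum 1 - 1/2! + 1/3! - ... out to the 1/n! term.
    from math import factorial, exp

    def p_at_least_one(n):
        return sum((-1) ** (k + 1) / factorial(k) for k in range(1, n + 1))

    for n in (3, 5, 10, 20):
        print(n, p_at_least_one(n))
    print("1 - 1/e =", 1 - exp(-1))    # about 0.632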

Similarly, suppose one has eight distinct pairs of socks randomly strewn in a drawer and thoughtlessly pulls out six one by one. What is the probability of at least one matching pair?

The first sock has no match. The probability the second will fail to match the first is 14/15. The probability for the third failing to match is 12/14 and so on until the sixth sock. Multiplying all these probabilities to get the probability of no match at all yields 32/143. Hence the probability of at least one match is 1 - 32/143 or about 78%.
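(The same product done with exact fractions -- a minimal sketch of the reasoning above: after k non-matching draws, 16 - k socks remain and k of them would complete a pair:)

    # Probability of no matching pair in six draws from eight distinct pairs (16 socks).
    from fractions import Fraction

    p_no_match = Fraction(1)
    for k in range(1, 6):                  # the 2nd through 6th sock drawn
        p_no_match *= Fraction(16 - 2 * k, 16 - k)

    print(p_no_match)                      # 32/143
    print(float(1 - p_no_match))           # about 0.776, i.e. roughly 78%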

It may be that the ins and outs of evolution arguments were beyond the scope of Irreligion, but I don't think Paulos has entirely refuted the skeptics [7].

Nevertheless, the book is a succinct reference work and deserves a place on one's bookshelf.

Footnotes
1. Paulos finds himself disconcerted by the "overbearing religiosity of so many humorless people." Whenever one upholds an unpopular idea, one can expect all sorts of objections from all sorts of people, not all of them well mannered or well informed. Comes with the territory. Unfortunately, I think this backlash may have blinded him to the many kind, cheerful and non-judgmental Christians and other religious types in his vicinity. Some people, unable to persuade Paulos of God's existence, end the conversation with "I'll pray for you..." I can well imagine that he senses that the pride of the other person is motivating a put-down. Some of these souls might try not letting the left hand know what the right hand is doing.
2. Paulos recounts this amusing fable: The great mathematician Euler was called to court to debate the necessity of God's existence with a well-known atheist. Euler opens with: "Sir, (a + b^n)/n = x. Hence, God exists. Reply." Flabbergasted, his mathematically illiterate opponent walked away, speechless. Yet, is this joke as silly as it at first seems? After all, one might say that the mental activity of mathematics is so profound (even if the specific equation is trivial) that the existence of a Great Mind is implied.
3. We should caution that the runs test, which works for n1 and n2 each at least equal to 8, fails for the pattern HH TT HH TT... This failure seems to be an artifact of the runs test assumption that a usual number of runs is about n/2. I suggest that we simply say that the probability of that pattern is less than or equal to that of H T H T H T..., a pattern whose z score rises rapidly with n. Other patterns, such as HHH TTT HHH..., also climb away from the randomness area with n, though slowly. With these cautions, however, the runs test gives striking results. (A small numerical sketch of these z scores appears after these footnotes.)
4. Thanks to John Paulos for pointing out an embarrassing misstatement in a previous draft. I somehow mangled the probabilities during the editing. By the way, my tendency to write flubs when I actually know better is a real problem for me and a reason I need attentive readers to help me out.
5. I also muddled this section. Josh Mitteldorf's sharp eyes forced a rewrite.
6. Paulos in a column writes:
A more profound version of this line of thought can be traced back to British mathematician Frank Ramsey, who proved a strange theorem. It stated that if you have a sufficiently large set of geometric points and every pair of them is connected by either a red line or a green line (but not by both), then no matter how you color the lines, there will always be a large subset of the original set with a special property. Either every pair of the subset's members will be connected by a red line or every pair of the subset's members will be connected by a green line.

If, for example, you want to be certain of having at least three points all connected by red lines or at least three points all connected by green lines, you will need at least six points. (The answer is not as obvious as it may seem, but the proof isn't difficult.) For you to be certain that you will have four points, every pair of which is connected by a red line, or four points, every pair of which is connected by a green line, you will need 18 points, and for you to be certain that there will be five points with this property, you will need -- it's not known exactly -- between 43 and 55. With enough points, you will inevitably find unicolored islands of order as big as you want, no matter how you color the lines.

7. Paulos, interestingly, tells of how he lost a great deal of money by an ill-advised enthusiasm for WorldCom stock in A Mathematician Plays the Stock Market (Basic Books, 2003). The expert probabilist and statistician found himself under a delusion which his own background should have fortified him against. (The book, by the way, is full of penetrating insights about probability and the market.) One wonders whether Paulos might also be suffering from another delusion: that probabilities favor atheism.
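As promised in note 3, here is a minimal sketch (Python) of the standard runs-test z computation, using the usual normal approximation for the number of runs. It shows the z score of the strictly alternating pattern climbing with n while the HH TT HH TT pattern stays near zero; the pattern lengths chosen are arbitrary.

# Runs-test z scores for the patterns discussed in note 3.
from math import sqrt

def runs_z(seq):
    n1 = seq.count('H')
    n2 = seq.count('T')
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - mu) / sqrt(var)

for n in (16, 32, 64):
    alternating = 'HT' * (n // 2)          # H T H T ...
    blocky      = 'HHTT' * (n // 4)        # HH TT HH TT ...
    print(n, round(runs_z(alternating), 2), round(runs_z(blocky), 2))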
Relevant pages
The knowledge delusion: a rebuttal of Dawkins
Hilbert's 6th problem and Boolean circuits
Wikipedia article on Chaitin-Kolmogorov complexity
In search of a blind watchmaker
Wikipedia article on runs test
Eric Schechter on Wolfram vs intelligent design
The scientific embrace of atheism (by David Berlinski)
John Allen Paulos's home page
The many worlds of probability, reality and cognition

Chapter 4
The knowledge delusion: A rebuttal of Dawkins


Reflections on The God Delusion
(Houghton Mifflin 2006) by the evolutionary biologist Richard Dawkins.

Cultural and religious strife sorely trouble today's world. Atheism is becoming ever more fashionable, partly in reaction to the horrors carried out by religious zealots of all stripes. Drone warfare has highlighted the truth of Hannah Arendt's observation of the "banality" of evil, in which ordinary people, perhaps nominally or culturally religious, are able to overcome moral compunctions and carry out heinous acts, having been "authorized" by superiors. And so this criticism of Dawkins's book seems germane.

I do not claim that all Dawkins's contentions are worthless. Plenty of religious people, Martin Luther included, would heartily agree with some of his complaints.

Anyone can agree that vast amounts of cruelty have occurred in the name of god. Yet, it doesn't appear that Dawkins has squarely faced the fact of the genocidal rampages committed under the banner of godlessness (Mao, Pol Pot, Stalin) [1]. What really drives mass violence is of course an important question. As an evolutionary biologist, Dawkins would say that such behavior is a consequence of natural selection, a point underscored by the ingrained propensity of certain simian troops to war on members of the same species. No doubt Dawkins would concede that the bellicosity of those primates had nothing to do with beliefs in some god.

So it seems that Dawkins may be placing too much emphasis on beliefs in god as a source of violent strife, though we should grant that it seems perplexing as to why a god would permit such strife.

Still, it appears that the author of Climbing Mount Improbable (W.W. Norton 1996) has confounded correlation with causation [2].

Our discussion focuses on the first four chapters of Dawkins's book, wherein he makes his case for the remoteness of the probability that a monolithic creator and controller god exists.

Alas, it is already November 2011 [essay revised July 2017], some five years after publication of Dawkins's Philippic.

Such a lag is typical of me, as I prefer to discuss ideas at leisure. This lag isn't quite as outrageous as the timing of my paper on Dawkins's The Blind Watchmaker [4], which I posted about a quarter century after the book first appeared.

I find that I have been quite hard on Dawkins, or, actually, on his reasoning. Even so, I have nothing but high regard for him as a fellow sojourner on spaceship Earth. Doubtless I have been unfair in not highlighting positive passages in Delusion. Despite my desire for objectivity, it is clear that much of the disagreement is rooted in my personal beliefs.

A Bayesian to the core?
The most serious flaw in Delusion is that Dawkins, after denouncing a Bayesian calculation favoring the existence of God, then unwittingly uses his own Bayesian reasoning to uphold his conclusion that it is virtually impossible that God exists.

Dawkins applies probabilistic reasoning to etiological foundations [1a], without defining probability or randomness. He disdains Bayesian subjectivism without realizing that that must be the ground on which he is standing. In fact, nearly everything he writes on probability indicates a severe lack of rigor. This lack of rigor compromises his other points.

Granted, Dawkins pleads that he is no proponent of simplistic "scientism"; yet there is no sign in Delusion's first four chapters that he isn't in fact a victim of what might be termed the "scientism delusion." But, as Dawkins does not define "scientism," he has plenty of wiggle room.

From what I can gather, those under the spell of "scientism" hold the, often unstated, assumption that the universe and its components can be understood as an engineering problem, or set of engineering problems. Perhaps there is much left to learn, goes the thinking, but it's all a matter of filling in the engineering details.

Though the notion of a Laplacian clockwork cosmos (the one that has no need of Newton's God to occasionally act to keep things stable) is officially passe, nevertheless many scientists seem to be under the impression that the model basically holds, though needing a bit of tweaking to account for the effects of relativity and of quantum fluctuations. Doubtless Dawkins is correct in his assertion that many American scientists and professionals are closet atheists, with quite a few espousing the "religion" of Einstein, who appreciated the elegance of the phenomenal universe but had no belief in a personal god.

Interestingly, Einstein had a severe difficulty with physical, phenomenal reality, objecting strenuously to the "probabilistic" requirement of quantum physics, famously asserting that "god" (i.e., the cosmos) "does not play dice." He agreed with Erwin Schroedinger that Schroedinger's imagined cat strongly implies the absurdity of "acausal" quantum behavior [3].

It turns out that Einstein was wrong, with statistical experiments in the 1980s demonstrating that "acausality" -- within constraints -- is fundamental to quantum actions. Many physicists have decided to avoid the quantum interpretation minefield, discretion being the better part of valor. Even so, Einstein was correct in his refusal to play down this problem, recognizing that modern science can't easily dispense with classical causality. We speak of energy in terms of vector sums of energy transfers (notice the circularity) but no one has a good handle on what is behind that abstraction.

A partly subjective reality is anathema to someone like Einstein, so disagreeable, in fact, that one can ponder whether the great scientist deep down suspected that such a possibility threatened his reasoning in denying a need for a personal god. Be that as it may, one can understand that a biologist might not be familiar with how nettlesome the quantum interpretation problem really is, but Dawkins has gone beyond his professional remit and taken on the roles of philosopher and etiologist. True, he rejects the label of philosopher, but his basic argument has been borrowed from the atheist philosopher Bertrand Russell.

God as machine?
Dawkins recapitulates Russell thus: "The designer hypothesis immediately raises the question of who designed the designer." Further: "A designer God cannot be used to explain organized complexity because a God capable of designing anything would have to be complex enough to demand the same kind of explanation... God presents an infinite regress from which we cannot escape." Yet does not mechanistic materialism also present that same infinite regress? If the cosmos is a machine, what machine punches the cosmic start button?

Dawkins's a priori assumption is that "anything of sufficient complexity to design anything, comes into existence only as the end product of an extended process of gradual evolution." If there is a great designer, "the designer himself must be the end product of some kind of cumulative escalator or crane, perhaps a version of Darwinism in its own universe."

Dawkins has no truck with the idea that an omnipotent, omniscient (and seemingly paradoxical) god might not be explicable in engineering terms. Even if such a being can't be so described, why is he/she needed? Occam's razor and all that.

Dawkins does not bother with the result of Kurt Goedel and its implications for Hilbert's sixth problem: whether the laws of physics can ever be -- from a human standpoint -- both complete and consistent. Dawkins of course is rather typical of those scientists who pay little heed to that result or who have tried to minimize its importance in physics. A striking exception is the mathematician and physicist Roger Penrose who saw that Goedel's result was profoundly important (though mathematicians have questioned Penrose's specific reasoning).[6]

A way to intuitively think of Goedel's conundrum is via the Gestalt effect: the whole is greater than the sum of its parts. But few of the profound issues of phenomenology make their way into Dawkins's thesis. Had the biologist reflected more on Roger Penrose's The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics (Oxford 1989) perhaps he would not have plunged in where Penrose so carefully trod.

Penrose has referred to himself, according to a Wikipedia article, as an atheist. In the film A Brief History of Time, the physicist said, "I think I would say that the universe has a purpose, it's not somehow just there by chance ... some people, I think, take the view that the universe is just there and it runs along -- it's a bit like it just sort of computes, and we happen somehow by accident to find ourselves in this thing. But I don't think that's a very fruitful or helpful way of looking at the universe, I think that there is something much deeper about it."

By contrast, we get no such ambiguity or subtlety from Dawkins. Yet, if one deploys one's prestige as a scientist to discuss the underpinnings of reality, more than superficialities are required. The unstated, a priori assumption is, essentially, a Laplacian billiard ball universe and that's it, Jack.

Another atheist who is skeptical of doctrinaire evolutionism is the philosopher Thomas Nagel. Nagel says that though he does not favor theism, the attacks on serious writers with a religious bent are "very unfair." Some of their arguments are potent with respect to the deficiencies of materialism, he writes.

Nagel also writes of "the original scientific revolution itself, which, because of its built-in restrictions, can't result in a 'theory of everything,' but must be seen as a stage on the way to a more general form of understanding." [6a]

Dawkins embellishes the Russellian rejoinder with the language of probability: What is the probability of a superbeing, capable of listening to millions of prayers simultaneously, existing? This follows his scorning of Stephen D. Unwin's The Probability of God (Crown Forum 2003), which cites Bayesian methods to obtain a reputed high probability of god's existence.

Dawkins avers that he is uninterested in Unwin's subjective prior probabilities, all the while being utterly unaware that his own probability assessment is altogether subjective, and largely a priori. Heedless of the philosophical underpinnings of probability theory, the outspoken atheist doesn't realize that by assigning a probability of "remote" at the extremes of etiology, he is engaging in a subtle form of circular reasoning.

The reader deserves more than an easy putdown of Unwin in any discussion of probabilities. Dawkins doesn't acknowledge that Bayesian statistics is a thriving school of research that seeks ways to "objectify," as much as possible, the subjective assessments of knowledgeable persons. There has been strong controversy concerning Bayesian versus classical statistics, and there is a reason for that controversy: it gets at foundational matters of etiology. Nothing on this from Dawkins.

Without a Bayesian approach, Dawkins is left with a frequency interpretation of probability (law of large numbers and so forth). But we have very little -- in fact I would say zero -- information about the existence or non-existence of a sequence of all powerful gods or pre-cosmoses. Hence, there are no frequencies to analyze. Hence, use of a non-Bayesian probability argument is in vain. Yet, if we interpret Dawkins's probabilities in a Bayesian sense, we still end up in a foundational quagmire.

Dawkins elsewhere says that he has read the pioneering statistician Ronald Fisher, but one wonders whether Dawkins appreciates the meaning of statistical analysis. Fisher, who also opposed the use of Bayesian premises, is no solace when it comes to frequency-based probabilities. Take Fisher's combined probability test, a technique for data fusion or "meta-analysis" (analysis of analyses): What are the several different tests of probability that might be combined to assess the probability of god? [14]
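For concreteness: Fisher's method combines k independent p-values through the statistic -2 Σ ln(p_i), which under the null hypothesis follows a chi-square distribution with 2k degrees of freedom. A minimal sketch (Python; the p-values are invented purely for illustration) is below. It does nothing, of course, to answer the question of what the input "tests" would even be.

# Fisher's combined probability test: -2 * sum(ln p_i) ~ chi-square with 2k d.f.
# The p-values here are made up purely for illustration.
from math import log, exp

def fisher_combined(pvalues):
    stat = -2.0 * sum(log(p) for p in pvalues)
    k = len(pvalues)
    # Chi-square survival function for even degrees of freedom 2k has a closed form:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (stat / 2) / i
        total += term
    return stat, exp(-stat / 2) * total

stat, combined_p = fisher_combined([0.08, 0.20, 0.15, 0.51])
print(round(stat, 3), round(combined_p, 4))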

A random lapse of reason
Dawkins is quick to brush off William A. Dembski, the intelligent design advocate who uses statistical methods to argue that the probability is cosmically remote that life originated in a random manner. And yet Dawkins himself seems to have little or no grasp of the basis of probabilities.

In fact, Dawkins makes no attempt to define randomness, a definition routinely brushed off in elementary statistics texts but one that represents quite a lapse when getting at etiological foundations. But, to reiterate, the issue goes yet deeper. If, at the extremes, causation is not nearly so clear-cut as one might naively imagine, then at those extremes probabilistic estimates may well be inappropriate.

Curiously, Russell discovered what is now known as Russell's paradox, which was ousted from set theory by fiat (axiom). Then along came Goedel, who proved that neither axiomatic set theory nor the theory of logical types propounded by Russell and Alfred North Whitehead in their Principia Mathematica could be both complete and consistent. Russell was chagrined to learn that, despite years of labor, there was no way to be sure the Principia system was consistent. Goedel's result makes a mockery of the fond illusion of the universe as giant computerized robot. But it doesn't take Goedel's theorem to shatter that illusion. One need only ask: How does a robot plan for and build itself? Algorithmically, it is impossible. Dawkins handles this conundrum, it seems, by confounding the "great explanatory power" of natural selection -- wherein lifeform robots are controlled by robotic DNA (selfish genes) -- with the origin of the cosmos.

But the biologist, so focused on this particular cosmic question, manages to avert his eyes from the Goedelian "frame problem" [6b]. And yet even atheistic physicists sense that the cosmos isn't simplistically causal when they describe the overarching reality as a "spacetime block." In other words, we humans are faced with some higher or other reality -- a transcendent "force" -- in which we operate and which, using standard mathematical logic, is not fully describable. This point is important. Technically, perhaps, we might add an axiom so that we can "describe" this transcendent (topological?) entity, but that just pushes the problem back and we would then need another axiom to get at the next higher entity.

Otherwise, Dawkins's idea that this higher dimensional "force" or entity should be constructed faces the Goedelian problem that such construction would evidently imply a Turing algorithm, which, if we want completeness and consistency, requires an infinite regress of axioms. In light of discoveries on the limits of knowledge, as spelled out by Goedel and Turing, Dawkins's argument doesn't quite work. This entity is perforce beyond human ken. One may say that it can hardly be expected that a biologist would be familiar with such arcana of logic and philosophy. But then said biologist should beware superficial approaches to foundational matters.

At this juncture, you may be thinking: "Well, that's all very well, but that doesn't prove the existence of god." But here is the issue: One may say that this higher reality or "power" or entity is lifeless "stuff" (if it is energy, it is some kind of unknown ultra-energy) or is a superbeing, a god of some sort. Whatever we call this transcendent entity is inherently unknowable in terms of ordinary logic, so beloved of devotees of the "scientific method." So then the best someone in Dawkins's shoes might say is that there is a 50/50 chance that the entity is intelligent. (See Appendix below for further discussion of a priori probability.)

A probability estimate's job is to mask out variables on the assumption that with enough trials these unknowns tend to cancel out. Implicitly, then, one is assuming that a god has decided not to influence the outcome. At one time, in fact, men drew lots in order to let god decide an outcome. (Some still see gambling as sinful as it dishonors god and enthrones Lady Randomness.)

Curiously, Dawkins pans the "argument from incredulity" proffered by some anti-Darwinians but his clearly-its-absurdly-improbable case against a higher intelligence is in fact an argument from incredulity, being based on his subjective expert estimate.

Dawkins's underlying assumption is that mechanistic hypotheses of causality are valid at the extremes, an assumption common to modern naive rationalism.

I think, therefore you're deluded
Another important oversight concerns the biologist's Dawkins-centrism. As in the attitude: "Your reality, if too different from mine, is quite likely to be delusional. My reality is obviously logically correct, as anyone can plainly see." This attitude is curious, given that Dawkins himself very effectively conveys important information about how the brain constructs reality and how easily people can suffer from delusions, such as being convinced that they are in regular communication with god.

True, Dawkins jokingly mentions one thinker who posits a virtual reality for humanity and notes that he can see no way to disprove such a scenario. But plainly Dawkins rejects the possibility that his perception and belief system, with its particular limits, might be delusional. And, of course, one can never prove, via some formal logic, that one's own system of thinking and interpretation of reality aren't delusional.

A mitigating circumstance that we must concede on Dawkins's behalf is that the full ramifications of quantum puzzlements have yet to sink into the scientific establishment, which -- aside from a distaste for the implication that like Wile E. Coyote they are standing on thin air -- has a legitimate fear of being overrun by New Agers, occultists and flying saucer buffs. Yet, by skirting this matter, Dawkins does not address the greatest etiological conundrum of the 20th century which, one would think, might well have major implications in the existence-of-god controversy.[5]

Dawkins is also rather cavalier about probabilities concerning the origin of life, attacking the late Fred Hoyle's "jumbo jet" analogy without coming to grips with what was bothering Hoyle and without even mentioning that scientists of the caliber of Francis Crick and Joshua Lederberg were troubled by origin-of-life probabilities long before Michael J. Behe and Dembski touted the intelligent design hypothesis.

Astrophysicist Hoyle, whose steady state theory of the universe was eventually trumped by George Gamow's big bang theory, said on several occasions that the probability of life assembling itself from some primordial ooze was equivalent to the probability that a tornado churning through a junkyard would leave a fully functioning Boeing 747 in its wake. Hoyle's atheism was shaken by this and other improbabilities, spurring him toward various panspermia (terrestrial life began elsewhere) conjectures. In the scenarios outlined by Hoyle and Chandra Wickramasinghe, microbial life or proto-life wafted down through the atmosphere from outer space, perhaps coming from "organic" interstellar dust or from comets. [11]

One scenario had viruses every now and again floating down from space and, besides setting off the occasional pandemic, enriching the genetic structure of life on earth in such a way as to account for increasing complexity. Hoyle was not specifically arguing against natural selection, but was concerned about what he saw as statistical troubles with the process. (He wasn't the only one worried about that; there is a long tradition of scientists trying to come up with ways to make mutation theory properly synthesize with Darwinism.)

Dawkins laughs off Hoyle's puzzlement about mutational probabilities without any discussion of the reasons for Hoyle's skepticism or the proposed solutions.

There are various ideas about why natural selection is robust enough to, thus far, prevent life from petering out. I realize that Dawkins may have felt that he had dealt with that subject elsewhere, but his four-chapter thesis omits too much. A longer, more thoughtful discussion -- after the fashion of Penrose's The Emperor's New Mind -- is, I would say, called for when heading into such deep waters.

Hoyle's qualms, of course, were quite unwelcome in some quarters and may have resulted in the Nobel prize committee bypassing him. And yet, though the space virus idea isn't held in much esteem, panspermia is no longer considered a disrespectable notion, especially as more and more extrasolar planets are identified. Hoyle's use of panspermia conjectures was meant to account for the probability issues he saw associated with the origin and continuation of life. (Just because life originates does not imply that it is resilient enough not to peter out after X generations.)

Spaced-out biologists
Hoyle, in his own way, was deploying panspermia hypotheses in order to deal with a form of the anthropic principle. If life originated as a prebiotic substance found across wide swaths of space, probabilities might become reasonable. It was the Nobelist Joshua Lederberg who made the acute observation that interstellar dust particles were about the size of organic molecules. Though this correlation has not panned out, that doesn't make Hoyle a nitwit for following up.

In fact, Lederberg was converted to the panspermia hypothesis by yet another atheist (and Marxist), J.B.S. Haldane, a statistician who was one of the chief architects of the "modern synthesis" merging Mendelism with Darwinism.

No word on any of this from Dawkins, who dispatches Hoyle with a parting shot that Hoyle (one can hear the implied chortle) believed that archaeopteryx was a forgery, after the manner of Piltdown man. The biologist declines to tell his readers about the background of that controversy and the fact that Hoyle and a group of noted scientists reached this conclusion after careful examination of the fossil evidence. Even though Hoyle and his colleagues were deemed to have erred, the fact remains that he undertook a serious scientific investigation of the matter.

Another committed atheist, Francis Crick [12], co-discoverer of the doubly helical structure of DNA, was even wilder than Hoyle in proposing a panspermia idea in order to account for probability issues. He suggested in a 1970s paper and in his book Life Itself: Its Origin and Nature (Simon & Schuster 1981) that an alien civilization had sent microbial life via rocketship to Earth in its long-ago past, perhaps as part of a program of seeding the galaxy. Why did the physicist-turned-biologist propose such a scenario? Because the amino acids found in all lifeforms are left-handed; somehow none of the mirror-image right-handed compounds survived, if they were ever incorporated at all. That discovery seemed staggeringly unlikely to Crick.

I don't bring this up to argue with Crick, but to underscore that Dawkins plays Quick-Draw McGraw with serious people without discussing the context. I.e., his book comes across as propagandistic, rather than fair-minded. It might be contrasted with John Allen Paulos's book Irreligion -- discussed in my essay Do dice play God? -- which tries to play fair and which doesn't make elementary logico-mathematical blunders. Though Crick and Hoyle were outliers in modern panspermia conjecturing, the concept is respectable enough for NASA to take seriously.

The cheap shot method can be seen in how Dawkins deals with Carl Jung's claim of an inner knowledge of god's existence. Jung's assertion is derided with a snappy one-liner that Jung also believed that objects on his bookshelf could explode spontaneously. [10] That takes care of Jung! -- irrespective of the many brilliant insights contained in his writings, however controversial. (Disclaimer: I am neither a Jungian nor a New Ager, nor, I should add, a panspermia buff.)

Granted that Jung was talking about what he took to be a paranormal event and granted that Jung is an easy target for statistically minded mechanists and granted that Jung seems to have made his share of missteps, we make three points:
1. There was always the possibility that the explosion occurred as a result of some anomalous, but natural event.
2. A parade of distinguished British scientists have expressed strong interest in paranormal matters, among them officers of paranormal study societies. Brian Josephson, the Welsh physicist who received a Nobel prize for the quantum physics behind the Josephson junction, speaks up for the reality of mental telepathy (for which he has been ostracized by the "billiard ball" school of scientists).
3. If Dawkins is trying to debunk the supernatural using logical analysis, then it is not legitimate to use someone's claim of a supernatural event to discredit his belief in the supernatural. (All gods are required to sit still and keep quiet.)
Probability One?
The biologist contends with the origin-of-life issue by invoking the anthropic principle and the principle of mediocrity, along with a verbal variant of Drake's equation.

The mediocrity principle says that astronomical evidence shows that we live on a random speck of dust on a random dustball blowing around in a (random?) mega dust storm.

The anthropic principle says that, if there is nothing special about Earth, isn't it interesting how Earth travels about the sun in a "Goldilocks zone" ideally suited for carbon based life and how the planetary dynamics, such as tectonic shift, seem to be just what is needed for life to thrive -- as discussed in the book Rare Earth: Why Complex Life is Uncommon in the Universe by Peter D. Ward and Donald Brownlee (Springer Verlag 2000)? Even further, isn't it amazing that the seemingly arbitrary constants of nature are so exactly calibrated as to permit life to exist, since a slight difference in the dimensionless combination of constants known as the fine structure constant would forbid galaxies from ever forming? This all seems outrageously fortuitous.

Let us look at Dawkins's response.

Suppose, he says, that the probability of life originating on Earth is a billion to one or even a billion billion to one (10^-9 and 10^-18). If there are that many Earth-like planets in the cosmos, the probability is virtually one that life will arise spontaneously. We just happen to be the lucky winner of the cosmic lottery. (You gotta be in it, to win it.)
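The arithmetic behind this lottery argument is the complement rule: with a per-planet probability p and N candidate planets, the chance of at least one origin event is 1 - (1 - p)^N. A small sketch (Python; the p and N values below are simply the round numbers floated for the sake of argument) shows how sensitively the answer depends on how p compares with N.

# Chance of at least one origin-of-life event: 1 - (1 - p)^N,
# computed with log1p/expm1 so the tiny p isn't lost to rounding.
from math import log1p, expm1

def at_least_once(p, N):
    return -expm1(N * log1p(-p))

for p, N in [(1e-9, 1e18), (1e-18, 1e18), (1e-18, 1e9)]:
    print(f"p = {p:.0e}, N = {N:.0e}: {at_least_once(p, N):.3g}")

With p = 10^-18 and N = 10^18, for instance, the arithmetic gives about 0.63 rather than something indistinguishable from one; the force of the lottery argument depends entirely on how p compares with the supply of planets.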

It should be noted that Crick points out that, in considering the Drake estimate, we can only include the older sectors of the cosmos, in which heavy metals have had time to coalesce from the gases left over from supernovae -- i.e., second generation stars and planets (by the way, Hoyle was the originator of the solution to the heavy metals problem). Yet still, we may concede that there may be enough para-Earths to answer the probabilities posed by Dawkins.

Though careful to say that he is no expert on the origin of life, Dawkins's probabilities, even if given for the sake of argument, are simply Bayesian "expert estimates." But, it is quite conceivable that those probabilities are far too high (though I candidly concede it is very difficult to assign any probability or probability distribution to this matter). Consider that unicellular life, with the genes on the DNA (or RNA) acting as the "brain," exploits proteins as the cellular workhorses in a great many ways. We know that sometimes several different proteins can fill the same job, but that caveat doesn't much help what could be a mind-boggling probability issue.
In fact, the chemist Robert Shapiro had strong reservations about the purported RNA origins of life, arguing in Origins: A Skeptic's Guide to the Creation of Life on Earth (Summit Books 1986) that the probabilities were prohibitive, although he promoted a prebiotic theory along the scaffolding line. Neither was he impressed with Dawkins's probability estimate in this regard.

The protein folding question
Suppose that, in some primordial ooze or on some undersea volcanic slope, a prebiotic form has fallen together chemically and, in order to cross the threshold to life form, requires one more protein to activate. A protein is the molecule that takes on a specific shape, carrying specific electrochemical properties, after amino acids fold up. Protein molecules fit into each other and other constituents of life like lock and key (though on occasion more than one key fits the same lock). The amino acids used by terrestrial life can, it turns out, be shuffled in many different ways to yield many different proteins. How many ways? About 10^60, which dwarfs the number of stars in the observable universe (commonly put at roughly 10^22 to 10^24) by dozens of orders of magnitude! And the probability of such a spark-of-life event might be in that ball park. If one considers the predecessor protein link-ups as independent events and multiplies those probabilities, we would come up with numbers even more absurd.

But, Dawkins has a way out, though he loses the thread here. His way out is that a number of physicists have posited, for various reasons, some immense -- even infinite -- number of "parallel" universes, which have no or very weak contact with the one we inhabit and are hence undetectable. This could handily account for our universe having the Goldilocks fine structure constant and, though he doesn't specify this, might well provide enough suns in those universes that have galaxies to account for even immensely improbable events.

I say Dawkins loses the thread because he scoffs at religious people who see the anthropic probabilities as favoring their position concerning god's existence without, he says, realizing that the anthropic principle is meant to remove god from the picture. What Dawkins himself doesn't realize is that he mixes apples and oranges here. The anthropic issue raises a disturbing question, which some religious people see as in their favor. Some scientists then seize on the possibility of a "multiverse" in order to cope with that issue.

But now what about Occam's razor? Well, says Dawkins, that principle doesn't quite work here. To paraphrase Sherlock Holmes, once one removes all reasonable explanations the remaining explanation, no matter how absurd, must be correct.

And yet what is Dawkins's basis for the proposition that a host of undetectable universes is more probable than some intelligent higher power? There's the rub. He is, no doubt unwittingly, making an a priori assumption that any "natural" explanation is more reasonable than a supernatural "explanation." Probabilities really have nothing to do with his assumption. So, ladies and gentlemen, in the ring we have the Absurd Self-Forming Multiverse versus the Absurd God of Creation...

But perhaps we have labored in vain over the "multiverse" argument, for at one point we are told that a "God capable of calculating the Goldilocks values" of nature's constants would have to be "at least as improbable" as the finely tuned constants of nature, "and that's very improbable indeed." So at bottom, all we have is a Bayesian expert prior estimate.

Well, say you, perhaps a Wolfram-style algorithmic complexity argument can save the day. Such an argument might be applicable to biological natural selection, granted. But what selected natural selection? A general Turing machine can compute anything computable, including numerous "highly complex" outputs programmed by easy-to-write inputs. But what probability does one assign to a general Turing machine spontaneously arising, say, in some electronic computer network? Wolfram found that "interesting" cellular automata were rare. Even rarer would be a complex cellular automaton that accidentally emerged from random inputs.

I don't say that such a scenario is impossible; rather, to assume that it just must be so is little more than hand-waving.

In fact, we must be very cautious about how we use probabilities concerning emergence of high-information systems. Here is why: A sufficiently rich mix of chemical compounds may well form a negative feedback dynamical system. It would then be tempting to apply a normal probability distribution to such a system, and that distribution very well may yield reasonable results for a while. BUT, if the dynamical system is non-linear -- which most are -- the system could reach a threshold, akin to a chaos point, at which it crosses over into a positive feedback system or into a substantially different negative feedback system.

The closer the system draws to that tipping point, the less the normal distribution applies. In the chaos zone, normal probabilities are generally worthless. Hence to say that thus and such an outcome is highly improbable based on the previous state of the system is to misunderstand how non-linearities can work. This point, it should be conceded, might be a bit too abstruse for many of Dawkins's readers.
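To illustrate the point about non-linear feedback, here is a minimal sketch (Python) using the logistic map as a stand-in for such a dynamical system: statistics gathered while the control parameter sits in a stable regime say nothing useful about the behavior once the parameter drifts past the chaotic threshold. The choice of map and of parameter values is mine, purely for illustration.

# Logistic map x -> r*x*(1-x): a toy non-linear feedback system.
# Statistics from the stable regime mislead once r crosses into chaos (around r = 3.57).
from statistics import mean, stdev

def orbit(r, x0=0.2, burn=500, keep=2000):
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1 - x)
        out.append(x)
    return out

for r in (2.9, 3.5, 3.9):
    xs = orbit(r)
    print(f"r = {r}: mean = {mean(xs):.3f}, sd = {stdev(xs):.3f}, "
          f"min = {min(xs):.3f}, max = {max(xs):.3f}")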

Natural selection as universal elixir
Dawkins tackles the problem of the outrageously high information values associated with complex life forms by conceding that a species, disconnected from information about causality, has only a remote probability of occurrence by random chance. But, he counters, there is in fact a non-random process at work: natural selection.

I suppose he would regard it a quibble if one were to mention that mutations occur randomly, and perhaps so it is. However, it is not quibbling to question how the powerful process of natural selection first appeared on the scene. In other words, the information values associated with the simplest known form (least number of genes) of microbial life is many orders of magnitude greater than the information values associated with background chemicals -- which was Hoyle's point in making the jumbo jet analogy.

And then there is the probability of life thriving. Just because it emerges, there is no guarantee that it would be robust enough not to peter out in a few generations. Dawkins dispenses with proponents of intelligent design, such as biologist Michael J. Behe, author of Darwin's Black Box: The Biochemical Challenge to Evolution (The Free Press 1996), by resorting to the conjecture that a system may exist after its "scaffolding" has vanished. This conjecture is fair, but, at this point, the nature of the scaffolding, if any, is unknown. Dawkins can't give a hint of the scaffolding's constituents because, thus far, no widely accepted hypothesis has emerged. Natural selection is a consequence of an acutely complex mechanism. The "scaffolding" is indeed a "black box" (it's there, we are told, but no one can say what's inside).

Though it cannot be said that intelligent design advocate Behe has proved "irreducible complexity," the fact is that the magnitude of organic complexity has even prompted atheist scientists to look far afield for plausible explanations, as noted above. But there is something awe-inspiring about natural selection, Dawkins avers. Biologists, he writes, have had their consciousnesses raised by natural selection's "power to tame improbability." An obvious objection is that that power has very little to do with the issues of the origins of life or of the universe and hence does not bolster his case against god.

And what shall we make of Dawkins's conjecture that our universe emerged as part of a process that I dub meta-evolution? [9] I suppose that if one waxes mystical about natural selection -- making it a mysterious, ultra-abstract principle -- then perhaps Dawkins makes sense. Otherwise, he is amazingly naive. The philosopher W.T. Stace once commented:
There is a wide-spread prejudice against religion among intellectuals in the modern world generally... For instance, why do the majority of psychologists brush aside, and refuse to take seriously, the claims of parapsychological propositions to acceptance? Not merely because parapsychological studies have often been pursued by cranks or even quacks. There is a deeper reason. They are animated by a fear that to admit many of the abnormal phenomena alleged by the parapsychologists might open the door again to non-materialistic beliefs in something like that outmoded entity "soul," to a belief in its survival of death, to what is sometimes called a more "spiritual" view of the world, and through these avenues to the revival of religious beliefs from which they think modern intelligent men have at last emancipated themselves after age-long struggles against the forces of superstition.[13]

Appendix


Please see my essay The many worlds of probability, reality and randomness
https://randompaulr.blogspot.com/2013/11/the-many-worlds-of-probability-reality_6543.html

Beware independent event scenarios
It should, by the way, be acknowledged that in microbiological matters, probabilities need not always follow a routine independence multiplication rule. In cases where random matching is important, we have the number 0.63 turning up quite often.

For example, if one has n addressed envelopes and n identically addressed letters are randomly shuffled and then put in the envelopes, what is the probability that at least one letter arrives at the correct destination? The surprising answer is that it is the alternating sum 1 - 1/2! + 1/3! - 1/4! + ... ± 1/n!. For n greater than 10 the probability has already converged to about 63%.

That is, we don't calculate, say, 11^-11 (about 3.5 x 10^-12), or some routine binomial combinatorial multiple, but rather note that our series very closely approximates 1 - e^-1 ≈ 0.63.

Similarly, suppose one has eight distinct pairs of socks randomly strewn in a drawer and thoughtlessly pulls out six one by one. What is the probability of at least one matching pair?

The first sock has no match. The probability the second will fail to match the first is 14/15. The probability for the third failing to match is 12/14 and so on until the sixth sock. Multiplying all these probabilities to get the probability of no match at all yields 32/143. Hence the probability of at least one match is 1 - 32/143 or about 78%.

These minor points tend to some extent to bolster Dawkins's claim about natural selection "taming" probability (bad logic, good metaphor). Natural selection is not altogether grounded in independent events. How much this has to do with God's existence is an open question.

On a priori estimates
Now let us digress a bit concerning the controversy [7] over Bayesian inference, which is essentially about how one deploys an a priori probability.

If confronted with an urn about which we know only that it contains some black balls and some white ones and, for some reason, we are compelled to wager whether an initial draw yields a black ball, we might agree that our optimal strategy is to assign a probability of success of 1/2. In fact, we might well agree that -- barring resort to intuition or appeal to a higher power -- this is our only strategy. Of course, we might include the cost aspect in our calculation. A classic example is Pascal's wager on the nonexistence of god. Suppose, given a probability of say 1/2, one is wrong?

Now suppose we observe say 30 draws, with replacement, which we break down into three trials of 10 draws each. In each trial, about two-thirds of the draws are black. Three trials isn't many, but it is perhaps enough to convince us that the proportion of black balls is close to 2/3. We have used frequency analysis to estimate that the independent probability of choosing a black ball is close to 2/3. That is, we have used experience to revise our probability estimate, using "frequentist" reasoning. What is the difference between three trials end-to-end and one trial? This question is central to the Bayesian controversy. Is there a difference between three simultaneous trials of 10 draws each and three run consecutively? These are slippery philosophical points that won't detain us here.

But we need be clear on what the goal is. Are we using an a priori initial probability that influences subsequent probabilities? Or, are we trying to detect bias (including neutral bias of 1/2) based on accumulated evidence?

For example, suppose we skip the direct proportions approach just cited and use, for the case of replacement, the Bayesian conditional probability formula, assigning an a priori probability of b to event B of a black ball withdrawal. That is,

p(B | B) = p(B & B)/p(B). Or, that is,

p(b | b) = p(b)p(b)/p(b) = b, so that the joint probability of two black draws is p(b & b) = b^2. For five black balls in succession, we get b^5.

Yes, quite true that we have the case in which the Bayesian formula collapses to the simple multiplication rule for independent events. But our point is that if we apply the Bayesian formula differently to essentially the same scenario, we get a different result, as the following example shows.

Suppose the urn has a finite number N of black and white balls in unknown proportion and suppose n black balls are drawn consecutively from the urn. What is the probability the next ball will be black? According to the Bayesian formula -- applied differently than as above -- the probability is (n+1)/(n+2).

Let N = the total number of balls drawn and to be drawn and n = those that have been drawn, with replacement. Sn is the run of consecutive draws observed as black. SN is the total number of black draws possible, those done and those yet to be done. What is the probability that all draws will yield black, given a run of n black draws?

That is, we ask what is

p[SN = N | Sn = n]?

But this is just

p[SN = N and Sn = n]/p[Sn = n]

or (1/(N+1))/(1/(n+1)) = (n+1)/(N+1). If N = n+1, we obtain (n+1)/(n+2).
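A quick way to check the (n+1)/(N+1) figure is to simulate the Bayes-Laplace setup directly: draw the urn's unknown chance of black uniformly from [0, 1], generate N draws, and, among the runs that are all black through draw n, count how many stay black through draw N. A sketch (Python; the sample sizes are arbitrary):

# Monte Carlo check of p[S_N = N | S_n = n] = (n+1)/(N+1)
# under a uniform prior on the urn's chance of black.
import random

def check(n, N, trials=200_000):
    all_n = all_N = 0
    for _ in range(trials):
        b = random.random()                         # unknown chance of black
        draws = [random.random() < b for _ in range(N)]
        if all(draws[:n]):
            all_n += 1
            if all(draws):
                all_N += 1
    return all_N / all_n

print(check(4, 5), (4 + 1) / (5 + 1))   # simulated vs (n+1)/(N+1) with N = n+1
print(check(3, 9), (3 + 1) / (9 + 1))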

C.D. Broad, in his derivation for the finite case, according to S.L. Zabell, reasoned that all ratios j/N are equally likely and discovered that the result is not dependent on N, the population size, but only on the sample size n. Bayes' formula is applied as a recursive summation of factorials, eventually leading to (n+1)/(n+2).

This result was also derived for the infinite case by Laplace and is known as the rule of succession. Laplace's formula, as given by Zabell, is, over the interval [0,1], a ratio of integrals:
∫ p^(r+1) (1-p)^(m-r) dp  /  ∫ p^r (1-p)^(m-r) dp  =  (r+1)/(m+2)
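The ratio is easy to verify numerically. A small sketch (Python) integrating both integrands over [0,1] with a simple midpoint rule, for an arbitrary choice of r and m:

# Numerical check of Laplace's ratio of integrals: the result should be (r+1)/(m+2).
def integral(f, steps=100_000):
    h = 1.0 / steps
    return sum(f((i + 0.5) * h) for i in range(steps)) * h

r, m = 3, 10
num = integral(lambda p: p ** (r + 1) * (1 - p) ** (m - r))
den = integral(lambda p: p ** r * (1 - p) ** (m - r))
print(num / den, (r + 1) / (m + 2))     # both approximately 0.3333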

Laplace's rule of succession contrasts with that of Thomas Bayes, as reported by his intellectual executor Richard Price. Bayes had considered the case where nothing is known concerning a potential event prior to any relevant trials. Bayes' idea is that all probabilities would then be equally likely.

Given this assumption and told that a black ball has been pulled from an urn n times in unfailing succession, it can be seen that

P = (n+1) ∫[a,b] p^n dp = b^(n+1) - a^(n+1), where P is the probability that the urn's chance of yielding a black ball lies between a and b.

In Zabell, this is known as Price's rule of succession. We see that this rule of succession of course might (it's a stretch) be of some value in estimating the probability that the sun will rise tomorrow but is worthless in estimating the probability of god's existence.

To recapitulate: If we know there are N black and white balls within and draw, with replacement, n black balls consecutively, there are N-n possible proportions. So one may say that, absent other information, the probability that any particular ratio is correct is 1/(N-n). That is, the distribution of the potential frequencies is uniform on grounds that each frequency is equiprobable.

So this is like asking what is the probability of the probability, a stylization some dislike. So in the finite and infinite cases, a uniform probability distribution seems to be assumed, an assumption that can be controversial -- though in the case of the urn equiprobability has a justification. I am not quite certain that there necessarily is so little information available that equiprobability is the best strategy, as I touch on in "Caution A" below.

Another point is that, once enough evidence from sampling the urn is at hand, we should decide -- using some Bayesian method perhaps -- to test various probability distributions to see how well each fits the data.

Caution A: Consider four draws in succession, all black. If we assume a probability of 1/2, the result is 0.5^4 = 0.0625, which is above the usual 5% level of significance. So are we correct in conjecturing a bias? For low numbers, the effects of random influences would seem to preclude hazarding a probability of much in excess of 1/2. For 0.5^5 = 0.03125, we might be correct to suspect bias. For the range n=5 to n=19, I suggest that the correct proportion is likely to be found between 1/2 and 3/4 and that we might use the mean of 0.625. [Worthwhile would be a discussion of an estimation for n >= 20 when we do not accept the notion that all ratios are equiprobable.]
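The arithmetic in Caution A generalizes readily: under the 1/2 hypothesis a run of n straight blacks has probability 0.5^n, which first drops below the 5 percent level at n = 5. A one-look table (Python), with the uniform-prior estimate (n+1)/(n+2) derived above shown alongside for comparison:

# Probability of n consecutive blacks under the fair (1/2) hypothesis,
# alongside the uniform-prior "rule of succession" estimate (n+1)/(n+2).
for n in range(1, 11):
    p_fair = 0.5 ** n
    succession = (n + 1) / (n + 2)
    flag = "significant at 5%" if p_fair < 0.05 else ""
    print(f"n = {n:2d}: 0.5^n = {p_fair:.5f}   (n+1)/(n+2) = {succession:.3f}  {flag}")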

Caution B: Another issue is applying the rule of succession to a system in which perhaps too much is unknown. The challenge of Hume as to the probability of the sun rising tomorrow was answered by Laplace with a calculation based on the presumed number of days that the sun had already risen. The calculation generated much derision and did much to damage the Bayesian approach. (However, computer-enhanced Bayesian methods these days enjoy wide acceptance in certain disciplines.)

An issue that arises here is the inherent stability of a particular system. An urn has one of a set of ratios of white to black balls. But, a nonlinear dynamic system is problematic for modeling by an urn. Probabilities apply well to uniform, which is to say, for practical purposes, periodic systems. However, quasi-periodic systems may well give a false sense of security, perhaps masking sudden jolts into atypical, possibly chaotic, behavior. Wasn't everyone marrying and giving in marriage and conducting life as usual when in 2004 a tsunami killed 230,000 people in 14 countries bordering the Indian Ocean? (Interestingly, however, Augustus De Morgan proposed a Bayesian-style formula for the probability of the sudden emergence of something utterly unknown, such as a new species.)[8]

That said, one can nevertheless imagine a group of experts, each of whom gives a probability estimate to some event, and taking the average (perhaps weighted via degree of expertise) and arriving at a fairly useful approximate probability. In fact, one can imagine an experiment in which such expert opinion is tested against a frequency model (the event would have to be subject to frequency analysis, of course).

We might go further and say that it is quite plausible that a person well informed about a particular topic might give a viable upper or lower bound probability for a particular set of events, though not knowledgeable about precise frequencies. For example, if I notice that the word "inexorable" has appeared at least once per volume in 16 of the last 20 books I have read, I can reason that, based on previous reading experience, the probability that that particular word would appear in a book is certainly less than 10%. Hence, I can say that the probability of randomness rather than tampering by some capricious entity is, using combinatorial methods, less than one in 5 billion. True, I do not have an exact input value. But my upper bound probability is good enough.
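That upper bound is easy to check as a binomial tail: if the word appears in any given book with probability at most 0.1, the chance of seeing it in 16 or more of 20 books is the upper tail of a Binomial(20, 0.1) distribution, which comes out far below one in 5 billion. A sketch (Python):

# Upper-tail binomial bound for 16 or more "hits" in 20 books at p = 0.1.
from math import comb

def binom_tail(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

tail = binom_tail(16, 20, 0.1)
print(tail, tail < 1 / 5e9)   # roughly 3e-13, comfortably under one in 5 billion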

We consider the subjectivist vs. objectivist conceptions of probability as follows:

Probability Type I is about degree of belief or uncertainty. Two pertinent questions about P1:
1. How much belief does a person have that an event will happen within some time interval?
2. How much belief does a person have that an event that has occurred did so under the conditions given?

Degree of belief may be given, for example, as an integer on a scale from 0 to 10, which, as it happens, can be pictured as a pie cut into 10 wedges, or as percentages given in tenths of 100. When a person is being fully subjective ("guesstimating," to use a convenient barbarism), one tends to focus on easily visualizable pie portions, such as tenths.
The fact that a subjective assessment can be numbered on a scale leads easily to ratios. That is, if we are "four pie wedges" out of seven wedges sure, we have the ratio 4/7. In a frequency interpretation, 4/7 assurance would mean that a number of trials have been run and that a trend has been spotted of four outcomes in seven being positive. Note that the frequency interpretation is in fact a way of arriving at belief; the Bayesian viewpoint lets one use whatever one likes as a means of obtaining an initial confidence level.

Of course, such ratios aren't really any better than choosing a number between 0 and 10 for one's degree of belief. This is one reason why such subjective ratios are often criticized as of no import.

Probability Type II then purports to demonstrate an objective method of assigning numbers to one's degree of belief. The argument is that a thoughtful person will agree that what one doesn't know is often modelable as a mixture which contains an amount q and an amount p of something or other -- that is, the urn model. If one assumes that the mixture stays constant for a specified time, then one is entitled to use statistical methods to arrive at some number close to the true ratio. Such ratios are construed to mirror objective reality and so give a plausible reason for one's degree of belief, which can be acutely quantified, permitting tiny values.

P2 requires a classical, essentially mechanist view of phenomenal reality, an assumption that is open to challenge, though there seems little doubt that stochastic studies are good predictors for everyday affairs (though this assertion also is open to question).


Apologies for the odd footnote numbering. Use of "control f" will be helpful.

1. In a serious lapse, Dawkins has it that "there is something to be said" for treating Buddhism and Confucianism not as religions but as ethical systems. In the case of Buddhism, it may be granted that Buddhism is atheistic in the sense of denying a personal, monolithic god. But, from the perspective of a materialist like Dawkins, Buddhism certainly purveys numerous supernaturalistic ideas, with followers espousing ethical beliefs rooted in a supernatural cosmic order -- which one would think qualifies Buddhism as a religion.

True, Dawkins's chief target is the all-powerful god of Judaism, Christianity and Islam (Zoroastrianism too), with little focus on pantheism, henotheism or supernatural atheism. Yet a scientist of his standing ought to be held to an exacting standard.

1a. Dawkins also demonstrates little grasp of other relevant philosophical categories: ontology, epistemology, metaphysics and, of course, ethics. Clearly, one is entitled to be a thinker without being a student of philosophy. Yet if one wishes to overthrow entire thought systems, a nodding acquaintance with such matters is to be preferred.

2. I have made more than my share of logico-mathematical blunders, as my mathematician son will attest. Some remain online; having lost control of the pages, I am unable to remove the content.

3. As well as conclusively proving that quantum effects can be scaled up to the "macro world."

4. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (W.W. Norton 1986).

5. A fine, but significant, point: Dawkins, along with many others, believes that Zeno's chief paradox has been resolved by the mathematics of bounded infinite series. Quantum physics, however, requires that potential energy be quantized. So height H above ground is measurable discontinuously in a finite number of lower heights. So a rock dropped from H to ground must first reach H', the next discrete height down. How does the rock in static state A at H reach static state B at H'? That question has no answer, other than to say something like "a quantum jump occurs." So Zeno makes a sly comeback.

This little point is significant because it gets down to the fundamentals of causality, something that Dawkins leaves unexamined.

6. After the triumphs of his famous theorems, Goedel stirred up more trouble by finding a solution to Einstein's general relativity field equations which, in Goedel's estimation, demonstrated that time (and hence naive causality) is an illusion. A rotating universe, he found, could contain closed time loops such that if a rocket traveled far enough into space it would eventually reach its own past, apparently looping through spacetime forever. Einstein dismissed his friend's solution as inconsistent with physical reality.

Before agreeing with Einstein that the solution is preposterous, consider the fact that many physicists believe that there is a huge number of "parallel," though undetectable, universes.

And we can leave the door ajar, ever so slightly, to Dawkins's thought of a higher, but non-deistic, program fashioning the universe in what I dub a process of "meta-evolution." Suppose that far in our future an advanced race builds a spaceship bearing a machine that resets the constants of nature as it travels, thus establishing the conditions for the upcoming big bang in our past such that galaxies, and we, are formed. Of course, we then are faced with the question: where did the information come from?

6a. Mind & Cosmos: Why the materialist neo-Darwinian conception of nature is almost certainly wrong (Oxford 2012) by Thomas Nagel.

6b. Not the artificial intelligence frame problem formulated by Marvin Minsky.

7. An excellent discussion of this controversy is found in Symmetry and Its Discontents (Cambridge 2005) by S.L. Zabell. Interesting books on Bayesianism: Interpreting Probability (Cambridge 2002) by David Howie; The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy (Yale 2011) by Sharon Bertsch McGrayne.

8. S.L. Zabell offers a proof of De Morgan's formula.

9. Consider a child born with super-potent intelligence and strength. What is the probability that those traits persist?
A. If the child matures and mates successfully, the positive selection pressure from one generation to the next faces a countervailing tendency toward dilution. It could take many, many generations before that trait (gene set) becomes dominant, and in the meantime, especially in the earlier generations, extinction of the trait is a distinct possibility (a rough simulation sketch follows after point B).
B. In social animals, very powerful individual advantages come linked to a very powerful disadvantage: the tendency of the group to reject as alien anything too different. Think of the early 19th-century practice among Australian tribesmen of killing mixed-race offspring born to their women. Such first-generation offspring tend to be unusually fit, brimming with, in Darwin's words, "hybrid vigor."
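The extinction risk in point A can be illustrated with a toy Galton-Watson branching process. This is a sketch under assumed numbers -- a single founding carrier, Poisson-distributed carrier offspring with mean 1.2, a 20-generation horizon -- none of which come from the essay.

import math
import random

def poisson(mean, rng):
    # Small Poisson sampler (Knuth's method), adequate for small means.
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def trait_extinct(mean_carrier_offspring, max_generations, rng):
    # True if the lineage of trait carriers dies out within max_generations.
    carriers = 1  # one founding child with the super-potent trait
    for _ in range(max_generations):
        if carriers == 0:
            return True
        # Each carrier leaves a random number of carrier offspring.
        carriers = sum(poisson(mean_carrier_offspring, rng) for _ in range(carriers))
    return carriers == 0

rng = random.Random(0)
trials = 10_000
extinctions = sum(trait_extinct(1.2, 20, rng) for _ in range(trials))
print(f"estimated extinction probability: {extinctions / trials:.2f}")

Even with a 20 percent reproductive edge per generation, the lineage dies out more often than not in this toy model -- the dilution-and-extinction worry of point A in miniature.
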
10. In another example of Dawkins's dismissive attitude toward fellow scientists, Dawkins writes:
"Paul Davies's The Mind of God seems to hover somewhere between Einsteinian pantheism and an obscure form of deism -- for which he was rewarded with the Templeton Prize (a very large sum of money given annually by the Templeton Foundation, usually to a scientist who is prepared to say something nice about religion)."
Dawkins goes on to upbraid scientists for taking Templeton money on grounds that they are in danger of introducing bias into their statements.

I have not read The Mind of God: The Scientific Basis for a Rational World (Simon & Schuster 1992), so I cannot comment on its content. On the other hand, it would appear that Dawkins has not read Davies's The Fifth Miracle: the search for the origins and meaning of life (Simon & Schuster 1999), or he might have been a bit more prudent.

Fifth Miracle is, as is usual with Davies, a highly informed tour de force. I have read several books by Davies, a physicist, and have never caught him in duffer errors of the type found in Dawkins's books.

11. The chemist Robert Shapiro didn't find Hoyle's panspermia work to be first rate, but I have the sense that that assessment may have something to do with the strong conservatism of chemists versus the tradition of informed speculation among astrophysicists. Some of Shapiro's complaints could also be lodged against string theorists. Another skeptic was the biologist Lynn Margulis, who likewise panned Hoyle's panspermia speculations. But, again, what may have been going on was a clash of scientific cultures.

Some of the notions of Hoyle and his collaborator, N.C. Wickramasinghe, which seemed so outlandish in the eighties, have gained credibility with new discoveries concerning extremophiles and the potential of space-borne microorganisms.

12. An early version of this essay contained a serious misstatement of Crick's point, which occurred because of my faulty memory.

13. W.T. Stace, in his article "Broad's View on Religion," in The Philosophy of C.D. Broad (Library of Living Philosophers 1959), p. 173.

14. Logician David Berlinski points out in an April 2003 Commentary article that Dawkins had told readers of his River Out of Eden: A Darwinian View of Life (Basic 1995) that a "computer simulation" had shown that an eye could have evolved over several hundred thousand years, though no such simulation existed, as one of the paper's authors, Dan-Erik Nilsson, admitted. Dawkins had confused the notion of a "computer simulation" with that of a "mathematical model." Dawkins was referring to "A Pessimistic Estimate of the Time Required for an Eye to Evolve" by Nilsson and Susanne Pelger, which appeared in Proceedings of the Royal Society, London B, in 1994.
