Introduction
For questions such as whether the US administration was behind the 9/11 attacks, whether there is a global Zionist conspiracy, or whether the Moon landings were faked, there exist objective truths of the matter. If we were in full possession of all the relevant facts, and were cognitively accurate in our analysis of those facts, then we would have no doubt as to what these truths are. The problem arises because we usually don’t have access to all of the relevant facts, and our analysis of the facts that we do have can err in a number of ways. Nevertheless, there are still better and worse ways of getting to the truth.
If we wish to form warranted beliefs about these and other matters, then we first need to establish as many well-grounded and relevant facts as possible, and then use a good method to form our beliefs based upon those facts. To be deemed good, a method should exhibit predictive success (which is what we really mean by something being true) and a convergent accumulation of consistent results. That is, we should expect it to routinely produce propositions whose predictions match reality, and to continue to do so when we investigate them from different angles. Reason and science have proven to be the preeminent methods for learning about the world.
Conversely, if we start with few well-grounded facts, or with supposed facts that are actually false, and then use a poor method to form our beliefs from them, then our beliefs will likely be false. In the worst case, if we form our beliefs based upon nothing more than hearsay, desire, and speculation, then they are almost guaranteed to be false. The reasons for this are, firstly, that false beliefs are just as easily propagated as true ones; and, secondly, that possible false beliefs vastly outnumber true ones, so any arbitrarily chosen belief that is not well-founded (i.e. not based upon evidence and reason) will almost certainly be false. With these thoughts in mind, here are a number of ways in which conspiracy theories go wrong:
1. They get their facts wrong
If we start with incorrect facts, then any conclusions deduced or inferred from these ‘facts’ will be unwarranted. For example, some 9/11 conspiracy theorists believe it is a fact that mobile phones do not work in airborne planes, and then deduce from this ‘fact’ that the phone calls from the passengers and crew aboard the hijacked planes must have been faked. However, some of the calls made from the hijacked planes were made from air phones, not mobiles, so the objection is immaterial in that case. Moreover, it is not actually true that mobiles don’t work at all in planes that are airborne. A mobile phone will in fact often work when a plane is closer to the ground during the climb and the descent, and even sometimes when the plane is at cruising altitude if it is flying in the vicinity of a strong signal from a phone mast.
Another example of a mistaken fact is the claim that the explosion in one of the tube trains on 7/7 came from underneath the floor – suggesting that a bomb had been planted there earlier (as part of a conspiracy by the government/security services, we are supposed to infer). However, this ‘fact’ is also incorrect. It originated in an early report from a single eyewitness aboard the tube train in question, which was then widely disseminated on the internet. Later eyewitness reports – including those from passengers who were much closer to the explosion – made it clear that the source of the explosion was the terrorist’s backpack, and not anywhere underneath the train.
An oft-repeated claim made by 9/11 conspiracy theorists is that around 4000 Jews stayed away from work at the World Trade Center on September 11th. However, this ‘fact’ is also false. Estimates based upon the religion listed for around 1700 of the dead put the number of Jewish victims at 270. Another estimate, based upon the victims’ last names, put the total number of Jews at up to 400. A survey of 390 victims who worked for Cantor Fitzgerald found that 49 were Jewish. This accords well with the population of New York State in general, of which 9% is Jewish.
Conspiracy theorists who believe the Moon landings were faked point to many supposed impossibilities or inconsistencies in the evidence. One of these is the ‘fact’ that a flag cannot wave in a vacuum, yet the American flag was seen to wave in the film footage from the Moon. However, in one photo that is often cited, the flag is ‘waving’ because the pole to which it is attached is being rotated by the astronaut; the absence of an atmosphere is irrelevant in that case. In other cases, the flag gives the appearance in photos of waving because the horizontal rod from which it was deployed was not fully extended, so the flag was never fully unfurled. Much more about this here: http://www.badastronomy.com/bad/tv/foxapollo.html.
There are innumerable other examples where conspiracy theorists have failed to get important facts correct. Of course, they could assert that their facts are correct, and that the generally accepted facts of the matter are actually all part of the conspiracy. However, they would further sacrifice the plausibility and parsimony of their theory if they were to take this approach. For more on this, see below.
2. They make errors of reasoning
One very prominent error of reasoning that usually features in conspiracy theories is a form of cui bono: they look for who might benefit from the conspiracy (often the US or some other government), and then deduce that this agent must therefore be responsible for the act in question. For example, 9/11 conspiracy theorists argue that the Bush administration had much to gain from perpetrating the attacks and blaming them on Al Qaeda, as this would give it an excuse to go into Iraq and Afghanistan, thereby getting rid of an uncooperative Saddam Hussein, gaining control of some of the region’s oil, distracting the electorate from the administration’s policy failures at home, and so on. However, we are not entitled to deduce logically from the existence of these benefits (even if we suppose them to be real) that the US government was responsible for the attacks. It is a logical fallacy of the form:
P1: X would benefit if event Y was to happen
P2: Event Y happened
C: Therefore, X caused event Y to happen
A simple counterexample should suffice to show the fallacy:
P1: My local pizza takeaway would benefit from a Credit Crunch (as more people would then buy pizza)
P2: The Credit Crunch happened
C: Therefore, my local pizza takeaway caused the Credit Crunch to happen
In response to this, one could argue that the US administration had the means, as well as the motive, to carry out the 9/11 attacks and then cover it up, whereas my local pizza takeaway didn't. However, as I will argue below, whether it actually had the means to do this is part of my disagreement with conspiracy theorists - so that response would beg the question. Moreover, even if we were to grant, for the sake of argument, that the agent in question has both the motive and the possible means, I can give a new counterexample as follows:
P1: John would benefit (financially) if his wife's parents were to die
P2: John's wife's parents are killed in an apparent accident
C: Therefore, John caused his wife's parents to be killed (and made it look like an accident)
Whilst it's possible that John did indeed arrange to have his wife's parents killed, the mere fact that he had both the motive and conceivable means doesn't lead to that conclusion - the argument is a non sequitur. That he had motive and possible means might cause the police to question John, but if there was overwhelming evidence that he didn't commit any crime, then means and motive alone would carry little weight. So we cannot legitimately move from identifying who would benefit from the occurrence of some event to the conclusion that the agent in question caused the event to happen (even if that agent might have had the means to cause it).
Some may attempt at this point to rescue the weak motive-and-means argument above by adding that we have prior experience of the type of agents under consideration (typically some Western government, or state or military body) carrying out similar acts, and that therefore they should come under strong suspicion whenever something like this happens. However, it is precisely because we don't have any good precedents for such large-scale, audacious, and often ruthless conspiracies that they are the stuff of conspiracy theory at all (although small-scale and mundane conspiracies have been exposed many times). If they were the type of commonplace event that would lead us rationally to suspect the US administration (or whoever) when something like 9/11 happened (or whatever conspiracy theory du jour is under discussion), then their potential guilt would be discussed and investigated widely and openly, and not just confined to the conspiracy theorists on the fringe. So even this fails.
Another reasoning error that conspiracy theorists make is to refuse to accept any evidence that would refute their theory, whilst being extremely credulous of any prima facie evidence that would be expected on their theory. In other words, they are looking only to verify their theory, and apply totally different evidential bars to supporting and contrary evidence – with any supporting evidence being accepted almost without question, and any contrary evidence being rejected out of hand or explained away by the introduction of some ad hoc element (e.g. that apparently contrary evidence has been planted by the conspirators, etc.). In addition to the problem of confirmation bias, this strategy effectively renders the conspiracy theories unfalsifiable, as no evidence whatsoever would ever be accepted as refuting them. To hold such a belief is irrational, as it could just as easily be false as true, but there would be no way for you ever to tell, since no evidence would ever convince you of its falsity. As Karl Popper said, a theory that explains everything explains nothing.
Conspiracy theorists also err by imbuing the conspirators with omnipotence and omniscience, in that they are supposed to have almost limitless knowledge and power to plan, commit, and then cover up their conspiracies. They believe this despite abundant evidence for the widespread incompetence and ignorance of government, security services, and other agencies, the general fallibility of human beings, and the fact that even small-scale conspiracies are often bungled and uncovered by the mainstream media.
Another error of reasoning that conspiracy theorists make is to argue “possibly, therefore probably”. Yes, it is possible that the US administration was behind the 9/11 attacks, that the Moon landings were faked, that the AIDS virus was created artificially in order to kill black people (or homosexuals), or even that the world is secretly run by a cabal of giant lizards, or that the previous pope was a robot. However, the fact that something is theoretically possible does not mean that it is at all probable, or that it is anything like the best explanation for the facts at hand.
In general, when we are faced with multiple hypotheses that would all predict some set of observations (as we always will be in the real world), then if we are looking for the truth we need to look for the best explanation of the observations. We could test hypotheses formally using Bayes’ Theorem (http://en.wikipedia.org/wiki/Bayes%27_theorem), but, less formally, we can apply abductive reasoning. This methodology (which can be shown, using Bayes’ Theorem, to be valid) calls upon us to compare possible explanations for some set of observations by looking at their plausibility, parsimony, explanatory scope, and explanatory power. Conspiracy theories fail primarily in terms of plausibility and parsimony.
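To make the comparison concrete, here is a minimal sketch in Python of how two rival explanations for the same evidence can be weighed against each other. All of the numbers are illustrative assumptions, not measured probabilities:

```python
def posterior_odds(prior_a, prior_b, likelihood_a, likelihood_b):
    """Posterior odds of explanation A over explanation B, given the
    same body of evidence (Bayes' Theorem in odds form)."""
    return (prior_a * likelihood_a) / (prior_b * likelihood_b)

# Hypothetical inputs: a mundane explanation with ample precedent (high
# prior) vs. a sprawling conspiracy (low prior). Both are taken to
# 'predict' the headline evidence, though the conspiracy does so less well.
odds = posterior_odds(prior_a=0.9, prior_b=0.1,
                      likelihood_a=0.8, likelihood_b=0.2)
print(f"Posterior odds (mundane : conspiracy) = {odds:.0f} : 1")  # 36 : 1
```

Even when both hypotheses ‘fit’ the evidence, the explanation with the higher prior plausibility and the better fit dominates once the two factors are multiplied together.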
For example, if we look at the 9/11 attacks, we are supposed to believe that there was some vast conspiracy involving the administration, the security services, and the military. Furthermore, either the passengers on the planes were taken into hiding, or else killed. The terrorists would either have been planted, or else were just patsies. All of the evidence pointing to Al Qaeda would have been manufactured. And, on top of all this, we are to believe either that the mainstream media are also involved in the conspiracy, or else that the (notoriously inept) government and its agencies managed to keep all of the incriminating evidence secret from the media. Of course, small conspiracies have come to light in the past (such as the Watergate and Iran-Contra scandals) but, despite being far less ambitious in scale, they were still exposed. We have no precedent for conspiracies as large-scale and audacious as would be required to fake the 9/11 attacks, or to create and release some virus and then convince health organisations, doctors, and the worldwide media that it arose naturally. As such, the existence of such a conspiracy is inherently implausible (even if not actually impossible).
By contrast, we have lots of examples of terrorist attacks – including large-scale ones from Al Qaeda. Although this one was more audacious than previous ones, there is nothing inherently implausible about it, as it required nothing more than some planning and the involvement of some Muslims who were capable of learning some very rudimentary flying skills and willing to die for their beliefs. We have lots of examples of planes being hijacked, and lots of examples of Muslim fundamentalists carrying out suicide attacks – so there is lots of precedent. And, contrary to some other speculations, experts agree that only basic flying skills were required in order to fly the planes into the Towers and Pentagon. So, on balance, this explanation is far more plausible than the conspiracy alternative. Similarly, we have lots of precedent for viruses arising, mutating, and spreading naturally (just think of previous flu epidemics, for example). So, again, this is inherently more plausible than the alternative explanations. The only way around this is to presume that all of these precedents were themselves conspiracies, in which case the theories would gain plausibility at the expense of parsimony – see below.
When we examine a theory’s parsimony, we are applying Occam’s Razor (http://en.wikipedia.org/wiki/Occam%27s_razor). That is, when we have two or more theories that all predict the observations, the simpler one is to be preferred. Or, to put it another way, we should not multiply assumptions and entities beyond necessity. For example, if we were to find a crime scene with a single bullet hole in the window of a house, and one bullet lying on the carpet inside, should we presume that just one bullet was involved, or that multiple bullets were fired through the same hole and all but one removed from inside the house? Occam’s Razor would lead us to choose the former explanation, as it posits no more assumptions or entities than are needed to explain the observations. This is a good methodological rule of thumb, as it prevents us from going beyond what is supported by the evidence (and can be shown, by application of Bayes’ Theorem, to increase the probability that the explanation is true).
Now, in addition to their general lack of plausibility, conspiracy theories are usually far more complex and far more ad hoc (incorporating assumptions and entities that are not themselves independently justified) than the generally accepted explanations. Instead of just some fanatical Muslims hijacking planes and then flying them into the Towers and the Pentagon, we have to invent some huge complex of interrelated explanations for what actually happened, how and why we were led to believe otherwise, and why none of this has been exposed (other than to a few diligent conspiracy theorists). Every time we think of a way the conspiracy could go wrong (eyewitnesses telling the true story, any of the hundreds or thousands of people involved going to the press with damning evidence, or some other incontrovertible evidence coming to light), we are forced to add some additional ad hoc element to the theory in order to explain this away (eyewitnesses killed, the media are part of the conspiracy, all other evidence planted or changed, etc.). Hence, the application of Occam’s Razor would lead us to reject the unnecessarily complex conspiracy theory in favour of the much simpler explanation that accounts for the same observations with far fewer unproven assumptions, as the sketch below illustrates.
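Here is a toy illustration, using made-up probabilities, of why each ad hoc addition is so costly: every independent auxiliary assumption multiplies the theory’s overall probability down.

```python
from functools import reduce

# Hypothetical probabilities for the auxiliary assumptions that a vast
# conspiracy would need. Each value is an assumption for illustration only.
auxiliary_assumptions = {
    "thousands of insiders all stay silent": 0.10,
    "mainstream media complicit or fooled":  0.20,
    "all physical evidence planted/altered": 0.10,
    "no incriminating leak ever surfaces":   0.20,
}

joint = reduce(lambda p, q: p * q, auxiliary_assumptions.values(), 1.0)
print(f"Joint probability of all auxiliary assumptions: {joint:.4f}")  # 0.0004
```

Even if each auxiliary assumption were individually generous, their conjunction quickly becomes vanishingly improbable – which is the Bayesian cash value of Occam’s Razor.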
Conclusion
In conclusion, conspiracy theories generally contain important factual mistakes and commit a number of errors of reasoning. They tend to move from some agent having something to gain from a particular state of affairs to the unjustified conclusion that said agent therefore actually brought about that state of affairs. Furthermore, they are overly sceptical of any evidence that goes against their theory, and overly credulous of any that supports it. By adding ad hoc elements to the theory to explain away apparently contradictory evidence, they render the theory effectively unfalsifiable. They also tend to assume practically unlimited power and knowledge on the part of the conspirators. Whilst conspiracy theories might in principle be true, possible doesn’t mean probable, and they are, in general, far less plausible and parsimonious than the ‘official’ explanation.
Although small-scale and mundane conspiracies do of course take place all the time - and are regularly exposed as such - there is little reason to suppose that the sort of large-scale and hugely elaborate conspiracies beloved by conspiracy theorists are actually happening around us. Certainly the burden of proof is on the conspiracy theorists to provide the extraordinary evidence for such extraordinary claims, as they are the ones challenging the accepted and (at least prima facie) evidentially supported view - but this is a burden they have singularly failed to meet so far.
Sunday, October 11, 2009
How cutting-edge physics supports Naturalism
Introduction
In this (quite technical) post I will consider some of the novel theories of cutting-edge physics. I will discuss how, if they are true, they would support Naturalism, and then present some arguments in their favour. By naturalism I am referring specifically to metaphysical naturalism, as opposed to methodological naturalism. Metaphysical naturalism is a philosophical worldview that supposes that nature is all that exists, and that the supernatural is therefore non-existent (some versions of methodological naturalism are agnostic on the existence of the supernatural, but rule it not amenable to scientific investigation). By the supernatural, I mean pure or reductively uncaused mental entities[i] – which include such things as gods, spirits, and the like. Furthermore, naturalism attempts to explain and account for all phenomena and values by strictly natural means, and supposes that nature is amenable to investigation by the natural sciences.
Now I will list and briefly describe some of the cutting-edge theories under discussion.
The theories
1. String Theory[1]
In theoretical physics, string theory is a mathematical theory which posits that the elementary particles are actually vibrations of tiny (Planck length, i.e. about 10^-33 cm) one-dimensional extended objects known as strings. These strings are posited to move in ten spacetime dimensions, in which the six unobserved dimensions (beyond the three of space and one of time) are rolled up into complex shapes (i.e. compactified).
String theory has now moved on to encompass a set of five related superstring theories (known as ‘super’ because they incorporate supersymmetry), and M-theory. This last theory unifies the five superstring theories as limits of a single 11-dimensional theory in which strings are really special cases of objects of various dimensions, collectively known as branes[2], moving in this 11-dimensional spacetime. 2-dimensional branes are known as 2-branes (or membranes) and, in general, p-dimensional branes are known as p-branes - where p is any whole number less than 10. According to M-theory, these branes may grow to be as large as the universe itself.
String/M-theory is so far the best candidate ‘grand unified theory’ (or GUT) that unifies the fundamental forces and particles, including gravitation. Prior to string theory, attempts to incorporate gravitation into a GUT had failed because a never-ending series of infinities plagued the equations, due to the mathematical nature of point particles. String theory also explains features of the Standard Model that previously had no explanation. It posits that the electrons and quarks within an atom are not 0-dimensional objects but 1-dimensional strings, which can move and vibrate, giving the observed particles their flavour, charge, mass and spin.
2. Supersymmetry[3]
In particle physics, supersymmetry is a symmetry that relates each elementary particle of one spin to a partner particle that differs by half a unit of spin (the two being known as superpartners).
3. Symmetry Breaking[4]
Symmetry breaking in physics describes a phenomenon whereby fluctuations acting on a system crossing a critical point decide the system’s fate, by determining which branch of a bifurcation is taken. Of particular relevance here is spontaneous symmetry breaking[5], which describes the case where the underlying laws are invariant but the system appears not to be, because the system’s background, its vacuum, is non-invariant.
4. Quantum Fluctuations[6]
In quantum physics, a quantum fluctuation is a temporary change in the amount of energy at a point in space, arising from Heisenberg’s uncertainty principle.
5. Cosmic Inflation[7]
In cosmology, cosmic inflation is the theory that, within the first second after the Big Bang, the nascent universe went through a phase of exponential expansion driven by a negative pressure vacuum energy density. During this inflationary phase the universe is proposed to have doubled in size every 10^-34 s, with the rapid inflation decaying away after 10^-32 s. As the scalar field slowly relaxed to the vacuum, the cosmological constant went to zero, and space began to expand as we see it in the observable universe.
Cosmic inflation explains why the universe appears flat, homogeneous and isotropic, and also explains the origin of the large-scale structure of the universe (with the magnification to cosmic size of quantum fluctuations in the original microscopic inflationary region acting as seeds for the galaxies, etc).
6. Eternal Inflation[8]
Eternal inflation is an inflationary universe model in which our universe is just one ‘bubble’ of expanding space among many (possibly an infinite number), and other big bangs occur throughout the wider superstructure. These bubbles, or pocket universes, emerge spontaneously from this eternal background space-time ‘foam’ due to quantum fluctuations, and then inflate exponentially. This inflation will tend to decay, as in the case of our universe, but will occasionally increase (as the strength of the inflation field will fluctuate randomly and spontaneously from place to place and time to time). Although in the minority, these regions of increasing inflation would dominate in terms of volume of space.
It is postulated that the particular characteristics (fundamental constants and physical laws) of each ‘universe’ freeze into place during the first moments of the universe’s existence due to spontaneous symmetry breaking (and are therefore probably random).
7. Multiverse[9]
The multiverse (or megaverse) is the hypothesised infinite assemblage of bubble or pocket universes produced by some universe-generating mechanism such as Eternal (or Chaotic) Inflation (another is Smolin’s fecund universes theory), of which our ‘universe’ is but one infinitesimal part. This multiverse is itself embedded in inflating space that exists without end. According to calculations based upon inflation theory, our observed universe would be embedded in a region that is approximately 10^10,000,000,000 km across (by comparison, the observable universe is 10^23 km across). Beyond the edge of our region, space would still be inflating by doubling in size every 10^-34 s, as other regions of space that are still in their inflationary phase (unlike ours) would dominate; the gaps between the regions therefore grow much faster than the regions themselves, meaning that the pocket universes won’t intersect. Some proponents of the multiverse argue that it has always existed (hence, ‘eternal’ inflation).
8. String Landscape[10]
In string theory, calculations show that there is a huge number of possible ways in which the additional unobserved dimensions may be compactified. There are probably an infinite number altogether, but at least 10^500 variations may be cosmologically stable, producing metastable vacua.[ii]
Some scientists (e.g. Leonard Susskind, Andrei Linde, and Martin Rees) propose that each of these string theory solutions corresponds to a possible universe within an overall multiverse. If this is true, then Eternal Inflation would provide a mechanism for populating all of the possible solutions within the string landscape. Each type of compactification would then produce a different universe consisting of the non-compactified dimensions. In these possible universes, the fundamental physical constants, the types and strengths of the forces and particles, the nature of the physical laws, and even the number of observable dimensions would vary (even though some variations might be tiny, e.g. a variation in the 5th decimal place of the mass of the electron). Each of these corresponds to a solution in the string landscape and, due to the quantum mechanical nature of the universe-generating mechanism, the solution is hypothesised to be ‘chosen’ at random.
As I will explain later, this synthesis of the String Landscape and Eternal Inflation provides a possible solution to the so-called fine-tuning problem (it should be noted that Smolin selection and Eternal Inflation even without the string landscape may also do this).
How might these theories support Naturalism?
In this analysis I will focus upon the String Landscape and its synthesis with Eternal Inflation, and attempt to show how Naturalism would be supported if these theories are true. Henceforth, I will refer to the conjunction of these particular theories as SLEI.
One might intuit that the truth of SLEI would add weight to the case for Naturalism, but in this analysis I would like to put this intuition on a more rigorous footing. Therefore, I will make use of Bayes’ Theorem (but will not attempt to justify the theorem itself, which has already been formally proven). This is a mathematical formula used for calculating conditional probabilities.[iii] It is particularly useful as a means of calculating posterior probabilities given a set of observations. The particular form of Bayes’ Theorem that I will use is the following:
P(h/e&b) = P(h/b) x P(e/h&b) / ([P(h/b) x P(e/h&b)] + [P(~h/b) x P(e/~h&b)])
Where:
h = the hypothesis under consideration (in this case, that Naturalism is true)
b = the entirety of our relevant background knowledge
e = the entire collection of evidence that is directly relevant to ‘h’
P(h/e&b) = the probability of h given e and b
P(h/b) = the probability of h given only b
P(e/h&b) = the probability of e given h and b
P(~h/b) = the probability of (not h) given only b [the complement of P(h/b) i.e. P(h/b) + P(~h/b) = 1]
P(e/~h&b) = the probability of e given (not h) and b [NB. independent of P(e/h&b)]
So, in this particular case, P(h/e&b) is the probability that naturalism is true given the entirety of our relevant background knowledge, and the entire collection of evidence directly relevant to this hypothesis. Now, I will not attempt to insert values for any of the terms in order to calculate P(h/e&b) directly. Rather, I will instead demonstrate that the truth of SLEI would increase the value of P(h/e&b) – whatever its actual value is – by determining what would happen to the value of P(h/e&b) if we vary some of the other terms in the equation accordingly.
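As a minimal sketch, the formula above can be expressed in a few lines of Python; the parameter names mirror the definitions just given, and any numbers fed in later are rough values assumed purely for the sake of argument:

```python
def posterior(p_h_b, p_e_hb, p_e_nhb):
    """P(h/e&b): the probability of h given evidence e and background b.

    p_h_b   -- P(h/b),    the prior probability of h on background alone
    p_e_hb  -- P(e/h&b),  the probability of the evidence if h is true
    p_e_nhb -- P(e/~h&b), the probability of the evidence if h is false
    """
    p_nh_b = 1.0 - p_h_b  # P(~h/b), the complement of P(h/b)
    numerator = p_h_b * p_e_hb
    return numerator / (numerator + p_nh_b * p_e_nhb)
```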
I will now look at each of the terms in Bayes’ theorem for the hypothesis that naturalism is true, and evaluate how the truth of these theories would affect them.
1. Prior Probability
Firstly, P(h/b). In our case, it is the probability that naturalism is true given the entirety of our relevant background knowledge, and before we examine any specific evidence for the truth or otherwise of naturalism – also known as the prior probability that naturalism is true. Now, every cause that has ever been investigated and established by a reliable, truth-finding method (e.g. science) has turned out to be a purely natural one. Moreover, such reliable methods could establish that a supernatural cause exists, if such a cause does in fact exist, and is open to investigation. However, not once have such reliable methods ever shown a cause to be supernatural. So we have:
[A1]
P1: If reliable methods have only ever proven natural causes to exist in our world, then probably every cause in our world is natural.
P2: Reliable methods have only ever proven natural causes to exist in our world.
C: Therefore, probably every cause in our world is natural.
As this is an inductive argument, it does not establish with certainty that there exist only natural causes in the world. Rather, it makes this merely probable. Hence, P(h/b) is high – certainly higher than 0.5 (in order for it to be considered probable). It follows that its complement, P(~h/b) [the probability that not naturalism (but supernaturalism) is true given the entirety of our relevant background knowledge], is necessarily low – certainly lower than 0.5.
Now, for the purposes of this analysis I will decide to bracket the non-overlapping sets e and b such that SLEI (if true) is part of the set b and not part of the set e. That is, I will consider that SLEI is a subset of our background knowledge, rather than being a subset of the evidence that is directly relevant to h. This would have the following effect upon the terms within Bayes’ Theorem:
P(h/b) will be higher. One reason for this is that the argument [A1] above is given greater weight, because a reliable method (i.e. science) would have shown even more causes in the world to be natural. Moreover, these causes are particularly important ones, as they explain how our universe came into existence, with its particular physical properties, including microscopic ones.
Another reason why P(h/b) would be higher is that the truth of SLEI would defeat some potential defeaters for h. For example, it would falsify the fine-tuning (or anthropic) argument for the existence of God (of the standard Judeo-Christian variety). According to this argument, the mere fact that the universe allows life to exist in the first place is evidence of intelligent design. For instance, for life as we know it to evolve, it is supposed that there must be an unlikely combination of just the right initial conditions and just the right values of the fundamental physical constants (so-called anthropic coincidences). According to the argument, if any one of the values of up to 26 dimensionless fundamental physical constants[11] from the Standard Model weren’t extremely close to the actual value we find, then life would not be possible in our universe. Martin Rees reduces this to just 6 dimensionless constants whose values he deems fundamental to present-day physical theory and the known structure of the universe[12].
In either case, the apparent extreme unlikelihood of the universe forming by chance with just the right conditions to allow life is presented as evidence that those conditions were actually set by an intelligent designer in order to produce life. This cosmic intelligence is usually supposed to be God, although it should be noted that the argument doesn’t lead to the designer being any particular god, or even a god at all. It might instead be a team of gods, some other demiurge, a highly advanced universe-creating alien, or any of an infinite number of other possibilities. More formally, we have:
[A2]
P1: If the probability is small enough that our universe is life-bearing by chance alone, then it is more probable that our universe was intelligently designed to be life-bearing.
P2: The probability is small enough that our universe is life-bearing by chance alone
C: It is more probable that our universe is intelligently designed to be life-bearing.
Corollary: The intelligent designer in question is God.
Now, there are a couple of points that should be mentioned with regard to this argument. Firstly, in order to apply the argument in practice, some probability threshold would need to be determined below which we could agree that it is more likely that our universe is intelligently designed to be life-bearing, rather than being so by chance alone. One possible candidate for this threshold might be the Dembski threshold of 1 in 10^150[13]. Although the choice of threshold is moot, unless design can be shown to be impossible (and we should note that science hasn’t ruled out the possibility that our universe was designed by aliens[14], for example), then there must be some threshold below which it becomes more probable that our universe was designed to be life-bearing.
Secondly, with regard to P2, it has not been proven that the fundamental physical constants are in fact so improbably ‘fine-tuned’, or that they needed to be so for our universe to be life-bearing. It may be that there are really only one or two truly fundamental physical constants, and/or the values that these constants could take are constrained to a small set of possibilities. In that case, the total number of possible universes would be relatively small, with at least one of these possibilities being life-bearing (ours). Whether this is or is not the case, we may find that some sort of life would still have been possible in the universe even if the fundamental physical constants were significantly different to those that we find. Victor Stenger has argued along these lines[15].
However, even if we were to accept that our universe is indeed precisely and improbably fine-tuned for life, we still need not invoke design as the explanation. If SLEI is true, then all possible combinations of fundamental physical constants (as well as forces, mathematical laws, etc.) within the string landscape will eventuate in some universe or other of the multiverse. This means that a life-bearing universe is guaranteed to come to exist by chance alone (possibly an infinite number of times), because every possible solution from the string landscape will come to exist (if the selection is random, as, due to its quantum mechanical nature, it is proposed to be), and our universe is a possible solution within the string landscape that leads to a life-bearing universe. Thus P2 would be false, and the argument [A2] would fail.
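A toy calculation, using entirely made-up numbers, shows why a sufficiently prolific multiverse delivers this guarantee: if each pocket universe is life-permitting with some tiny probability p, then with n independent ‘draws’ from the landscape the chance of at least one life-bearing universe is 1 - (1 - p)^n, which tends to 1 as n grows:

```python
import math

p = 1e-120  # hypothetical chance that a random landscape solution permits life
for n in (1e119, 1e120, 1e122):  # hypothetical numbers of pocket universes
    # 1 - (1 - p)^n is numerically ~ 1 - exp(-p * n) for tiny p
    prob = 1.0 - math.exp(-p * n)
    print(f"n = {n:.0e}: P(at least one life-bearing universe) = {prob:.3f}")
# n = 1e+119: 0.095;  n = 1e+120: 0.632;  n = 1e+122: 1.000
```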
One further point worth mentioning is that even if our universe is all that exists (i.e. there is no multiverse), and it is fine-tuned for life, we still cannot legitimately infer a supernatural source. In fact, as Michael Ikeda and Bill Jefferys showed[16], the fine-tuning would actually count against a supernatural origin for our universe. The argument runs: a universe fine-tuned for life is improbable on naturalism; we find ourselves in a fine-tuned universe; therefore naturalism is improbable. But this confuses two different types of conditional probability. In particular, the fact that an outcome is highly improbable on some hypothesis does not imply that the hypothesis itself is improbable given that outcome. You need to compare the probabilities of obtaining the observed outcome under all hypotheses, and look for the ones that make it more probable. Whilst naturalism may still turn out to be improbable on fine-tuning, it may nevertheless be the most probable hypothesis – certainly far more probable than supernaturalism of the standard Christian variety (which is rendered improbable because we would expect God to sustain life without any need for the universe itself to be fine-tuned for life). Secondly, we must do the calculations based upon the evidence that we actually have. This includes the fact that we know our universe contains life, so the possibility of a naturalistic universe with no life is purely hypothetical.
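A minimal numeric sketch, with all values assumed purely for illustration, makes the point: a tiny P(e/h&b) is compatible with a very high P(h/e&b), provided P(e/~h&b) is smaller still:

```python
# Reusing the posterior() sketch from earlier. All values are assumptions.
prior_naturalism = 0.9   # hypothetical prior, P(h/b)
p_ft_given_nat   = 1e-6  # fine-tuning assumed very improbable on naturalism
p_ft_given_super = 1e-8  # but assumed even less expected on supernaturalism
                         # (a god could sustain life without any fine-tuning)

print(posterior(prior_naturalism, p_ft_given_nat, p_ft_given_super))  # ~0.999
```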
As an aside, this is a particularly egregious example of a corollary that doesn’t follow, necessarily or even probably, from the conclusion, but which is often tacitly assumed to by Christian proponents of the argument. At the very least, it is supposed by them to do a lot of work towards establishing the existence of God, which I think it manifestly fails to do. Getting from the existence of some inscrutable designer to God is actually the hard bit, as the claim for a supernatural designer with all the amazing and specific powers of God is a much more extraordinary one, and thus requires much more extraordinary evidence. This is analogous to Christians thinking that if they can show that certain people or places mentioned in the story of the resurrection of Jesus actually existed, then this does a lot of work toward establishing that the Resurrection actually happened. However, again, it is the leap to the supernatural that is the giant and extraordinary one, and establishing some other mundane historical details in the Bible does virtually nothing to help bridge that gap. For example, if I tell you that I have a friend called John who can levitate at will, then merely showing you my friend John (not levitating) does nothing to prove that John can actually levitate. The mere existence of a friend called John is not at all extraordinary or contentious; the levitation part of my claim is the part that is contentious, and that requires the robust supporting evidence.
Another possible defeater for h is the Cosmological Argument i.e.
[A3]
P1: Everything that begins to exist has a cause
P2: The universe began to exist
C: Therefore, the universe has a cause
Corollary: This cause is God
This is another example of a corollary that doesn’t follow from the conclusion. Moreover, if SLEI is true, then the universe in question would just be our particular universe (as just one part of the multiverse) and the cause in question would be some quantum mechanical universe-generating mechanism. Hence, there would indeed be a cause, but it would be a completely natural one, and the corollary would be falsified. And if we have a multiverse that is eternal (as part of SLEI), then P2 would be false, and the conclusion would not follow (and P1 may be false anyway, even if the multiverse is not eternal, as all we know is that everything we have observed to begin within our universe has a cause, which doesn’t necessarily mean that this concept is meaningful when talking about the beginning of the multiverse as a whole).
Of course, if SLEI is true, we may still be left with no explanation for why the multiverse exists at all, or where the quantum mechanical universe-generating mechanism came from, or why string/M-theory and its universe-generating mechanism are as they are. In such a case, we may just have to take this as a brute fact – something that exists necessarily and has no explanation. This is no worse than the God explanation, in which God is taken as the brute fact, and the naturalistic brute fact is actually far more plausible and parsimonious.
Since there remains no sound or valid argument for design, P(~h/b) will be lower. This, too, follows directly from P(h/b) being higher, as P(~h/b) is its complement. In this analysis, P(e/h&b) and P(e/~h&b) will remain unchanged, as I have bracketed b and e such that SLEI (if true) would be part of the set b and not part of the set e.
Now, let’s go back to our formulation of Bayes’ Theorem and determine what effects this will have on the probability that naturalism is true given the entirety of our background knowledge and evidence directly relevant to this i.e.
P(h/e&b) = P(h/b) x P(e/h&b) / ([P(h/b) x P(e/h&b)] + [P(~h/b) x P(e/~h&b)])
I won’t derive this in general, but will instead substitute some sample (and very rough) values into the equation. So, just for the sake of argument, assume that without SLEI:
P(h/b) = 0.95
P(~h/b) = 0.05
P(e/h&b) = 0.9
P(e/~h&b) = 0.3
So:
P(h/e&b) = 0.95 x 0.9 / [(0.95 x 0.9) + (0.05 x 0.3)]
= 0.855 / [0.855 + 0.015]
= 0.983 (3dp)
If that were the case, then with SLEI true, something like the following would result:
P(h/b) = 0.99 [i.e. higher than it was before]
P(~h/b) = 0.01
P(e/h&b) = 0.9
P(e/~h&b) = 0.3
So:
P(h/e&b) = 0.99 x 0.9 / [(0.99 x 0.9) + (0.01 x 0.3)]
= 0.891 / [0.891 + 0.003]
= 0.997 (3dp)
Hence, with these particular values, the truth of SLEI would increase the probability that naturalism is true given the entirety of our background knowledge and the evidence directly relevant to it. This is what we would intuitively expect, and what I was trying to establish; and although I won't derive the result in general, it is at least plausible that it holds generally.
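For what it’s worth, the posterior() sketch given earlier reproduces these sample figures directly (the inputs remain, as stated, rough values assumed for the sake of argument):

```python
print(round(posterior(0.95, 0.9, 0.3), 3))  # without SLEI: 0.983
print(round(posterior(0.99, 0.9, 0.3), 3))  # with SLEI:    0.997
```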
This analysis might be repeated by bracketing the non-overlapping sets e and b such that SLEI (if true) is part of the set e and not part of the set b (or even partly in b and partly in e, if that could be done). I have not done this, but I would expect the results to be similar.
What arguments can be advanced that these theories are more probably true than not?
Some physicists believe that these cutting-edge theories are more probably true than not because they are powerful and elegant. String/M-theory, for example, combines quantum mechanics and general relativity into a quantum theory of gravitation, and can also incorporate the standard model of particle physics. As such, it is a good candidate for a theory of everything. I think that there is something in this intuition, as the concepts of power and elegance or beauty (as physicists and mathematicians use them) actually encapsulate the idea of explaining a great deal of data with a relatively small set of assumptions.
However, to put this on a firmer footing, I would suggest that the theories are more probably true than not for the following reasons:
1. They are plausible. That is, they follow from and don’t contradict known facts and other good theories of how the universe is. String theory, for example, is quantum mechanical, Lorentz invariant, unitary, contains Einstein’s General Relativity as a low energy limit, and can incorporate the standard model of particle physics. Eternal inflation follows from the theory of cosmic inflation, quantum fluctuations, and symmetry breaking.
2. They are parsimonious. That is, they don’t require us to make up too much out of whole cloth, i.e. there are few completely ad hoc assumptions. Postulating the existence of strings or branes themselves is ad hoc, as is the assumption of additional compactified dimensions, but not much else needs to be made up (as opposed to the God theory, which requires masses of ad hoc elements, including the existence of a supernatural God with all sorts of characteristics and desires, the most powerful mind possible, and all sorts of manoeuvrings to explain away the lack of fit between prediction and evidence). Eternal inflation requires few ad hoc elements, and none is physically implausible.
3. They have good explanatory scope. That is, they predict many facts about the universe that we actually find to be true (and have not made any predictions that have so far been proven false). Eternal inflation explains the size, age, evolution, and macroscopic and microscopic structure of the universe, and its apparent fine-tuning for life. By contrast, ‘God exists’ doesn’t really predict much about the universe; and what you might expect it to predict is not actually found when we look at the evidence.
4. They have good explanatory power. That is, they make the facts that they predict highly probable.
From the scientific point of view, testing predictions that could falsify these theories or otherwise is very difficult, as the energies required to test string theory are huge (but may well be available to us in the future), and finding any direct evidence of other universes in the multiverse is likely to be impossible.
However, there might be indirect ways of testing them. For example, there is some suggestion that one or more of the fundamental constants may have changed during the evolution of our universe[17]. If the fundamental constants can change over time in our own universe, then they are clearly not fundamentally invariant – which is just what Eternal Inflation, amongst other multiverse theories, requires – so this offers some support to these theories. Another type of indirect support for SLEI is that, if the fundamental constants and other fundamental properties of our universe (e.g. forces, particles, physical laws) are a random selection from what is possible, then we would expect our universe to be only just barely life-bearing, rather than strongly so. An analogy would be a lottery in which only 3 correct balls from 6 are required in order to win a prize. If we pick a winner at random, then we would expect them to have only just won a prize (i.e. to have 3, or possibly 4, correct balls), rather than getting all 6 balls correct. When we look at the values of the fundamental constants and other things (such as dark energy) in our universe, it does indeed appear that the universe is no more bio-friendly than it needs to be.
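The lottery analogy is easy to check by simulation. The sketch below assumes a hypothetical 6-from-49 lottery in which any ticket matching at least 3 balls wins a prize; conditioned on winning at all, the overwhelming majority of winners match only the bare minimum:

```python
import random
from collections import Counter

random.seed(0)
winning_draw = set(random.sample(range(49), 6))
winners = Counter()

for _ in range(500_000):
    ticket = set(random.sample(range(49), 6))
    m = len(winning_draw & ticket)
    if m >= 3:  # only prize-winning tickets are 'observed'
        winners[m] += 1

for m in sorted(winners):
    print(f"{m} matches: {winners[m]} winners")
# Expect roughly 8,800 three-ball winners, a few hundred four-ball winners,
# a handful of five-ball winners, and almost certainly no jackpot.
```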
So, overall, we may have some indirect evidence that SLEI is correct. In addition to this, SLEI also satisfies the criteria usually taken for being a good explanation (plausibility, parsimony, explanatory scope and power).
[1] See: http://en.wikipedia.org/wiki/String_theory
[2] See: http://en.wikipedia.org/wiki/Brane
[3] See: http://en.wikipedia.org/wiki/Supersymmetry
[4] See: http://en.wikipedia.org/wiki/Symmetry_breaking
[5] See: http://en.wikipedia.org/wiki/Spontaneous_symmetry_breaking
[6] See: http://en.wikipedia.org/wiki/Quantum_fluctuation
[7] See: http://en.wikipedia.org/wiki/Cosmic_inflation
[8] See: http://en.wikipedia.org/wiki/Eternal_inflation
[9] See: http://en.wikipedia.org/wiki/Multiverse
[10] See: http://en.wikipedia.org/wiki/String_landscape
[11] See: http://en.wikipedia.org/wiki/Dimensionless_physical_constant#The_Standard_Model
[12] See: http://en.wikipedia.org/wiki/Dimensionless_physical_constant#Martin_Rees.27s_Six_Numbers
[13] See, for example: http://richardcarrier.blogspot.com/2009/05/statistics-biogenesis_01.html
[14] See: http://en.wikipedia.org/wiki/Fine-tuned_Universe#Alien_design
[15] See: http://www.colorado.edu/philosophy/vstenger/Cosmo/FineTune.pdf#search=%22Fine%20tuned%20universe%22
[16] See: http://bayesrules.net/anthropic.html
[17] See, for example: http://arxiv.org/abs/0810.1356
[i] As per Carrier’s definition in “Defending Naturalism as a Worldview: A Rebuttal to Michael Rea’s World Without Design” (The Secular Web: 2003), www.infidels.org/library/modern/richard_carrier/rea.shtml.
[ii] As calculated in Raphael Bousso & Joseph Polchinski, “Quantization of Four-form Fluxes and Dynamical Neutralization of the Cosmological Constant,” High Energy Physics (Theory) 18 Apr 2000, http://arxiv.org/abs/hep-th/0004134v3; and Michael R. Douglas, “Basic results in Vacuum Statistics,” High Energy Physics (Theory) 20 Sep 2004, http://arxiv.org/abs/hep-th/0409207.
[iii] See http://plato.stanford.edu/entries/bayes-theorem/.
In this (quite technical) post I will consider some of novel theories of cutting-edge physics. I will discuss how, if they are true, they would support Naturalism, and then present some arguments in their favour. By naturalism I am referring specifically to metaphysical naturalism, as opposed to methodological naturalism. Metaphysical naturalism is a philosophical worldview that supposes that nature is all that exists, and that the supernatural is therefore non-existent (some versions of methodological naturalism are agnostic on the existence of the supernatural, but rules it not amenable to scientific investigation). By the supernatural, I mean pure or reductively uncaused mental entities[i]- which include such things as gods, spirits and the like. Furthermore, naturalism attempts to explain and account for all phenomena and values by strictly natural means, and supposes that nature is amenable to investigation by the natural sciences.
Now I will list and briefly describe some of the cutting-edge theories under discussion.
The theories
1. String Theory[1]
In theoretical physics, string theory is a mathematical theory which posits that the elementary particles are actually vibrations of tiny (Planck length, i.e. about 10^-33 cm) one-dimensional extended objects known as strings. These strings are posited to move in ten spacetime dimensions, in which the six unobserved dimensions (beyond the three of space and one of time) are rolled up into complex shapes (i.e. compactified).
String theory has now moved on to encompass a set of five related superstring theories (known as ‘super’ because they incorporate supersymmetry), and M-theory. This last theory unifies the five superstring theories as limits of a single 11-dimensional theory in which strings are really special cases of objects of various dimensions, collectively known as branes[2], moving in this 11-dimensional spacetime. 2-dimensional branes are known as 2-branes (or membranes) and, in general, p-dimensional branes are known as p-branes - where p is any whole number less than 10. According to M-theory, these branes may grow to be as large as the universe itself.
String/M-theory is so far the best candidate ‘grand unified theory’ (or GUT) that unifies the fundamental forces and particles, including gravitation. Prior to string theory, attempts to incorporate gravitation into a GUT had failed because a never-ending series of infinities plagued the equations, due to the mathematical nature of point particles. String theory also explains the features of the Standard Model, which couldn’t be explained prior to string theory. String theory posits that the electrons and quarks within an atom are not 0-dimensional objects, but 1-dimensional strings. These strings can move and vibrate, giving the observed particles their flavor, charge, mass and spin.
2. Supersymmetry[3]
In particle physics, supersymmetry is a symmetry that relates elementary particles of one spin to another particle that differs by half a unit of spin (known as superpartners).
3. Symmetry Breaking[4]
Symmetry breaking in physics describes a phenomenon whereby fluctuations acting on a system crossing a critical point decide a system’s fate, by determining which branch of a bifurcation is taken. Of particular relevance here is Spontaneous symmetry breaking[5], which describes the case where the laws are invariant but it appears the system isn’t because the background of the system, its vacuum, is non-invariant.
4. Quantum Fluctuations[6]
In quantum physics, a quantum fluctuation is the temporary change in the amount of energy in a point in space, arising from Werner Heisenberg’s uncertainty principle.
5. Cosmic Inflation[7]
In cosmology, cosmic inflation is the theory that, within the first second after the Big Bang, the nascent universe went through a phase of exponential expansion driven by a negative pressure vacuum energy density. During this inflationary phase the universe is proposed to have doubled in size every 10^-34 s, with the rapid inflation decaying away after 10^-32 s. As the scalar field slowly relaxed to the vacuum, the cosmological constant went to zero, and space began to expand as we see it in the observable universe.
Cosmic inflation explains why the universe appears flat, homogeneous and isotropic, and also explains the origin of the large-scale structure of the universe (with the magnification to cosmic size of quantum fluctuations in the original microscopic inflationary region acting as seeds for the galaxies, etc).
6. Eternal Inflation[8]
Eternal inflation is an inflationary universe model in which our universe is just one ‘bubble’ of expanding space among many (possibly an infinite number), and other big bangs occur throughout the wider superstructure. These bubbles, or pocket universes, emerge spontaneously from this eternal background space-time ‘foam’ due to quantum fluctuations, and then inflate exponentially. This inflation will tend to decay, as in the case of our universe, but will occasionally increase (as the strength of the inflation field will fluctuate randomly and spontaneously from place to place and time to time). Although in the minority, these regions of increasing inflation would dominate in terms of volume of space.
It is postulated that the particular characteristics (fundamental constants and physical laws) of each ‘universe’ freeze into place during the first moments of the universe’s existence due to spontaneous symmetry breaking (and are therefore probably random).
7. Multiverse[9]
The multiverse (or megaverse) is the hypothesised infinite assemblage of bubble or pocket universes produced by some universe-generating mechanism such as Eternal (or Chaotic) Inflation (another is Smolin’s fecund universes theory), of which our ‘universe’ is but one infinitesimal part. This multiverse is itself embedded in inflating space that exists without end. According to calculations based upon inflation theory, our observed universe would be embedded in a region that is approximately10^10,000,000,000 km across (by comparison, the observable universe is 10^23 km). Beyond the edge of our region, space would still be inflating by doubling in size every 10^-34 s, as other regions of space that are still in their inflationary phase (unlike ours) would dominate, so the gaps between the regions are growing much faster than are boundaries, meaning that the pocket universes won’t intersect. Some proponents of the multiverse argue that it has always existed (hence, ‘eternal’ inflation).
8. String Landscape[10]
In string theory, calculations show that there is a huge number of possible ways in which the additional unobserved dimensions may be compactified. Altogether probably an infinite number, but at least 10^500 variations may be cosmologically stable, producing metastable vacua.[ii]
Some scientists (e.g. Leonard Susskind, Andrei Linde, and Martin Rees) propose that each of these string theory solutions corresponds to a possible universe within an overall multiverse If this is true, then Eternal inflation would give a mechanism for populating all of the possible solutions within the string landscape. Each type of compactification would then produce a different universe consisting of the non-compactified dimensions. In these possible universes the fundamental physical constants, types and sizes of the forces and particles, the nature of the physical laws, and even the number of observable dimensions would vary (even though some variations might be tiny e.g. a variation in the 5th decimal place of the mass of the electron). Each of these corresponds to a solution in the string landscape and, due to the quantum mechanical nature of the universe-generating mechanism, the solution is hypothesized to be ‘chosen’ at random.
As I will explain later, this synthesis of the String Landscape and Eternal Inflation provides a possible solution to the so-called fine-tuning problem (it should be noted that Smolin selection and Eternal Inflation even without the string landscape may also do this).
How might these theories support Naturalism?
In this analysis I will focus upon the String Landscape and its synthesis with Eternal Inflation, and attempt to show how Naturalism would be supported if these theories are true. Henceforth, I will refer to the conjunction of these particular theories as SLEI.
One might intuit that the truth of SLEI would add weight to the case for Naturalism but, in this analysis I would like to put this intuition on a more rigorous footing. Therefore, I will make use of Bayes’ Theorem (but will not attempt to justify the use of the theorem itself, which has already been formally proven). This is a mathematical formula used for calculating conditional probabilities.[iii] It is particularly useful as a means of calculating posterior probabilities given a set of observations. The particular form of Bayes’ Theorem that I will use is the following:
P(h/e&b) = P(h/b) x P(e/h&b) / ([P(h/b) x P(e/h&b)] + [P(~h/b) x P(e/~h&b)])
Where:
h = the hypothesis under consideration (in this case, that Naturalism is true)
b = the entirety of our relevant background knowledge
e = the entire collection of evidence that is directly relevant to ‘h’
P(h/e&b) = the probability of h given e and b
P(h/b) = the probability of h given only b
P(e/h&b) = the probability of e given h and b
P(~h/b) = the probability of (not h) given only b [the complement of P(h/b) i.e. P(h/b) + P(~h/b) = 1]
P(e/~h&b) = the probability of e given (not h) and b [NB. independent of P(e/h&b)]
So, in this particular case, P(h/e&b) is the probability that naturalism is true given the entirety of our relevant background knowledge, and the entire collection of evidence directly relevant to this hypothesis. Now, I will not attempt to insert values for any of the terms in order to calculate P(h/e&b) directly. Rather, I will instead demonstrate that the truth of SLEI would increase the value of P(h/e&b) – whatever its actual value is – by determining what would happen to the value of P(h/e&b) if we vary some of the other terms in the equation accordingly.
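To make this concrete, here is a minimal Python sketch of this form of Bayes' Theorem (the function and variable names are my own, chosen purely for readability):

    def posterior(p_h_b, p_e_hb, p_e_nhb):
        """Compute P(h/e&b) using the form of Bayes' Theorem given above.

        p_h_b   -- P(h/b):    prior probability of h given background knowledge
        p_e_hb  -- P(e/h&b):  probability of the evidence if h is true
        p_e_nhb -- P(e/~h&b): probability of the evidence if h is false
        """
        p_nh_b = 1.0 - p_h_b  # P(~h/b) is the complement of P(h/b)
        numerator = p_h_b * p_e_hb
        return numerator / (numerator + p_nh_b * p_e_nhb)

This function is reused later to check the sample calculations.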
I will now look at each of the terms in Bayes’ theorem for the hypothesis that naturalism is true, and evaluate how the truth of these theories would affect them.
1. Prior Probability
Firstly, P(h/b). In our case, it is the probability that naturalism is true given the entirety of our relevant background knowledge, and before we examine any specific evidence for the truth or otherwise of naturalism – also known as the prior probability that naturalism is true. Now, every cause that has ever been investigated and established by a reliable, truth-finding method (e.g. science) has turned out to be a purely natural one. Moreover, such reliable methods could establish that a supernatural cause exists, if such a cause does in fact exist, and is open to investigation. However, not once have such reliable methods ever shown a cause to be supernatural. So we have:
[A1]
P1: If reliable methods have only ever proven natural causes to exist in our world, then probably every cause in our world is natural.
P2: Reliable methods have only ever proven natural causes to exist in our world.
C: Therefore, probably every cause in our world is natural.
As this is an inductive argument, it does not establish with certainty that there exist only natural causes in the world. Rather, it makes it merely probable. Hence, P(h/b) is high – certainly higher than 0.5 (in order for it to be considered probable). It follows that its complement, P(~h/b) [the probability that not naturalism (but supernaturalism) is true given the entirety of our relevant background knowledge], is necessarily low – certainly lower than 0.5.
Now, for the purposes of this analysis I will decide to bracket the non-overlapping sets e and b such that SLEI (if true) is part of the set b and not part of the set e. That is, I will consider that SLEI is a subset of our background knowledge, rather than being a subset of the evidence that is directly relevant to h. This would have the following effect upon the terms within Bayes’ Theorem:
P(h/b) will be higher. One reason for this is that the argument [A1] above is given greater weight, because a reliable method (i.e. science) would have shown even more causes in the world to be natural. Moreover, these causes are particularly important ones, as they explain how our universe came into existence, with its particular physical properties, including microscopic ones.
Another reason why P(h/b) would be higher is that the truth of SLEI would defeat some potential defeaters for h. For example, it would falsify the fine-tuning (or anthropic) argument for the existence of God (of the standard Judeo-Christian variety). According to this argument, the mere fact that the universe allows life to exist in the first place is evidence of intelligent design. For instance, for life as we know it to evolve, it is supposed that there must be an unlikely combination of just the right initial conditions and just the right values of the fundamental physical constants (so-called anthropic coincidences). According to the argument, if any one of the values of up to 26 dimensionless fundamental physical constants[11] from the Standard Model wasn't extremely close to the actual value we find, then life would not be possible in our universe. Martin Rees reduces this to just 6 dimensionless constants whose values he deems fundamental to present-day physical theory and the known structure of the universe[12].
In either case, the apparent extreme unlikelihood of the universe forming by chance with just the right conditions to allow life is presented as evidence that those conditions were actually set by an intelligent designer in order to produce life. This cosmic intelligence is usually supposed to be God, although it should be noted that the argument doesn’t lead to the designer being any particular god, or even a god at all. It might instead be a team of gods, some other demiurge, a highly advanced universe-creating alien, or any of an infinite number of other possibilities. More formally, we have:
[A2]
P1: If the probability is small enough that our universe is life-bearing by chance alone, then it is more probable that our universe was intelligently designed to be life-bearing.
P2: The probability is small enough that our universe is life-bearing by chance alone.
C: It is more probable that our universe is intelligently designed to be life-bearing.
Corollary: The intelligent designer in question is God.
Now, there are a couple of points that should be mentioned with regard to this argument. Firstly, in order to apply the argument in practice, some probability threshold would need to be determined below which we could agree that it is more likely that our universe is intelligently designed to be life-bearing, rather than being so by chance alone. One possible candidate for this threshold might be the Dembski threshold of 1 in 10^150[13]. Although the choice of threshold is debatable, unless design can be shown to be impossible (and we should note that science hasn't ruled out the possibility that our universe was designed by aliens[14], for example), there must be some threshold below which it becomes more probable that our universe was designed to be life-bearing.
Secondly, with regard to P2, it has not been proven that the fundamental physical constants are in fact so improbably ‘fine-tuned’, or that they needed to be so for our universe to be life-bearing. It may be that there are really only one or two truly fundamental physical constants, and/or the values that these constants could take are constrained to a small set of possibilities. In that case, the total number of possible universes would be relatively small, with at least one of these possibilities being life-bearing (ours). Whether this is or is not the case, we may find that some sort of life would still have been possible in the universe even if the fundamental physical constants were significantly different to those that we find. Victor Stenger has argued along these lines[15].
However, even if we were to accept that our universe is indeed precisely and improbably fine-tuned for life, we still need not invoke design as the explanation. If SLEI is true, then every possible combination of fundamental physical constants (as well as forces, physical laws, etc.) in the string landscape will eventuate in some universe or other, which guarantees that a life-bearing universe will come to exist by chance alone (possibly an infinite number of times). This is because every possible solution from the string landscape will come to exist (if the selection is random, as, due to its quantum mechanical nature, it is proposed to be), and our universe is a possible solution within the string landscape that leads to a life-bearing universe. Thus P2 would be false, and the argument [A2] would fail.
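The probabilistic point here can be made explicit. If each universe-generating event independently has some fixed probability p > 0 of yielding a life-bearing universe, then over N events:

P(at least one life-bearing universe) = 1 - (1 - p)^N

which approaches 1 as N grows without bound, no matter how small p is. With an unlimited number of events, a life-bearing universe is effectively guaranteed.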
One further point worth mentioning is that even if our universe is all that exists (i.e. there is no multiverse), and it is fine-tuned for life, we still cannot legitimately infer a supernatural source. In fact, as Michael Ikeda and Bill Jefferys showed[16], the fine-tuning would actually count against a supernatural origin for our universe. The argument runs: a universe fine-tuned for life is improbable on naturalism; we find ourselves in a fine-tuned universe; therefore naturalism is improbable. This confuses two different conditional probabilities: the probability of fine-tuning given naturalism, and the probability of naturalism given fine-tuning. Two points follow. Firstly, the fact that an outcome is highly improbable under a hypothesis does not imply that the hypothesis is itself improbable given that outcome; we need to compare the probabilities of obtaining the observed outcome under all of the competing hypotheses, and identify those that make it most probable. Whilst naturalism may still turn out to be improbable on fine-tuning, it may nevertheless be the most probable hypothesis available – certainly far more probable than supernaturalism of the standard Christian variety (which is rendered improbable because we would expect God to be able to sustain life without any need for the universe itself to be fine-tuned for life). Secondly, we must do the calculations based upon the evidence that we actually have. This includes the fact that we know our universe contains life, so the possibility of a naturalistic universe with no life is purely hypothetical.
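A toy calculation illustrates Ikeda and Jefferys' point; the likelihoods below are purely hypothetical numbers chosen to show the structure of the comparison, not estimates of the real probabilities:

    # Hypothetical likelihoods of observing a fine-tuned, life-bearing universe:
    p_obs_given_naturalism = 1e-6  # improbable on naturalism (illustrative value)
    p_obs_given_design = 1e-9      # even less probable on design (illustrative value)

    # Giving both hypotheses equal priors of 0.5, Bayes' Theorem yields:
    prior = 0.5
    p_naturalism_given_obs = (prior * p_obs_given_naturalism) / (
        prior * p_obs_given_naturalism + prior * p_obs_given_design)

    print(round(p_naturalism_given_obs, 4))  # -> 0.999

With these illustrative numbers, naturalism comes out as by far the more probable hypothesis, even though the observation was highly improbable under it.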
As an aside, the corollary of [A2] is a particularly egregious example of one that doesn't follow, necessarily or even probably, from the conclusion, but which is often tacitly assumed to by Christian proponents of the argument. At the very least it is supposed by them to do a lot of work towards establishing the existence of God, which I think it manifestly fails to do. Getting from the existence of some inscrutable designer to God is actually the hard bit, since the claim of a supernatural designer with all of the amazing and specific powers of God is a far more extraordinary one, and thus requires far more extraordinary evidence. This is analogous to Christians thinking that if they can show that certain people or places mentioned in the story of the resurrection of Jesus actually existed, then this does a lot of work toward establishing that the Resurrection actually happened. However, again, it is the leap to the supernatural that is the giant and extraordinary one, and establishing some other mundane historical details in the Bible does virtually nothing to help bridge that gap. For example, if I tell you that I have a friend called John who can levitate at will, then merely showing you my friend John (not levitating) does nothing to prove that John can actually levitate. The mere existence of a friend called John is not at all extraordinary or contentious. It is the levitation part of my claim that is contentious, and that requires the robust supporting evidence.
Another possible defeater for h is the Cosmological Argument i.e.
[A3]
P1: Everything that begins to exist has a cause
P2: The universe began to exist
C: Therefore, the universe has a cause
Corollary: This cause is God
This is another example of a corollary that doesn't follow from the conclusion. Moreover, if SLEI is true, then the universe in question would just be our particular universe (as just one part of the multiverse), and the cause in question would be some quantum mechanical universe-generating mechanism. Hence, there would indeed be a cause, but it would be a completely natural one, and the corollary would be falsified. And if we have a multiverse that is eternal (as part of SLEI), then P2 would be false, and the conclusion would not follow. Indeed, P1 may be false anyway, even if the multiverse is not eternal: all we know is that everything we have observed to begin to exist within our universe has a cause, which doesn't necessarily mean that the concept of causation is even meaningful when talking about the beginning of the multiverse as a whole.
Of course, if SLEI is true, we may still be left with no explanation for why the multiverse exists at all, where the quantum mechanical universe-generating mechanism came from, or why string/M-theory and its universe-generating mechanism are as they are. In such a case, we may just have to take this as a brute fact, i.e. something that exists necessarily and has no further explanation. This is no worse than the God explanation, though, in which God is taken as the brute fact; and a natural brute fact is actually far more plausible and parsimonious than a supernatural one.
Since there remains no sound argument for design, P(~h/b) will be lower than it was before. This follows directly from P(h/b) being higher, as P(~h/b) is its complement. In this analysis, P(e/h&b) and P(e/~h&b) will remain unchanged, as I have bracketed b and e such that SLEI (if true) is part of the set b and not part of the set e.
Now, let’s go back to our formulation of Bayes’ Theorem and determine what effects this will have on the probability that naturalism is true given the entirety of our background knowledge and evidence directly relevant to this i.e.
P(h/e&b) = P(h/b) x P(e/h&b) / ([P(h/b) x P(e/h&b)] + [P(~h/b) x P(e/~h&b)])
I won't derive this in general, but will instead substitute some sample (and very rough) values into the equation. So, just for the sake of argument, assume that without Greene's novel theories (i.e. without SLEI as part of our background knowledge b):
P(h/b) = 0.95
P(~h/b) = 0.05
P(e/h&b) = 0.9
P(e/~h&b) = 0.3
So:
P(h/e&b) = 0.95 x 0.9 / [(0.95 x 0.9) + (0.05 x 0.3)]
= 0.855 / [0.855 + 0.015]
= 0.983 (3dp)
If that were the case, then with Greene's novel theories (i.e. with SLEI included in our background knowledge b), something like the following would result:
P(h/b) = 0.99 [i.e. higher than it was before]
P(~h/b) = 0.01
P(e/h&b) = 0.9
P(e/~h&b) = 0.3
So:
P(h/e&b) = 0.99 x 0.9 / [(0.99 x 0.9) + (0.01 x 0.3)]
= 0.891 / [0.891 + 0.003]
= 0.997 (3dp)
Hence, with these particular values, the truth of SLEI would increase the probability that naturalism is true given the entirety of our background knowledge and the evidence directly relevant to this. This is what we would intuitively expect, and is what I was trying to establish. Although I won't derive it here, the result does in fact hold generally: with P(e/h&b) and P(e/~h&b) held fixed (and both non-zero), the posterior P(h/e&b) is a strictly increasing function of the prior P(h/b), so raising the prior always raises the posterior.
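These sample calculations can be checked with the posterior function sketched earlier:

    # Without SLEI in the background knowledge:
    print(round(posterior(0.95, 0.9, 0.3), 3))  # -> 0.983

    # With SLEI in the background knowledge (higher prior, same likelihoods):
    print(round(posterior(0.99, 0.9, 0.3), 3))  # -> 0.997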
This analysis might be repeated by bracketing the non-overlapping sets e and b such that SLEI (if true) is part of the set e and not part of the set b (or even partly in b and partly in e, if that could be done). I have not done this, but I would expect the results to be similar.
What arguments could be advanced that these theories are more probably true than not?
Some physicists believe that these cutting-edge theories are more probably true than not because they are powerful and elegant. String/M-theory, for example, combines quantum mechanics and general relativity into a quantum theory of gravitation, and can also incorporate the standard model of particle physics. As such, it is a good candidate for a theory of everything. I think that there is something in this intuition, as the concepts of power and elegance (or beauty, as physicists and mathematicians use the term) actually encapsulate the idea of explaining a great deal of data with a relatively small set of assumptions.
However, to put this on a firmer footing, I would suggest that the theories are more probably true than not for the following reasons:
1. They are plausible. That is, they follow from and don’t contradict known facts and other good theories of how the universe is. String theory, for example, is quantum mechanical, Lorentz invariant, unitary, contains Einstein’s General Relativity as a low energy limit, and can incorporate the standard model of particle physics. Eternal inflation follows from the theory of cosmic inflation, quantum fluctuations, and symmetry breaking.
2. They are parsimonious. That is, they don't require us to make up too much out of whole cloth, i.e. there are few completely ad hoc assumptions. Postulating the existence of strings or branes themselves is ad hoc, as is the assumption of additional compactified dimensions, but not much else needs to be made up (as opposed to the God theory, which requires masses of ad hoc elements, including the existence of a supernatural God with all sorts of characteristics and desires and the most powerful mind possible, plus all sorts of manoeuvrings to explain away the lack of fit between prediction and evidence). Eternal Inflation requires few ad hoc elements, and none of them is physically implausible.
3. They have good explanatory scope. That is, they predict many facts about the universe that we actually find to be true (and have not made any predictions that have so far been proven to be false). Eternal Inflation explains the size, age, evolution, and macroscopic and microscopic structure of the universe, and its apparent fine-tuning for life. By contrast, 'God exists' doesn't really predict much about the universe; and what you might expect it to predict is not actually found when we look at the evidence.
4. They have good explanatory power. That is, they make the facts that they predict highly probable.
From the scientific point of view, testing predictions that could falsify or corroborate these theories is very difficult, as the energies required to test string theory are huge (though they may well become available to us in the future), and finding any direct evidence of other universes in the multiverse is likely to be impossible.
However, there might be indirect ways of testing them. For example, there is some suggestion that one or more of the fundamental constants may have changed during the evolution of our universe[17]. If the fundamental constants can change over time in our own universe, then they are clearly not fundamentally invariant; and since variable constants are a requirement of Eternal Inflation, amongst other multiverse theories, this offers some support to those theories. Another type of indirect support for SLEI is that, if the fundamental constants and other fundamental properties of our universe (e.g. forces, particles, physical laws) are a random selection from what is possible, then we would expect our universe to be only just barely life-bearing, rather than strongly so. An analogy would be a lottery in which only 3 correct balls from 6 are required in order to win a prize. If we pick a winner at random, then we would expect them to have only just won a prize (i.e. to have 3, or possibly 4, correct balls), rather than having got all 6 balls correct; a quick simulation of this analogy is sketched below. When we look at the values of the fundamental constants and other things (such as dark energy) in our universe, it does indeed appear that the universe is no more bio-friendly than it needs to be.
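Here is a Monte Carlo sketch of the lottery analogy in Python (the 6-from-49 format and the 3-ball prize rule are assumptions of the analogy only, not of any physical theory):

    import random
    from collections import Counter

    random.seed(0)
    BALLS = range(1, 50)  # a 6-from-49 style lottery

    winner_matches = Counter()
    for _ in range(500_000):  # simulate many independent tickets
        draw = set(random.sample(BALLS, 6))
        ticket = set(random.sample(BALLS, 6))
        m = len(draw & ticket)  # number of correct balls on this ticket
        if m >= 3:  # 3 or more correct balls wins a prize
            winner_matches[m] += 1

    print(winner_matches)

Typically, almost all winners have exactly 3 matches, a few have 4, and 5 or 6 matches are vanishingly rare: a randomly chosen winner has almost certainly 'only just' won, as we would expect of a randomly chosen life-bearing universe under SLEI.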
So, overall, we may have some indirect evidence that SLEI is correct. In addition to this, SLEI also satisfies the criteria usually taken for being a good explanation (plausibility, parsimony, explanatory scope and power).
[1] See: http://en.wikipedia.org/wiki/String_theory
[2] See: http://en.wikipedia.org/wiki/Brane
[3] See: http://en.wikipedia.org/wiki/Supersymmetry
[4] See: http://en.wikipedia.org/wiki/Symmetry_breaking
[5] See: http://en.wikipedia.org/wiki/Spontaneous_symmetry_breaking
[6] See: http://en.wikipedia.org/wiki/Quantum_fluctuation
[7] See: http://en.wikipedia.org/wiki/Cosmic_inflation
[8] See: http://en.wikipedia.org/wiki/Eternal_inflation
[9] See: http://en.wikipedia.org/wiki/Multiverse
[10] See: http://en.wikipedia.org/wiki/String_landscape
[11] See: http://en.wikipedia.org/wiki/Dimensionless_physical_constant#The_Standard_Model
[12] See: http://en.wikipedia.org/wiki/Dimensionless_physical_constant#Martin_Rees.27s_Six_Numbers
[13] See, for example: http://richardcarrier.blogspot.com/2009/05/statistics-biogenesis_01.html
[14] See: http://en.wikipedia.org/wiki/Fine-tuned_Universe#Alien_design
[15] See: http://www.colorado.edu/philosophy/vstenger/Cosmo/FineTune.pdf#search=%22Fine%20tuned%20universe%22
[16] See: http://bayesrules.net/anthropic.html
[17] See, for example: http://arxiv.org/abs/0810.1356
[i] As per Carrier’s definition in “Defending Naturalism as a Worldview: A Rebuttal to Michael Rea’s World Without Design” (The Secular Web: 2003), www.infidels.org/library/modern/richard_carrier/rea.shtml.
[ii] As calculated in Raphael Bousso & Joseph Polchinski, “Quantization of Four-form Fluxes and Dynamical Neutralization of the Cosmological Constant,” High Energy Physics (Theory) 18 Apr 2000, http://arxiv.org/abs/hep-th/0004134v3; and Michael R. Douglas, “Basic results in Vacuum Statistics,” High Energy Physics (Theory) 20 Sep 2004, http://arxiv.org/abs/hep-th/0409207.
[iii] See http://plato.stanford.edu/entries/bayes-theorem/.