In a recent comment in Nature, George Ellis and Joe Silk [1] employ Karl Popper’s falsifiability criterion of science to demote superstring theory to the realm of pseudoscience. At the heart of the debate lie two questions: Is superstring theory a scientific theory? And if so, how likely is it to be true? Here I will predominantly address the second question. I agree with many of the conclusions of Ellis and Silk, but I will argue that a sounder rationale for them can be found by joining Karl Popper’s falsifiability condition with Thomas Kuhn’s analysis of science as a progression of phases taking us from one paradigm to another. To contextualise the discussion of Kuhn’s ideas, I will first take a brief historical detour back to the scientific revolution of the 17th century.
A scientific revolution
The scientific revolution, instigated by such figures as Copernicus, Galileo, Descartes and Newton, contained the revolutionary notion that abstract thought and counter-intuitive reasoning can describe the world (as famously described by Alexandre Koyré [2]). The Aristotelian physics by which the world was understood before the scientific revolution was highly intuitive: all things had a natural place; worldly objects were naturally in a state of rest, and they returned to this state after some time if pushed into motion. The centre of the earth was the natural place for objects containing the elements Earth and Water, while the heavens were the natural place for objects made of Fire, with Air residing in between. And so on. As ridiculous as these ideas may seem to a schooled scientist of today, they appear more intuitive than what was to replace them. For the real scientific revolution was not the replacement of one scientific world view by a better one: it was the replacement of an intuitive world view with a counter-intuitive one reached by abstraction, reasoning, and logic; a world view deduced through mathematics. If one makes the preposterous statement that the moon and an apple are bound by the same laws of physics, and then makes predictions based on logical and mathematical deductions from this claim, one can see that what seemed preposterous at the time actually has merit in the real world.
Ever since then, a theoretical physicist engaging with the fundamentals of existence has had to come up with preposterous ideas, work through the logic, and compare the deduced results with reality in order to determine whether or not the speculative abstractions of the theory hold any value. And when reality remains unexplained, as it did e.g. after the Michelson-Morley experiment in 1887, we come up with another preposterous idea, e.g. that the speed of light is a constant all observers agree upon, work through the logic, and see if it accounts for what has been measured, and what else it can tell us about the world.
Thomas Kuhn’s Revolutions, Paradigms, and Crises
Thomas Kuhn famously described the history of science as a repetition of different phases [3]: Normal Science, Crisis, Revolution, Normal Science, Crisis, Revolution… During the Normal Science phase a certain paradigm defines the scientific field in question, a paradigm which includes a chosen specific direction of logical enquiry together with a set of practices for how science is performed. This phase runs into trouble when the chosen direction of logical enquiry ceases to produce scientific results, either in the form of making new predictions which can be verified, or in explaining remaining unexplained results. According to Kuhn, science then enters a phase of Crisis. The Crisis is resolved during a revolutionary phase, in which all preposterous ideas are admissible and a new direction of logical enquiry is chosen. When this phase has calmed down and a direction is set, a new phase of Normal Science ensues.
Normal science only produces revelations within an already established world view. Revolutionary science searches for what this world view might be. The latter can therefore be said to be much deeper in its concern for understanding and questioning nature, since the former is confined to the limited realm of the already established view. At the same time, the validity of a world view cannot be established without the meticulous deduction of a Normal Science phase.
It is in the nature of Normal Science to be a phase within which scientists tend to ignore the possibility that a different direction of logical enquiry is possible. Scientists cultivate an attitude that their line of reasoning will ultimately, with correct application of logical deduction, lead to a complete description of everything within their field. I would argue that if we apply Kuhn’s analysis to the field of fundamental physics, especially elementary particle physics, we see that we have found ourselves in a phase of Crisis for several decades now, perhaps since the mid-1980s. Few, if any, new testable predictions have been confirmed since then, and neither has any new physics been discovered which the theory left unaccounted for. The latest results from the Large Hadron Collider at CERN make this very clear: the last remaining prediction of the Standard Model of particle physics found its confirmation in the Higgs boson, but, to great disappointment, nothing else has yet emerged.
Superstrings and Falsifiability
There seems to be a conflict between one group of physicists, recently voiced by Ellis and Silk, on the one hand, and proponents of Superstring Theory on the other. At the heart of this conflict lie different views on falsifiability, a notion for demarcating science from pseudoscience made famous by Karl Popper [4]. It is still considered a benchmark for a scientific statement that it be possible, at least in principle, to prove it wrong. When scrutinised, Popper’s definition of science has not really held up (one may say that it has been falsified), but it can nevertheless be used as a minimal demand for the scientific. To claim, as many do, that superstring theory is not falsifiable is a clear abuse of the term. Even the axioms of superstring theory are falsifiable, e.g. that elementary particles are not point-particles but one-dimensional strings. If we could probe short enough distances, strings could be—or not be—observed. Supersymmetry, as well as the six extra dimensions the theory requires, are also falsifiable predictions of the theory. Based on the axioms of superstring theory, as well as special and general relativity and quantum field theory, and with a lot of hard work by some of the smartest theoretical physicists of the last couple of generations, what has been produced is possibly the most elegant and all-encompassing theory in the history of science.
The statement that the theory is not falsifiable rather comes from the fact that all the predictions of the theory are either considered so hard to test that they will never stand the test of experiment, as in the case of the existence of strings or extra dimensions, or they are not unique features of superstring theory, as in the case of supersymmetry. To claim that the theory does not produce falsifiable predictions is to claim that the theory is not a scientific theory, which is not fair in the case of superstring theory. To instead claim that it cannot be falsified in practice is a completely different statement altogether, but that is not what Popper was referring to.
A more useful understanding of the scientific project comes from bringing Popper together with Thomas Kuhn. A theory should not only be falsifiable (that is a minimal demand); it also needs to be compared with nature regularly. For any line of enquiry can yield a beautiful, elegant, and logically consistent theory with (in principle) falsifiable predictions, but not all lines of enquiry will represent nature. Based on any set of reasonable axioms, a theory can be built with plenty of falsifiable predictions; and if enough hard deductive work is performed by enough competent theorists, the result is almost guaranteed to be a theory both elegant and consistent.
Here, theorists tend to fall into two camps. One claims that if a theory is found to be internally inconsistent, it is dead. The more a theory withstands the test of rigorous deduction without running into contradictions with itself, the more likely it is to be true. Here nature is replaced by the theory itself as the testing ground for the theory, and the falsifiability condition has been replaced (or at least supplemented) by a condition of internal consistency.
However, if a theory is found to have inconsistencies, this usually does not kill it; instead it spurs some hard thought and eventually a way to avoid the inconsistencies found. Here, superstring theory stands as the grand example. The first problem it encountered was that it demanded the existence of faster-than-light particles, and even if this was not a formal inconsistency, it was a highly unwanted prediction; consistency also forced space-time to be 26-dimensional. The way around this turned out to be supersymmetry, which removed the faster-than-light particles and reduced the required dimensionality of space-time to 10. Then the theory ran into trouble again: it was no longer a unique theory but a set of five different theories. This crisis was again bridged by the postulation of an extra dimension, which turned the theory into the 11-dimensional M-theory, with strings now replaced by higher-dimensional branes. Thus, inconsistencies do not seem to kill a theory, as long as hard-working theoretical physicists find ingenious ways to resolve them.
It should be clear in this scenario that elegance and consistency in no way guarantee truth. They do not even hint at the probability of the theory being true. Therefore, the “practical falsifiability condition” should actually be a very important one, since any set of axioms carries the possibility of an elegant and consistent description of the world. In order to assess the probability of a theory being true, at least some steps in the deduction leading to the theory should be tested against reality. Otherwise the probability of a theory being correct is no higher than the probability that its axioms are correct, and without testing, that remains a guess. In that sense, superstring theory is no more likely to be true today than when it was first conceived as a quantum theory of gravity, forty years ago.
I claim that any reasonable axiomatic starting point has the potential of becoming an elegant and consistent theoretical framework. Some make the claim that the consistency is itself so rigorous a test that it renders the truth of the theory highly probable. I have instead argued that the consistency and elegance of a theory are rather a testament to the amount of ingenious work that went into its construction.
To claim that a theory is not scientific because it does not produce statements which are falsifiable in practice is not an honest use of these terms. But this article makes the claim that in order to assess the truth of a theory, it has to be falsifiable also in practice, and tested against nature during its construction. Otherwise, its probability of being true is anyone’s unfounded guess.
Thinking of Things, 2015.