Uncoupling Truth from Elegance in Scientific Theory

In a recent comment in Nature, George Ellis and Joe Silk [1] employ Karl Popper’s falsifiability criterion of science to demote superstring theory to the realm of pseudoscience. At the heart of the debate lie two questions: Is superstring theory a scientific theory? And if so, how likely is it to be true? Here I will predominantly address the second question. I agree with many of the conclusions of Ellis and Silk, but will make the case that a more viable rationale for them can be found by joining Karl Popper’s falsifiability condition with Thomas Kuhn’s analysis of science as a progression of phases taking us from one paradigm to another. In order to contextualise the discussion of Kuhn’s ideas, I will first take a brief historical detour back to the scientific revolution of the 17th century.

A scientific revolution
The scientific revolution, instigated by such figures as Copernicus, Galileo, Descartes and Newton, contained the revolutionary notion that abstract thought and counter-intuitive reasoning can describe the world (as famously described by, e.g., Alexandre Koyré [2]). The Aristotelian physics by which the world was understood before the scientific revolution was highly intuitive: all things had a natural place; worldly objects were naturally in a state of rest, and they returned to this state after some time if pushed into motion. The centre of the earth was the natural place for objects containing the elements Earth and Water, while the heavens were the natural place for objects made out of Fire, with Air residing in between. And so on. As ridiculous as these ideas may seem to a schooled scientist of today, they appear more intuitive than what was to replace them. For the real scientific revolution was not the replacement of one scientific world view by a better one: it was the replacement of an intuitive world view with a counter-intuitive one reached by abstraction, reasoning, and logic; a world view deduced through mathematics. If one makes the preposterous statement that the moon and an apple are bound by the same laws of physics, and then makes predictions based on logical and mathematical deductions from this claim, one finds that what seemed preposterous at the time actually has merit in the real world.
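
To see what such a deduction looks like in practice, consider a back-of-the-envelope version of Newton’s own “moon test” (my numbers are modern round values, not Newton’s): the moon orbits at roughly 60 Earth radii, so if the gravity pulling the apple falls off as the inverse square of distance, the moon should fall towards the earth with acceleration g/60² ≈ 2.7 × 10⁻³ m/s². Its actual centripetal acceleration, computed from the orbital radius and period, agrees:

\[ a_{\text{moon}} = \frac{4\pi^{2} r}{T^{2}} \approx \frac{4\pi^{2}\,(3.84\times 10^{8}\,\mathrm{m})}{(2.36\times 10^{6}\,\mathrm{s})^{2}} \approx 2.7\times 10^{-3}\,\mathrm{m/s^{2}} \approx \frac{g}{60^{2}} . \]

The preposterous claim survives its confrontation with measurement.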

Ever since then, a theoretical physicist engaging with the fundamentals of existence has had to come up with preposterous ideas, work through the logic, and compare the deduced results with reality in order to determine whether or not the speculative abstractions of the theory hold any value. And when reality remains unexplained, as it did after the Michelson-Morley experiment in 1887, we come up with another preposterous idea, e.g. that the speed of light is a constant all observers agree upon, work through the logic, and see whether it accounts for what has been measured, and what else it can tell us about the world.
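
To make “working through the logic” concrete, here is the standard light-clock deduction in modern textbook form (a sketch, not the historical route): a clock ticks each time a light pulse crosses the distance L between two mirrors, so at rest it ticks every Δt = 2L/c. An observer who sees the clock move at speed v sees the pulse trace a longer, diagonal path, and demanding that both observers measure the same speed of light gives

\[ \left(\frac{c\,\Delta t'}{2}\right)^{2} = L^{2} + \left(\frac{v\,\Delta t'}{2}\right)^{2} \quad\Longrightarrow\quad \Delta t' = \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}} . \]

Moving clocks run slow: a counter-intuitive consequence deduced from a single preposterous postulate, and one that has since been confirmed, e.g. in the extended lifetimes of fast-moving muons.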

Thomas Kuhn’s Revolutions, Paradigms, and Crises
Thomas Kuhn famously described the history of science as a repetition of different phases [3]: Normal Science, Crisis, Revolution, Normal Science, Crisis, Revolution… During the Normal Science phase a certain paradigm defines the scientific field in question, a paradigm which includes a specific chosen direction of logical enquiry together with a set of practices for how science is performed. This phase runs into trouble when the chosen direction of logical enquiry ceases to produce scientific results, either in the form of making new predictions which can be verified, or in explaining remaining unexplained results. According to Kuhn, science then enters a phase of Crisis. The Crisis is resolved during a revolutionary phase in which all preposterous ideas are admissible and a new direction of logical enquiry is chosen. When this phase has calmed down and a direction is set, a new phase of Normal Science ensues.

Normal Science only makes discoveries within an already established world view. Revolutionary science is looking for what this world view might be. The latter can therefore be said to be much deeper in its concern for understanding and questioning nature, since the former is merely concerned with the limited realm of the already established view. At the same time, the validity of a world view cannot be established without the meticulous deduction of a Normal Science phase.

It is in the nature of Normal Science to be a phase within which scientists tend to ignore the possibility that a different direction of logical enquiry exists. Scientists cultivate an attitude that their line of reasoning will ultimately, with correct application of logical deduction, lead to a complete description of everything within their field. I would argue that if we apply Kuhn’s analysis to the field of fundamental physics, especially elementary particle physics, we find that we have been in a phase of Crisis for several decades now, perhaps since the mid-1980s. Few, if any, new testable predictions have been confirmed since then, and no new physics unaccounted for by the theory has been discovered. The latest results from the Large Hadron Collider at CERN make this very clear: the last outstanding prediction of the Standard Model of particle physics found its confirmation in the Higgs boson, but, to great disappointment, nothing else has yet emerged.

Superstrings and Falsifiability
There seems to be a conflict between one group of physicists, recently voiced by Ellis and Silk, on the one hand, and proponents of superstring theory on the other. At the heart of this conflict lie different views on falsifiability, a notion for demarcating science from pseudoscience made famous by Karl Popper [4]. It is still considered a benchmark that a scientific statement should, at least in principle, be possible to prove wrong. When scrutinised, Popper’s definition of science has not really held up (one may say that it has been falsified), but it can nevertheless be used as a minimal demand for the scientific. To claim, as many do, that superstring theory is not falsifiable is a clear abuse of the term. Even the axioms of superstring theory are falsifiable, e.g. that elementary particles are not point-particles but one-dimensional strings. If we could probe short enough distances, strings could be observed, or not. Supersymmetry, as well as six extra dimensions, are also falsifiable predictions of the theory. Based on the axioms of superstring theory, as well as special and general relativity and quantum field theory, and with a lot of hard work by some of the smartest theoretical physicists of the last couple of generations, what has been produced is possibly the most elegant and all-encompassing theory in the history of science.

The statement that the theory is not falsifiable rather stems from the fact that all the predictions of the theory are either considered so hard to test that they will never face the test of experiment, as in the case of the existence of strings or extra dimensions, or they are not unique features of superstring theory, as in the case of supersymmetry. To claim that the theory does not produce falsifiable predictions is to claim that the theory is not a scientific theory, which is not fair in the case of superstring theory. To instead claim that it cannot be falsified in practice is a completely different statement altogether, but it is not what Popper was referring to.

A more useful understanding of the scientific project comes from bringing Popper together with Thomas Kuhn. A theory should not only be falsifiable (that is a minimal demand); it also needs to be compared with nature regularly. For any line of enquiry can yield a beautiful, elegant, and logically consistent theory with (in principle) falsifiable predictions, but not all lines of enquiry will represent nature. Based on any set of reasonable axioms, a theory can be built, with plenty of falsifiable predictions. And if enough hard deductive work is performed by enough competent theorists, it is almost guaranteed to be a theory both elegant and consistent.

Here, theorists tend to fall into two camps. One holds that internal inconsistency kills a theory outright, and hence that the longer a theory stands the test of rigorous deduction without running into contradictions with itself, the more likely it is to be true. On this view, nature is replaced by the theory itself as the testing ground for the theory, and the falsifiability condition has been replaced (or at least supplemented) by a condition of internal consistency.

However, when a theory is found to have inconsistencies, this usually does not kill it; instead it spurs some hard thought and, eventually, a way to avoid the inconsistencies found. Here, superstring theory stands as the grand example. The first trouble it encountered was that it demanded the existence of faster-than-light particles, and even if this was not a formal inconsistency, it was a highly unwanted prediction. Consistency first demanded that space-time be 26-dimensional; the unwanted particles were then removed with the help of supersymmetry, which reduced the number of dimensions to 10. Then the theory ran into trouble again: it was no longer a unique theory but a set of five different theories. This crisis was again bridged by the postulation of an extra dimension, which turned the theory into the 11-dimensional M-theory, with strings now replaced by membranes (branes). Thus, inconsistencies do not seem to kill a theory, as long as hard-working theoretical physicists find ingenious ways to resolve them.
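
For reference, the faster-than-light particles in question were tachyons: states whose mass-squared comes out negative. In relativistic kinematics,

\[ E^{2} = p^{2}c^{2} + m^{2}c^{4}, \qquad v = \frac{p\,c^{2}}{E}, \]

so m² < 0 forces |p|c > E and hence v > c. In the usual conventions, the ground state of the open bosonic string has M² = −1/α′; exactly the kind of unwanted prediction that the supersymmetric reformulation removed.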

It should be clear in this scenario that elegance and consistency in no way guarantee truth. They do not even hint at the probability of the theory being true. Therefore, the “practical falsifiability condition” should actually be a very important one, since any set of axioms can potentially yield an elegant and consistent description of the world. In order to assess the probability of a theory being true, at least some steps in the deduction leading to the theory should be tested against reality. Otherwise the probability of a theory being correct is no higher than the probability that its axioms are correct, and without testing, this remains a guess. In that sense, superstring theory is no more likely to be true today than when it was first conceived as a quantum theory of gravity, forty years ago.
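
The point can be put in a minimal Bayesian sketch (my formalisation, not anything found in Ellis and Silk): let A be the conjunction of the theory’s axioms. Since the rest of the theory follows from A by deduction, the theory as a whole is true exactly when A is, so its probability is P(A). Updating on a piece of evidence E gives

\[ P(A \mid E) = \frac{P(E \mid A)\,P(A)}{P(E)} , \]

which moves the probability away from the prior only if E is more (or less) likely under A than under rival axiom sets. A consistency proof is equally likely under any consistent set of axioms, so it cannot favour one such set over another; only evidence provided by nature can.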

Conclusion
I claim that any reasonable axiomatic starting point has the potential of becoming an elegant and consistent theoretical framework. Some make the claim that consistency is itself so rigorous a test that it renders the truth of the theory highly probable. I have instead argued that the consistency and elegance of a theory is rather a testament to the amount of ingenious work that went into its construction.

To claim that a theory is not scientific because it does not produce statements which are falsifiable in practice is not an honest use of these terms. But this article makes the claim that in order to assess the truth of a theory, it has to be falsifiable also in practice, and tested against nature during its construction. Otherwise, its probability of being true is anyone’s unfounded guess.

 Thinking of Things, 2015.

  • martillu

    Hey! I was very happy when I saw you posted about this. I posted the article from George Ellis and Joe Silk in Nature on Facebook and was a bit sad that nobody commented on it. I think we need to discuss this.

    I agree with you on many points, as well as with the authors of that article. I think you add a nice argument there with Kuhn’s ideas. First of all, I think you weaken Popper’s criterion by saying that the axioms are testable and therefore the theory is testable. I may not like it so much, but at least these axioms are well defined. So, yeah, in principle the axioms could be tested. However, I am not sure we can ever test the axioms of string theory (even in the future… well, maybe in the super-very far future): even cosmic rays, which have been accelerated by (probably) galactic objects, are far from the energy needed to test it. So, for us, humans on the Earth, this is nowadays like postulating the existence of God at that scale.

    It is not the case that the standard model explains everything. String theory could make sharp predictions about current experiments, since there are quite a few things that the standard model cannot explain. Examples are what dark matter and dark energy are, or why neutrinos have mass. I am not sure whether string theory says anything about neutrinos being Majorana or Dirac, but that would be a nice prediction too!

    The problem is that the theory does not make those predictions, because it predicts (nearly) everything and many things can be ‘accommodated’ within it.

    Ah, and string theory started as a theory for hadrons. They just kept on getting spin-0 and spin-2 particles, and at some point they realised that the theory could be well suited for gravitation…

    Lluis

    • ToT (http://www.thinkingofthings.com/)

      I do not weaken the falsification criterion: it is weak. I don’t think Popper had in mind the possibility that people would suggest theories which could not be tested in practice. I don’t weaken the falsification criterion, I strengthen it, by saying exactly this: theories must be testable in practice to make any claim to truth.

      In general I think physicists put too much emphasis on falsification. It is too broad a criterion to be really useful, and this is the problem Ellis and Silk run into in their argument. It makes theirs a weak argument, though one leading to conclusions I am sympathetic towards.

  • Axel

    Ha, that old topic, eh? 😉

    The sole question here seems to be about the interpretation of Popper. For many physicists, falsifiability is probably the same as “regularly tested by nature”. In principle there is a difference between the two. But you are speaking about a physical theory here, not any scientific theory.

    • ToT (http://www.thinkingofthings.com/)

      Oh, I do not remember that from TTWP; interesting how the mind works… I will have a lot more to say about Kuhn and Popper in string theory in the next (planned) season, which will partly delve into science. This text was just a response to that Nature article.

      The point with Popper is that superstring theory is a new thing that was not quite around when Popper formulated his criterion. That a theory could be developed that was not practically possible to test was simply unheard of. Science, and physics in particular, was until then still predominantly observation- and experiment-driven… So while I agree with you, I don’t think most superstring guys necessarily do, which I have seen expressed in many places, not least by Motl.

  • Axel Cholewa

    I’m actually happy that the LHC, up to this point, does not give any results beyond the standard model. The whole philosophy behind today’s particle physics – build ever bigger things to reveal ever smaller things – has its limits. And because the theories it helped to establish are pretty much cemented but do not lead anywhere new, this experimental method might still serve for precision measurements of the established theories, but not as a way of looking for new ones.

    • ToT (http://www.thinkingofthings.com/)

      What you describe is pretty much the definition of a Kuhnian crisis… And I completely agree. Hopefully I will go to CERN and agree some more soon 😉