Introduction

One of the intrinsic features of the scientific process is that it leads to modifications of previously accepted knowledge over time. Those modifications come in many forms. They may involve simply tacking new discoveries onto an existing body of accepted knowledge without really contradicting prevailing theoretical frameworks. They may necessitate subtle refinements or adjustments to existing theories to account for newer data. They may involve reformulating the way certain things are categorized within a particular field so that the groupings make more sense logically and/or are more practical to use. In rare cases, scientific theories are replaced entirely, and new data can even lead to an overhaul of the entire conceptual framework in terms of which work within a particular discipline is performed. In his famous book, The Structure of Scientific Revolutions, the physicist, historian, and philosopher of science Thomas Kuhn referred to such an event as a “paradigm shift” [1],[2]. This tendency is a result of efforts to accommodate new information and cultivate as accurate a representation of the world as possible.

The “scientists have been wrong before” argument

However, opponents of one or more areas of mainstream science sometimes attempt to recast this self-correcting characteristic of science as a weakness rather than a strength. Anti-GMO activists, anti-vaxxers, young earth creationists, climate science contrarians, AIDS deniers, and many other subscribers to unscientific viewpoints have used this as a talking point. The argument is essentially that the fact that scientists revise and sometimes even eliminate old ideas indicates that scientific knowledge is too unreliable to take seriously. They reframe the act of refinement over time as a form of waffling. On this basis, they conclude that whatever widely accepted scientific conclusions they don’t like should therefore be rejected.

Why the “Scientists Have Been Wrong Before” Gambit Exists

The main function of the “scientists have been wrong before” gambit is to serve as a post hoc rationalization for embracing ideas that are neither empirically supportable nor rationally defensible, and/or for rejecting ones that are. Pseudoscience proponents want to focus on perceived errors in science in order to downplay the successful track record of the scientific method. In doing so, they fail to account for the why and the how of scientific transitions. This is also ironic and hypocritical, because pseudoscience has no track record worth speaking of at all. Scientific theories are updated when other scientists better meet their burden of proof, and when doing so serves the goal of better understanding the universe. In contrast, the aforementioned gambit is a self-serving attempt to sidestep the contrarian’s burden of proof in order to resist change.

The argument is disingenuous for a number of reasons, not least of which is that it ignores the ways in which scientific knowledge typically changes over time. Previous observations place constraints on the specific ways in which scientific explanations can change in response to newer evidence. Old facts don’t just magically go away. In order to serve their purpose, reformulations of scientific theories have to account for both the old facts and the new. Otherwise, the change would not be an actual improvement on the older explanation, which presumably accounted for at least the older data, though not the newer.

Facts, Laws, and Theories

Before further unpacking this point, I should clarify my use of terminology: in this context, I’m essentially using the term fact to denote repeatedly observed data points. These are independent of the explanations proposed for their existence. Alternatively, one might say that facts simply report what we observe. Scientific laws are essentially persistent data trends which specify a mathematically predictable relationship between two or more quantities. Scientific theories, on the other hand, are well-supported explanations for why some aspect of the natural world is the way it is and/or how exactly it works. They are consistent with the currently available evidence and make testable predictions that are corroborated by a substantial body of repeatable evidence. In short, facts and laws describe; theories explain.

For example, evolution is both a fact and a scientific theory. This is because the fact that populations evolve and the modern scientific theory of evolution (which describes how it occurs) are separate but related concepts. Evolution is formally defined as a statistically significant change of allele frequency in a population over time (an allele is just genetics jargon for a variant of a particular gene). That is descent with modification. It happens all the time. We witness it constantly. It’s not hypothetical. It’s not speculation. It’s an empirical fact.
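To make that definition concrete, here is a minimal sketch (my own toy example in Python, using made-up parameter values) of the textbook Wright-Fisher model of genetic drift: even with no selection acting at all, the frequency of an allele wanders from generation to generation simply because each generation is a finite random sample of the one before it.

```python
import random

def wright_fisher(p0=0.5, pop_size=500, generations=100, seed=42):
    """Minimal Wright-Fisher sketch: track the frequency of one allele
    in a finite population where each generation is a binomial sample
    of the previous one (genetic drift alone, no selection)."""
    random.seed(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # Each of the 2N gene copies in the next generation is drawn
        # from the current allele pool with probability p.
        copies = sum(1 for _ in range(2 * pop_size) if random.random() < p)
        p = copies / (2 * pop_size)
        trajectory.append(p)
    return trajectory

freqs = wright_fisher()
print(f"Allele frequency drifted from {freqs[0]:.3f} to {freqs[-1]:.3f}")
```

Any statistically significant change in that frequency over time is, by the definition above, evolution; the simulation merely illustrates that such changes are an ordinary, observable feature of populations.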

The theory of evolution, on the other hand, is an elaborate explanatory framework which outlines how evolution occurs. This includes the mechanisms of natural selection, genetic drift, gene flow, mutation (and much more), and it makes many testable predictions about a wide range of biological phenomena. In science, a theory provides more information than facts or laws, because it connects them in ways that permit the generation of new knowledge. I’ll say it again: facts and laws describe; theories explain.

The Correspondence Principle

It’s true that scientific ideas can be wrong or incomplete and that scientific theories can change with new evidence. However, the argument that this justifies rejecting well-supported scientific theories simply because one doesn’t like their conclusions ignores the constraints that prior experimental results place on the ways in which scientific knowledge can realistically change in the future. People advancing the “scientists have been wrong before” gambit are typically vague and imprecise in their usage of the term “wrong.” It is often implied that wrong is being used in the sense of “totally factually wrong,” rather than merely incomplete, which is inconsistent both with scientific epistemology and with the history of science. It’s at odds with scientific epistemology because knowledge in science is generally conceived of in a fallibilistic and/or probabilistic manner rather than in a binary one [12]. It’s at odds with the history of science because it is not generally the case that the data used to support a theoretical claim turns out to be entirely mistaken; rather, the theory is replaced by a more complete one which, in many cases, simply looks different. Sure, theories can be expanded, and the meaning and implications of experimental data can be conceptually reframed, but new theories can’t directly contradict the aspects of the old one whose predictions corresponded with experimental data. Unless it can be shown that all prior data consistent with the predictions of the older theory was either fraudulent or due to systematically faulty measurements, this is simply not a viable option.

Another way to put it is that old facts don’t go away so much as their explanations can change in light of newly discovered ones.

This is reflected in what is called the correspondence principle [8].

A Paraphrasing of Bohr’s Conception of the Correspondence Principle

Although originally associated with Niels Bohr and the reconciliation of quantum theory with classical mechanics, the correspondence principle illustrates a concept which applies in all areas of science. Essentially, it says that any modifications made to classical mechanics in order to account for the behavior of matter in the microscopic and submicroscopic realms must agree with the repeatedly verified calculations of classical physics when extended to macroscopic scales [9]. More generally, the overarching concept of older (yet well-supported) scientific theories becoming limiting cases of newer and broader ones is inextricable from the advancement of scientific knowledge as a whole.
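A standard textbook illustration of Bohr’s version: in hydrogen, the frequency of the light emitted when an electron drops from level \( n \) to level \( n-1 \) approaches the classical orbital frequency of that electron as \( n \) becomes large (here \( R \) is the Rydberg constant):

\[ \nu_{n \to n-1} = cR\left(\frac{1}{(n-1)^2} - \frac{1}{n^2}\right) = cR\,\frac{2n-1}{n^2(n-1)^2} \;\longrightarrow\; \frac{2cR}{n^3} \quad (n \gg 1), \]

which is exactly the rate at which a classical electron in that orbit would revolve, and hence radiate. The quantum description does not erase the classical one; it contains it as a limiting case.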

This is why there exist certain facts that will probably never be totally refuted, even if the theories which explain and account for them are subsequently refined and/or placed within the broader context of newer and more comprehensive explanatory frameworks. This is necessarily the case because any candidate for a new scientific theory which proves inferior to the old framework at accounting for the empirical data would be a step backward (not forward) in terms of the degree to which our leading scientific theories map onto the real-world phenomena they purport to represent.

As Isaac Asimov put it:

“John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together” [16].

The Story of Gravity

Another one of my favorite examples of this is gravity. Our understanding of gravity has undergone multiple changes over the centuries, but none of those updates ever overturned the empirical observation that massive bodies reliably undergo an apparent acceleration towards other massive bodies in a mathematically predictable relationship. Aristotle was wrong that an object’s mass determines the rate at which it falls, and he explained falling in teleological terms, whereby certain objects were thought to have more “earth-like” properties, such that it was in their nature to belong on the ground [10]. But he didn’t dispute the basic observation that objects fall. Isaac Newton, who developed the inverse-square law relationship for gravity, did not develop a theory for why matter behaved this way. He merely described it [11]. Rather than being satisfied with spooky action at a distance, the prolific French physicist, astronomer, and mathematician Pierre-Simon, Marquis de Laplace conceptualized gravity in terms of classical field theory, whereby each point in space corresponded to a different value of a gravitational field, such that the field itself was thought of as the thing acting locally on a massive object [5].
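In modern notation (a compact summary, not either man’s original presentation), the two pictures can be set side by side:

\[ \mathbf{F} = -\frac{G m_1 m_2}{r^2}\,\hat{\mathbf{r}} \qquad \text{versus} \qquad \mathbf{g} = -\nabla \Phi, \quad \nabla^2 \Phi = 0 \ \ \text{(in empty space)}. \]

In the field picture, the potential \( \Phi \) (whose source term inside matter, \( \nabla^2 \Phi = 4\pi G \rho \), was later supplied by Poisson) assigns a value to every point in space, and it is that local field, rather than a distant mass acting directly, that is pictured as accelerating a test body. Both formulations predict the same motions; what differs is the conceptualization.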

The modern theory of gravity (Einstein’s General Relativity) explains it by positing a four-dimensional space-time manifold capable of degrees of curvature surrounding massive bodies. In this theory, space-time tells matter how to move, and matter tells space-time how to curve [6]. Like the theory of evolution, general relativity has made many testable and falsifiable predictions that have since been confirmed. Moreover, we know that GR cannot be the end of the story either, because the rest of the fundamental forces of physics are better described by quantum field theory (QFT), a formulation to which certain features of GR have notoriously not been amenable [7].
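Schematically, that slogan corresponds to the Einstein field equations (written here without the cosmological constant term): the left-hand side encodes the curvature of space-time, and the right-hand side encodes the distribution of matter and energy.

\[ R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} \]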

However, not one of these refinements contradicted the basic observations of massive bodies undergoing apparent accelerations in the presence of other massive bodies. Mathematically, it can be shown that Laplace’s formulation was consistent with Newton’s; the difference was in how it was conceptualized. Similarly, in situations involving relatively small masses and velocities, solving the Einstein Field Equations yields predictions that agree with Newton’s and Laplace’s out to several decimal places of precision. And although we don’t yet know for sure what form a successful reconciliation of GR and QFT will ultimately take, we know that it can’t directly contradict the successful predictions that GR and QFT have already made. This exemplifies the point that there exist constraints on the particular ways in which scientific theories can change.
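As a sketch of how that correspondence works out in practice (a standard weak-field result, summarized here rather than derived): for weak fields and low velocities, the time-time component of the metric can be written as a small perturbation of flat space-time,

\[ g_{00} \approx -\left(1 + \frac{2\Phi}{c^2}\right), \qquad \frac{d^2 x^i}{dt^2} \approx -\frac{\partial \Phi}{\partial x^i}, \qquad \nabla^2 \Phi \approx 4\pi G \rho, \]

so the geodesic equation and the Einstein field equations collapse back onto the Newtonian/Laplacian description, with \( \Phi \) playing the role of the classical gravitational potential. The relativistic corrections appear only at higher orders in \( v/c \) and \( \Phi/c^2 \), which is why Newtonian calculations remain accurate to many decimal places for ordinary planetary motion.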

Parsimony and Planetary Motion

I should note that concurrent with the progression of our scientific knowledge of gravity were changes in our understanding of planetary motion, and that episode demonstrates how the expansion of predictive power is not the only criterion governing theoretical transitions in science. More specifically, the Copernican model of the solar system didn’t actually produce calculations of superior predictive accuracy to the best Geocentric models of its time; Tycho Brahe’s formulation of Ptolemaic astronomy was more accurate. Although Brahe ultimately rejected Heliocentrism, Copernicus’s arguments intrigued him because the Copernican model seemed less mathematically superfluous than the system of epicycles required to make Geocentrism work, yet it yielded results that were more or less in the same ballpark [13]. In other words, what stood out about Copernicus’s model was that, even though it wasn’t quite as accurate, it accounted for a lot with a little. It was more parsimonious.

Many of the arguments against the Copernican model had more to do with Aristotelian physics than with the discrepancies in the resulting calculations, some of which were themselves a consequence of Copernicus’s assumption that orbits had to be circular, which was due in part to the philosophical notion that circles were the perfect shape. These problems were of course later resolved by the work of Johannes Kepler and Galileo Galilei; the former used Brahe’s own data to deduce that planets moved in elliptical orbits and swept out equal areas in equal times, whereas the latter formulated the law of inertia and overturned much of the Aristotelian physics upon which many arguments against the Copernican view were based [14]. In combination, Kepler and Galileo laid down much of the groundwork from which Isaac Newton would revolutionize science just a generation later.
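For reference, Kepler’s three laws as they are usually stated today (a modern paraphrase rather than Kepler’s own wording):

1. Each planet moves in an ellipse with the Sun at one focus.
2. The line joining a planet to the Sun sweeps out equal areas in equal times.
3. The square of a planet’s orbital period is proportional to the cube of its semi-major axis, \( T^2 \propto a^3 \).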

The moral of the story, however, is that there are times when parsimony directs the trajectory of further scientific inquiry. It’s not always directed by expanding predictive power. A certain amount of theorizing in science involves what can essentially be understood as a form of data compression. Ultimately, the consistency of theory with empirical reality is the end game, but if a concept can explain more facts more simply and/or with fewer assumptions, then it may be preferred over its leading competitor. It’s certainly preferable to lists of disparate facts lacking any common underlying principles, because science isn’t just about describing empirical phenomena, but about discovering and understanding the rules by which they arise.
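As a loose computational analogy (my own toy illustration in Python, with fabricated data; not anything from the historical record): model-selection criteria such as the Bayesian Information Criterion formalize this trade-off by rewarding goodness of fit while charging a penalty for every additional adjustable parameter, which is roughly the role that parsimony played, informally, in weighing the Copernican model against an epicycle-laden Geocentrism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a simple underlying trend plus noise.
x = np.linspace(0, 10, 50)
y = 3.0 * x + 2.0 + rng.normal(scale=2.0, size=x.size)

def bic(y, y_hat, k):
    """Bayesian Information Criterion: rewards goodness of fit,
    penalizes each additional free parameter."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

for degree in (1, 3, 7):
    coeffs = np.polyfit(x, y, degree)      # k = degree + 1 free parameters
    y_hat = np.polyval(coeffs, x)
    print(f"degree {degree}: BIC = {bic(y, y_hat, degree + 1):.1f}")

# The degree-1 model typically scores best (lowest BIC): the extra
# parameters of the higher-degree fits buy too little additional accuracy.
```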

This touches on the principle of Occam’s Razor which, insofar as it applies to science, can be roughly paraphrased as the idea that one ought not to multiply theoretical entities beyond that which is needed in order to explain the data [15]. Putting it another way, the more ad hoc assumptions one’s hypothesis requires in order to work, the more likely it is that at least one of them is mistaken.

Or as Newton put it,

“We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. Therefore, to the same natural effects we must, as far as possible, assign the same causes” [11].


Occam’s Razor is not a rule in science so much as it is a heuristic that sometimes proves useful. Ultimately, our ideas must agree with nature’s results first and foremost. Deference to the empirical world is always paramount, and the universe is under no obligation to meet our arbitrary standards of simplicity or aesthetic preferences, but some prospective theories are better than others at compressing our understanding into more cogent sets of concepts.

Incommensurability

In addition to introducing the idea of paradigm shifts in scientific advancement, Kuhn’s The Structure of Scientific Revolutions (TSoSR) also introduced the concept of incommensurability to describe the relationship between newer and older scientific paradigms. Initially, he introduced this as an umbrella term for any and all conceptual, observational, and/or methodological discrepancies between paradigms, as well as semantic differences in the use of specialized terminology. Kuhn’s own conception of incommensurability evolved considerably in the years following the publication of TSoSR, eventually restricting its applicability to problems with the translation of certain terminology common to both paradigms due to semantic differences arising from the transition to a new conceptual framework [3].

However, the basic idea was essentially that the methods, concepts, and modes of communication involved in disparate scientific paradigms are different enough that anyone from one paradigm attempting to communicate with someone from another would necessarily be speaking at cross-purposes, because the two lack a common measure. Even the observations themselves are thought to be too theory-laden for concepts and problems to be adequately translated across the theoretical boundary between the pre- and post-revolution phases. Kuhn himself even used an analogy from Gestalt psychology known as a Gestalt shift [4]. Here’s an example:

[Image: Hill, W. E. “My Wife and My Mother-in-Law.” Puck 16, 11, Nov. 1915]

Do you see a young woman looking away, or an old woman looking down and to your left? Can you switch back and forth between perspectives? The meaning of any reference to the “nose” of the figure depends on whether one is speaking within the young woman or old woman paradigm. The placement and thickness of the lines does not change during gestalt shifts. What changes is the way in which their meaning is understood.

Analogously, the precise meaning of scientific statements depends on the theoretical framework in terms of which they are being made. The empirical facts that the theories seek to explain have not gone away (though newly obtained data may very well be forcing the change). What changes significantly is the way in which the meaning of the data is conceptualized, and the way in which new questions are framed.

Incommensurability as an attack on the scientific method

Some opportunists might seek to co-opt this notion of incommensurability to attack the epistemological integrity of the scientific process itself by exaggerating the degree to which new paradigms invalidate previous scientific knowledge and by downplaying their regions of predictive overlap. However, such attacks would necessarily be weakened by having to account for the constraints the correspondence principle places on which aspects of a scientific theory can change and/or be invalidated by a paradigm shift. To conflate a conceptual change in science with the invalidation of all facets of an older theory is to implicitly presuppose an anti-realist relationship between theory and the empirical phenomena to which it refers.

This is circular reasoning.

The unstated assumption is that no meaningful correspondence relationship exists between scientific concepts and the aspects of the empirical world they purport to represent, and that changes in how terms are used and how problems are conceptualized therefore preclude the preservation of the facts and predictions an earlier model got right. As we saw in the earlier examples of the correspondence principle in action, this is demonstrably false. Many facts and predictions of older theories and paradigms are necessarily carried over to, and/or modified to be incorporated into, newer ones.

Concluding Summary

Scientific knowledge changes over time, but it does so in the net direction of increasing accuracy. This is one of the strengths of the scientific method, not one of its weaknesses. Most attempts to reframe it as a weakness (invariably via specious mental acrobatics) ignore the constraints necessarily placed on the ways in which scientific theories can change or be wrong.

Many important revolutions in science involve conceptual changes which do not contradict all of the facts and predictions of the older theory, but rather reframe them, restrict them to limiting cases, or expand them to more general ones.

The preservation of certain facts and predictions which are carried over from older theories to newer ones (because the older ones also got them right) can be understood in terms of the correspondence principle.

The validity of the concept of incommensurability between temporally adjacent scientific paradigms is restricted to terminological, conceptual, and sometimes methodological differences between the pre- and post-revolution phases, but it does not in any way contradict the correspondence principle.

The fact that scientific ideas can be wrong in principle does not mean that the particular ones the contrarian using this gambit dislikes will be among the discarded, nor that the ways in which they could conceivably be wrong would vindicate the contrarian’s desired conclusion.

Consequently, citing the observation that “scientists have been wrong before” is never a rationally defensible basis for rejecting scientific ideas which are currently well supported by the weight of the evidence; only bringing new evidence of comparable quality can do that. If the contrarian is not currently in the process of gathering and publishing the evidence that would supposedly revolutionize some area of science, then they are placing their bet on an underdog based on faith in a future outcome over which they have no influence and which they have no rational basis to expect. This is no more reasonable than believing one is going to win the lottery based on the observation that other people have won the lottery before, and then not even bothering to buy a ticket.

You don’t know what aspects of our current knowledge will turn out to be incorrect, nor which will be preserved. That’s why the maximally rational position is always to calibrate one’s position to the weight of currently available scientific evidence, and then simply leave room for change in the event that newer evidence arises which justifies doing so.

References

[1] Kuhn, T. S., & Hawkins, D. (1963). The structure of scientific revolutions. American Journal of Physics, 31(7), 554-555.

[2] Bird, A. (2004). Thomas Kuhn. Plato.stanford.edu. Retrieved 4 January 2018, from https://plato.stanford.edu/entries/thomas-kuhn/

[3] Sankey, H. (1993). Kuhn’s changing concept of incommensurability. The British Journal for the Philosophy of Science, 44(4), 759-774.

[4] What Impact Did Gestalt Psychology Have? (2018). Verywell. Retrieved 4 January 2018, from https://www.verywell.com/what-is-gestalt-psychology-2795808

[5] Laplace, P. S. A Treatise in Celestial Mechanics, Vol. IV, Book X, Chapter VII (1805), translated by N. Bowditch (Chelsea, New York, 1966).

[6] Einstein’s Theory of General Relativity. (2017). Space.com. Retrieved 4 January 2018, from https://www.space.com/17661-theory-general-relativity.html

[7] A list of inconveniences between quantum mechanics and (general) relativity? (2018). Physics.stackexchange.com. Retrieved 4 January 2018, from https://physics.stackexchange.com/questions/387/a-list-of-inconveniences-between-quantum-mechanics-and-general-relativity

[8] Bokulich, A. (2010). Bohr’s Correspondence Principle. Stanford.library.sydney.edu.au. Retrieved 4 January 2018, from https://stanford.library.sydney.edu.au/archives/spr2013/entries/bohr-correspondence/

[9] Bokulich, P., & Bokulich, A. (2005). Niels Bohr’s generalization of classical mechanics. Foundations of Physics, 35(3), 347-371.

[10] Pedersen, O. (1993). Early physics and astronomy: A historical introduction. CUP Archive.

[11] Newton, I. (1999). The Principia: Mathematical principles of natural philosophy. Univ of California Press.

[12] Fallibilism – By Branch / Doctrine – The Basics of Philosophy. (2018). Philosophybasics.com. Retrieved 5 January 2018, from http://www.philosophybasics.com/branch_fallibilism.html

[13] Blair, A. (1990). Tycho Brahe’s critique of Copernicus and the Copernican system. Journal of the History of Ideas, 51(3), 355-377.

[14] Copernicus, Brahe & Kepler. (2018). Faculty.history.wisc.edu. Retrieved 5 January 2018, from https://faculty.history.wisc.edu/sommerville/351/351-182.htm

[15] What is Occam’s Razor? (2018). Math.ucr.edu. Retrieved 5 January 2018, from http://math.ucr.edu/home/baez/physics/General/occam.html

[16] Asimov, I. (1989). The relativity of wrong. The Skeptical Inquirer, 14(1), 35-44.
