Falsifying Paradigms to settle for Confirmation – Method and Subject Matter in Science (Phil of Sci #1-2)

This is the second half of the treatment of the first topic in the Philosophy of Science series – methodology and the identification of the territory that is properly scientific. The series overview is available here.

Falsifying Paradigms

In the last piece, we looked at the attempt to demarcate the scientific from the unscientific, by building a logically rigorous system on unshakable foundations.


In spite of all the aforementioned difficulty, the logical positivists were onto something important. There are major differences between the social sciences and the natural sciences, Marxist historical materialism and the careful study of history, astrology and astronomy, and psychiatry, psychology, and psychoanalysis – though all have claimed the moniker of science. Karl Popper was interested in finding principles that allow for a clear demarcation between such fields of inquiry. He proposed the falsifiability criterion to distinguish the meaningful from the meaningless, and to weed out unscientific fields of study.

As we saw, we can’t generate a set of rigorous statements that admit of a process of verification that is conclusive. But this seems to leave us on shaky ground. It is pretty obvious that some types of statements do not readily admit of verification, and others do. If scientific investigation is really just about verifying vague statements through repeated exposure, don’t all sorts of things count as science, and what are they really explaining after all?

Popper recognized this problem, and came up with a way of separating the wheat from the chaff. In order for some claim to be properly scientific, it must be expressed in a way that it could theoretically be disproved.

He argued that the way the movement of history was described in Marxist historical materialism, and the way a person’s self-reports were interpreted in psychoanalysis, both allowed any data point to be taken as reinforcing, rather than disconfirming, the theory. The theories themselves were therefore, in an explanatory sense, highly dubious. They could not be falsified, because every data point counted as further proof of the theory.

Unfortunately for Popper, it turns out that not everything can be falsified. In fact, many key elements of any scientific theory cannot be. Unobservable entities (e.g., logical terms, relations, and many theoretical posits, such as particles, forces, and larger macro-structures) play an indispensable role in any theoretical framework, so we need to find a way to account for them.

Additionally, falsifiability alone doesn’t get us very far, since there is much more to a theory’s viability than whether it can be falsified. We might ask of a brand-new conjecture, based on a few experiments, whether it has been adequately tested yet. Even if a claim can be falsified, much more is needed to show that it is viable.


How does science progress from one paradigm to another?


Thomas Kuhn’s landmark work – The Structure of Scientific Revolutions – is a lightning rod for criticism, and its conclusions have been misinterpreted by defenders of the naturalist philosophical understanding of science and postmodern advocates alike.

Kuhn argued that the body of scientific knowledge is not constantly growing in a roughly linear fashion. Contrary to this picture, he suggests that science operates within paradigms of thought. Scientific investigation occurs and accumulates within the assumptions of the paradigm until conflicts and problems arise that cause some researchers, or outsiders, to challenge those assumptions.

Rather than a smooth transition from, say, Ptolemaic to Copernican and Galilean astronomy, or from Newtonian physics to Special and General Relativity and Quantum Mechanics, the move is a disjointed one. Kuhn argued that the axioms and assumptions of the later theory are not directly related to those of the earlier one, and cannot be derived therefrom.

What this implies is that scientific truth is highly context-dependent – that is, dependent upon its fundamental assumptions, which are non-empirical, and thus not scientifically provable.

Though many have taken Kuhn’s views to suggest a high degree of relativism and discontinuity in the scientific enterprise, I think a more guarded conclusion is warranted. Kuhn himself only thought he had shown that science does not progress linearly, and that prior theoretical frameworks are not neatly translatable into their successors, often because the assumptions behind them are so different.

But this understanding of science is indeed that of the philosopher – a highly idealized and abstract one. Scientific inquiry is simply a human practice that proceeds in fits and starts. It is a loose collection of hypotheses (some well-tested, others unfounded, partial, or incomplete), bodies of confirmatory evidence, and raw data. It wouldn’t make sense to think of transitions as smooth and orderly.

It is quite obvious that though Newtonian physics can’t be easily translated into quantum theory or Special and General Relativity, the latter, taken together, better fit the data, and allow for the creation of more precise technology and practical applications. No one would doubt that science progresses in this fashion – toward greater practical usefulness.

Lastly, the philosopher W.V. Quine and the physicist Pierre Duhem arrived at similar conclusions about how verification occurs in science – now known as the Quine-Duhem thesis of underdetermination.

It is the idea that no hypothesis can be tested in isolation – there are always supporting assumptions and auxiliary hypotheses that cannot be tested in a given situation. Therefore, when a prediction fails, we can always save the hypothesis by rejecting one of the background assumptions as inadequate instead.

The implication for the philosophy of science is that testing is never simply a matter of gathering more data – no amount of data is ever enough to settle conclusively between rival hypotheses.


Credit: Joshua Sortino – Unsplash

Settling for Confirmation


Taken together, these developments suggest that far more modesty is required in the interpretation of what science consists in, and the truth status of a particular finding, or the theory to which it belongs.

The model of scientific inquiry that seems to emerge from these considerations is one that combines the best of both worlds – the rationalist and the empiricist.

The hypothetico-deductive model of science is highly pragmatic. We test hypotheses against the larger theory to which they belong, and deduce further testable hypotheses from prior ones that have already been ‘confirmed’ (by multiple experiments, to the best of our knowledge).

It combines induction and deduction, includes the criterion of falsifiability, and adds replication. In order for an isolated finding – or the larger body of theory to which it belongs – to be said to be ‘true’, the experimental conditions that generated the data must be replicated. That way, we can ensure that we’re not missing possible alternative explanations for the results we find.

The standard that emerges out of this is ‘provisional confirmation’.
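The test-replicate-confirm workflow just described can be illustrated with a small toy sketch. Everything here is an assumption for illustration: the function names, the number of replications, and the 95% threshold are invented, not drawn from any actual scientific protocol.

```python
def run_experiment(predict, trials=100):
    """Run one experiment: the fraction of trials in which the
    hypothesis's prediction holds."""
    hits = sum(1 for _ in range(trials) if predict())
    return hits / trials

def provisionally_confirm(predict, replications=5, threshold=0.95):
    """A hypothesis earns only *provisional* confirmation: every
    replication must reproduce the predicted result, and the verdict
    stays open to revision by future experiments."""
    results = [run_experiment(predict) for _ in range(replications)]
    return all(r >= threshold for r in results)

# Toy hypothesis: "dropped objects fall" (always true in this toy world).
falling = lambda: True
print(provisionally_confirm(falling))  # True, but only provisionally

# A hypothesis whose prediction fails is disconfirmed by replication.
phlogiston = lambda: False
print(provisionally_confirm(phlogiston))  # False
```

Note what the sketch cannot capture: Popper’s criterion concerns the *possibility* of failure, so a prediction that no conceivable experiment could fail would pass this loop while still being unfalsifiable.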

Bas van Fraassen’s view – constructive empiricism – reflects this standpoint. Observation is required for a claim to be scientifically meaningful, but unobservables may be used in the theoretical framework, because it is impossible for them not to be; the claims that result are true only as far as the observations go.

An unobservable would be something like a logical term, a relation, a force, an electron, a gluon, or a quark – things we have no direct experience of, but posit in order to make sense of our theoretical frameworks.

From an epistemological perspective then, science is the process of doing our best to test all possible alternative explanations for a set of related phenomena, and then deriving further statements from those assumptions.

A scientific statement, or theory is the best that we have because it is provisionally adequate to explain what we experience, predict future events, and create pieces of technology (really just tools) to suit our purposes.

What happened to Explanation and Truth?


Taking what we’ve seen so far from the highlight reel of 20th-century philosophy of science, it would seem that notions of explanation and truth have been jettisoned altogether. It appears as though we’ve settled for an understanding of science as a set of tools that we use to explain phenomena under a very particular – some would say, restrictive – form of presentation, in ways that we can understand, so that we can make predictions that interest us and fashion tools that make it easier to get things done. This is pure pragmatism, and an understanding of the nature of things that rules out asking what they are at a deeper level, settling instead for how they relate to one another, the paths they may take, and how forces impinge upon them, in a highly abstract manner.

Though you can see this outlook reflected unconsciously in the deeply cynical attitudes of educated people everywhere today, you will just as often hear people defend the ‘truth’ of scientific statements, and of science itself, with ferocity. Indeed, the embrace of science as the answer to all our problems is a widely held attitude.


With the advent of mechanistic thinking about the world, the reigning philosophical framing of the natural sciences led to a greater and greater – almost exclusive – preoccupation with the domain of efficient causation, that is, the material agents that act as causes. Increasingly, scientists ignored the formal arrangement of things and ceased to describe the tendencies of objects to behave in certain ways, opting instead for a mathematical description of the relations between ever smaller and more abstract entities.

In this series, I will argue that this view is restrictive. While it allows for a rich and rigorous understanding of a slice of reality in great detail, it is partial.

Scientific fields of study have their proper object domains, along with methods of experimentation, testing, and modelling that are mathematically rigorous, and to varying degrees verifiable and generalizable through experience and repetition.

However, what is often left out are the details about assumptions. It turns out that the nature of the question you ask determines, in part, the type of answer you will receive. Furthermore, the methods you use confine your analysis to that which they can detect. Far from licensing relativism or skepticism about science and its relationship to knowledge, such observations show the extent to which one’s philosophical starting points, methodology, and use of reason can narrow or broaden one’s horizons. With breadth and depth in the intellectual toolkit, much more of reality can be captured in its causal and ontological complexity.

In recent decades, we’ve seen a shift in thinking in some fields. With the advent of quantum physics, the determinism of fundamental forces and particles has given way to a picture informed by latent potential and dynamism; and in neuroscience and ecology, instead of treating the properties of wholes and systems as the sum of their parts, there is a growing recognition of how the formal arrangement of physical components shapes the properties that appear in larger structures.

An approach to science that views interrelations, in-built tendencies, and form as key to understanding causal relations and the properties of objects should begin to make itself known in many fields of study, yielding more fruitful insights.

I will argue in this series that science certainly does have to do with explanation and with truth, but with a different kind than is often assumed.

In the next instalment, I’ll look at the much-vexed concepts of correlation, and causation.
