This is the first instalment in the Philosophy of science series. The series overview is available here.
Isn’t It Straightforward? Method and Subject Matter in Science
It seems rather straightforward to many: science is the investigation of the world around us by appealing to experience – rather than conceptual abstractions – testing hypotheses against the data, and ruling out alternative explanations. We are led by the data, and that’s that.
While the general thrust of that sentiment is true, the methods used in scientific inquiry, and the subject matter with which it is concerned, have changed significantly over the past century alone.
Philosophies of science are not the driving forces of these changes; rather, they articulate what’s going on in the scientific landscape at the time and shape it to some extent. I’ll take a brief tour through the major changes that have characterized our understanding of the scientific process in the past 100 years. This is not a work of history – of how things came about – but rather a set of important highlights in the intellectual world that give the lay of the land.
Logical Positivism – Verify yourself
A natural question to ask – and one that is certainly still relevant today – is what sorts of things can be investigated scientifically? Perhaps not everything is a good candidate.
Logical positivism was an exciting movement with a dull name. It had as its aim the determination of the appropriate subject matter of scientific investigation so as to separate the wheat from the chaff.
The concern was that some statements are meaningful, and others are not, by virtue of how their truth status is determined. If a statement’s truth status cannot be determined through a conclusive procedure, then it is meaningless and not something that can be investigated scientifically.
They sought to build up a foundation of basic propositions, both logical and empirical, from which the truth of more complicated statements could be derived. Without this foundation, the verifiability of more complex statements could not be guaranteed.
| What there is | How we know it | How it’s represented in language/logic |
| --- | --- | --- |
| The domain of the Logical | Necessary – cannot be false | A priori – known by reason, without experience. Analytic – true by definition (i.e., by the stipulated meaning of words), e.g., ‘Brothers are male siblings’ |
| The domain of the Empirical | Contingent – truth/falsity depends upon the state of affairs in the world | A posteriori – known through experience, e.g., ‘Peter is sitting in his chair’. Synthetic – true by experience, e.g., ‘Graham is my brother’ |
Logical positivists wanted to find necessary truths, knowable through reason (a priori), that could be connected to what we know through experience (a posteriori). Scientists are engaged in the same search, though they may not use the same terms. The implicit assumption of many scientists is that they are searching for laws of nature that are necessary, from which they can derive true statements about the contingent states of affairs that we observe (through experience) in the world around us.
The way in which propositions can be true is important to establish. Analytic statements are true by virtue of their meaning, whereas synthetic statements are true to the extent that they correspond to our observations of reality.
With analytic statements, truth is derived from the essential definitional components of terms and the relationship of particular cases to the larger sets to which they belong. For example, if we know that it takes a certain amount of time to travel to another city at a certain speed, we will be able to derive many other truths from that claim: statements about the weight of the vehicle, the route taken, perhaps what make and model it is, whether the city is near Toronto or Montreal, and so on. The truth of these further statements is conditional upon the truth of the more basic claim, and is derived from it.
On the other hand, to determine the truth of an empirical statement, we have to engage in repeated observation. We can never conclusively verify an empirical claim, because in principle there is never enough data to do so. The truth of the claim is always probabilistic. The data are only ever what we have observed – that which is unobserved can never be quantified and ruled out.
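One classical way to make the “always probabilistic” point vivid is Laplace’s rule of succession – my illustration here, not part of the positivists’ own apparatus. After observing n sunrises in n mornings, a uniform prior over the unknown sunrise rate yields a probability of (n+1)/(n+2) that the sun rises tomorrow: ever closer to certainty, never reaching it.

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: the posterior probability that the
    next trial succeeds, given `successes` out of `trials` so far,
    under a uniform prior on the unknown success rate."""
    return Fraction(successes + 1, trials + 2)

# No matter how many sunrises we observe, certainty is never reached.
for n in (10, 1_000, 1_000_000):
    p = rule_of_succession(n, n)  # n sunrises out of n mornings
    print(n, p, float(p))        # probability approaches, never equals, 1
```

However many observations we feed in, the fraction stays strictly below 1 – a numerical echo of the claim that unobserved cases can never be ruled out.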
The positivists wanted to find a way of avoiding this problem, by making statements about the empirical world expressed in such a manner that they are always derived from more basic statements that are necessarily true.
To do so, they would attempt to build a consistent (i.e., generating no contradictions) logical system out of fundamental axioms.
Axioms are self-evident truths that are simply intuited. An example would be the law of identity – that each thing is identical to itself (A = A). It would be absurd if this were false. Another is the Principle of Sufficient Reason – that everything must have a reason or cause for its existence. Otherwise, how did it come to be?
You might be thinking that these things hardly need stating, but they are all important if, in our derivations, we produce something that contradicts one of those fundamental principles.
An example of axioms and their use in a formal system: the common notions of Euclid, the Ancient Greek geometer, out of which he built a system of geometry that stood for centuries:

1. Things which are equal to the same thing are equal to one another.
2. If equals are added to equals, the wholes are equal.
3. If equals are subtracted from equals, the remainders are equal.
4. Things which coincide with one another are equal to one another.
5. The whole is greater than the part.
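To give a feel for what “deriving results by rules alone” looks like, here is a toy sketch – my own illustration, not Euclid’s procedure – that treats his first common notion (things equal to the same thing are equal to one another) as a mechanical rule of inference over stipulated equalities:

```python
# Toy derivation engine: equalities are unordered pairs of terms, and
# Euclid's first common notion -- "things equal to the same thing are
# equal to one another" -- is applied mechanically until nothing new
# can be derived.

def close_under_axiom1(equalities: set) -> set:
    """If {a, c} and {b, c} are known equal pairs, conclude {a, b}.
    Returns the closure: everything derivable by repeated application."""
    derived = set(equalities)
    changed = True
    while changed:
        changed = False
        pairs = list(derived)
        for p in pairs:
            for q in pairs:
                if p != q and p & q:          # share a common term
                    new = frozenset(p ^ q)    # the two outer terms
                    if len(new) == 2 and new not in derived:
                        derived.add(new)
                        changed = True
    return derived

# Stipulate: AB = CD and CD = EF (think of AB, CD, EF as line segments).
axioms = {frozenset({"AB", "CD"}), frozenset({"CD", "EF"})}
theorems = close_under_axiom1(axioms)
# AB = EF is now a theorem: both are equal to the same thing, CD.
```

The point of the sketch is that the conclusion AB = EF is produced by rule-application alone, with no appeal to what the terms mean – exactly the kind of derivation the positivists hoped to generalize.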
They assumed that if a system is built out of fundamental axioms such as these, then if we run into a contradiction in deducing a further statement from those premises, the assumptions must not have been correct.
One of the reasons for such a rigorous method was a strong anti-metaphysical bias. Concepts are typically understood to have essential properties that make them what they are – for example, saying that a human is defined by the features of rationality and animality, a ‘rational animal’, or that water is H2O, and so forth.
The logical positivists believed that they could not confirm whether statements like 2+2=4 are true by appeal to reality, because then they would have to assume that the numbers 2 and 4 exist in some ideal form and have the real properties we assume they do when we use them.
They believed that statements such as 1+1=2 are true because they are derivable from a set of foundational assumptions that are true by definition. True because they are intuitively basic, and to deny them would result in an absurdity.
Therefore, once the fundamental axioms are stipulated (axioms that cannot be false, because for them to be so would amount to a contradiction), everything else is supposed to follow. All other terms in the system are purged of any ‘meaning’ derived from their properties, and everything is defined in terms of rules of inference.
We can then hypothetically build up a system of testable statements from a set of rigorous assumptions, and demarcate the scientific from the unscientific.
Wittgenstein and Gödel use logic to show that there’s more to life, and even logic, than logic
Both closely associated with the Vienna Circle of logical positivists, Ludwig Wittgenstein and Kurt Gödel were two very different, but similarly singular characters. Oddly enough, each departed in his own way from the beliefs of the logical positivists, but their conclusions had similar upshots.
Both would show that the dream of the positivists was fundamentally flawed: Wittgenstein by arguing that formal logical systems have nothing to say about the real world, and Gödel by demonstrating that formal systems cannot, by their own assumptions, prove their own consistency – nor, if consistent, prove every truth expressible in them.
In his first major work, the Tractatus Logico-Philosophicus, Wittgenstein tried to follow some of the principles of the positivists to their logical conclusion.
As we saw above, a formal logical system is meant to derive all of its true statements from its own axioms. These axioms are operational statements describing rules for combining primitive variables into more complicated statements, and into proofs (i.e., arguments) built from them.
According to the positivists, and to others of a similar persuasion, such as Bertrand Russell and Alfred North Whitehead, there is no room for a fundamental maxim that describes the ‘meaning’ of one of the terms, for such a statement is not derivable from the self-consistent rules of the system. In other words, it would not follow from the assumptions of the system itself. This is the very thing they were trying to avoid, for there seemed no good reason why ‘meanings’ could be tacked on willy-nilly in order to make the assumptions of the system work. They could not be guaranteed to be true if they could be pulled out of thin air.
To make this a little clearer: it would be like calling a sculpture entirely the product of one artist even though someone else had in fact sculpted the head. The head is an addition that does not follow from the principal creative act of the work, so the whole could no longer be called the work of that artist – and yet without the head, the artwork would be considerably incomplete. Likewise, a stipulated ‘meaning’ for a term in the system would truly be an ad hoc addition, tacked on by the system-builder for no other reason than to make the system work. How could he or she claim the truth of that system? It would be just their own pet project – their truth, an oxymoron.
Wittgenstein attempted to construct just such a logical system, coming to the conclusion that the entire category of the ‘meaning’ of a term is something that cannot be proven, but can only be shown through the formal representation of statements.
He reaches this conclusion by demonstrating that the meaning of terms cannot be verified by any rigorous method – but what he really means is the methods and assumptions of the positivists. That which is meaningful (i.e., verifiable) is on display in his exhaustive account of logic. The rest can only be shown.
He has enough sense to recognize that quite literally everything else is what anyone with their feet on the ground would refer to as meaningful.
“6.52 – We feel that even when all possible scientific questions have been answered, the problems of life remain completely untouched. Of course there are no questions left, and this itself is the answer.

6.521 – The solution of the problem of life is seen in the vanishing of the problem. (Is not this the reason why those who have found after a long period of doubt that the sense of life became clear to them have then been unable to say what constituted that sense?)

6.522 – There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical.

7 – What we cannot speak about we must pass over in silence.”

Ludwig Wittgenstein, Tractatus Logico-Philosophicus, trans. David Pears and Brian McGuinness, Routledge Classics (London; New York: Routledge, 2001), 89.
Gödel had decidedly different views from Wittgenstein about the nature of mathematics and logic, but reached a conclusion that similarly undermined the logical positivists’ program.
Recall that logically consistent systems are the goal, made up of axioms, rules of inference, and theorems. Axioms are self-evident truths that are intuited. All other ‘meanings’ of terms in the system are purged, and everything is defined in terms of the rules of combining primitive elements in the system.
Ordinarily, a symbol both serves as a variable and carries content – its properties; in such a system, only its formal role remains.
The system is therefore supposed to be an entirely formal set of processes. Results (combinations of elements) are obtained by recursive functions (the repeated application of a rule), working up the chain from the axioms. This is what an algorithm is – a sequence of operations that tells you what to do at each step.
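The idea that a statement like 1+1=2 can fall out of rules applied recursively can be sketched in a few lines. This is a toy in the spirit of Peano-style formal arithmetic – my own sketch, not any system the positivists actually built: numerals are generated from a primitive zero by a successor rule, and addition is defined by two rewrite rules, at no point appealing to what the numerals mean.

```python
# Arithmetic as a purely formal system: terms are built from the
# primitive 'Z' (zero) by the rule S(n) (successor), and addition is
# defined by two rewrite rules applied recursively -- never by
# appealing to what the numerals "mean".

def S(n):
    """Successor: the only way to build a new numeral from an old one."""
    return ("S", n)

Z = "Z"  # the primitive numeral, zero

def add(a, b):
    """Two rules, applied recursively:
       add(a, Z)    = a
       add(a, S(b)) = S(add(a, b))"""
    if b == Z:
        return a
    return S(add(a, b[1]))

two = S(S(Z))
four = add(two, two)
assert four == S(S(S(S(Z))))  # "2 + 2 = 4" falls out of the rules alone
```

Every step here is an application of one of the two stipulated rules – an algorithm in exactly the sense above: a sequence of operations that tells you what to do at each step.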
In sharp contrast to Wittgenstein, Gödel took his proof of the incompleteness of formal systems to be a demonstration of the power of intuition to grasp the reality of pure ideas.
Gödel showed that if a formal system of arithmetic is consistent, then it is possible to construct within it a proposition that is true but unprovable in the system. He did this with two theorems.
The first theorem states that a consistent formal system (rich enough to express arithmetic) will generate statements which can neither be proved nor disproved from within the assumptions of the system. The second is that the consistency of such a system cannot be proved within the assumptions of the system itself.
The technical meaning of inconsistency is that a system yields a logical contradiction. If something follows from the system that cannot be accounted for by its own assumptions, then the system does not do what it sets out to do – derive all truths from axioms that are taken to be fundamentally true.
The proposition in question is true – since it follows from the assumptions of the consistent system – yet unprovable from within the system, since, though generated by it, it is itself a disavowal of the system’s powers. It is a close cousin of the liar’s paradox statement ‘this sentence is false’, though Gödel’s sentence in effect says ‘this sentence is unprovable’.
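The technical device that makes such self-reference possible is Gödel numbering: every formula is encoded as a single natural number, so statements of arithmetic can double as statements about formulas – including, ultimately, about themselves. Here is a miniature sketch; the symbol table and formula are my own toy choices, and Gödel’s actual encoding was far richer:

```python
# Goedel numbering in miniature: each symbol gets a code, and a formula
# is encoded as one integer via prime exponents. Because prime
# factorization is unique, the encoding is reversible: factor the
# number and read the symbol codes back off the exponents.

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}

def primes(n):
    """First n primes, by trial division (fine at this toy scale)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula: str) -> int:
    """Encode a formula as the product of p_i ** code(symbol_i)."""
    g = 1
    for p, ch in zip(primes(len(formula)), formula):
        g *= p ** SYMBOLS[ch]
    return g

g = godel_number("S(0)=S(0)")  # one (large) number stands for the formula
```

Once formulas are numbers, a formula about numbers can be about formulas – which is the opening that Gödel’s self-referential sentence exploits.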
What this means is not that the truth of a formal system is unprovable, but that the proof cannot be carried out from within the assumptions of the system itself. Any such proof must be ad hoc: some further, unconnected statement must be posited in order to demonstrate the truth of the system’s assumptions.
Any formal system – and you might say by extension, scientific theory, philosophy, or worldview – does not contain the means to prove all the truths that are derived from its own assumptions. For this reason, the theories themselves remain incomplete – they generate statements that cannot be proven by the methods and assumptions of the system.
Some have extended Gödel’s conclusions about formal systems to artificial intelligence and the philosophy of mind. On this (much-contested) argument, any attempt to conceive of the mind, or of artificial intelligence, in a mechanistic fashion – as a system with axioms, whose algorithms constitute the operations of the mind or computer – is doomed to failure. This is because – as we saw above – formal systems can always generate statements that follow from their own rules, but which the system nonetheless cannot prove.
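A computational cousin of this argument is Turing’s halting problem – Turing’s result, not Gödel’s, but powered by the same diagonal self-reference. If a perfect halts-checker existed, one could write a program that defeats it by doing the opposite of whatever it predicts about that very program. A sketch of the reasoning (the `halts` oracle is hypothetical by construction):

```python
# Sketch of the diagonal argument: suppose a perfect halting oracle
# `halts(prog, arg)` existed. The program `diagonal` below would then
# be contradictory, so no such oracle can exist.

def halts(prog, arg) -> bool:
    """Hypothetical oracle: True iff prog(arg) eventually halts.
    No real implementation can be correct for all inputs."""
    raise NotImplementedError("no such oracle exists")

def diagonal(prog):
    """Do the opposite of what the oracle predicts about prog run on itself."""
    if halts(prog, prog):
        while True:   # loop forever if the oracle says "halts"
            pass
    return            # halt if the oracle says "loops"

# diagonal(diagonal) halts exactly when the oracle says it doesn't:
# a contradiction, parallel to the self-reference of the Goedel sentence.
```

The structure is the same in both cases: a system powerful enough to describe its own behaviour can be turned against itself.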
Let’s return to the implications for philosophy of science.
What Wittgenstein’s and Gödel’s endeavours show are the following age-old truths.
In order to generate scientific statements and theories that enable prediction and accurate description of the world, we need to be able to verify our claims. However, we can never verify empirical statements, because of the problem of induction. It goes as follows: to know (with deductive certainty) that the sun will rise tomorrow, we make reference to our past experiences of the sun rising, and naturally conclude that it will do so again in the future. This empirical (experiential) proof is fundamentally uncertain, because we lack the experiential means to verify that our past experience will prove a completely reliable guide to the future.
That is why it is important to have a theory built on a foundation of intuitive and necessarily true axioms, from which we can deduce further statements and confirm whether our experience jibes with fundamental truths of logic.
However, Gödel and Wittgenstein showed that we cannot build a logically consistent and complete system out of which verifiable (and therefore meaningful, and worthy of scientific investigation) statements can be derived. The whole edifice upon which the distinction between meaningful and meaningless rests would be rendered, well, meaningless.
To determine what counts as a valid scientific object of study, and the appropriate means of studying it, much more would be needed.