(Philosophy of Science series #2)
This is the second instalment in the Philosophy of Science series. In the links below, you can find the series overview, and parts 1 and 2 of the first instalment on Methodology and Subject Matter:
It is a common saying that ‘correlation does not equal causation’, but how often is that dictum adhered to in practice? A fair amount of science claims to have explanatory status when it is in fact rooted in correlational findings, not causal ones. What is the difference, and why does it matter?
Though higher levels of ice cream consumption and purchases of fans are both correlated with higher temperatures, the two are not causally related to each other; they merely share a common cause.
In short, to establish a causal relationship between entities, we require more than common factors and proximal spatial and temporal relationships between them.
In this piece, I’ll look at the key concepts involved in defining and inferring causation.
What’s in a Cause?
Generally, the concept of causation is considered to involve both contiguity and antecedence: the cause must be contiguous with the effect in space and time (sharing a common border with it), and it must precede the effect in time.
We can see this readily in any example, ordinary or technical. I can say that the hockey stick was the cause of a goal because the stick’s contact with the puck preceded the puck’s entering the net, and there was contact somewhere along the chain of events.
David Hume’s (1711-1776) writing on causation was influential in setting the stage for the tug of war between competing understandings of the concept that has marked the modern era. His account was a significant step in the overturning of the Aristotelian account of causation that held that a cause has a necessary connection to its effect – i.e., when the cause is present, the effect must follow – and that this connection is demonstrable through classical logic (i.e., entailed by the properties of the terms in an argument).
By contrast, he argued that a cause could be no more than an observed regularity between phenomena, established chiefly through empirical demonstration. In the Enquiry, Hume offers two definitions: a cause is “an object, followed by another, and where all the objects similar to the first are followed by objects similar to the second. Or in other words where, if the first object had not been, the second never had existed.”
In these two definitions, Hume notes that the connection between cause and effect must hold for all similar circumstances; it would hardly represent the discovery of a general causal law if contact between stick and puck did not generate similar effects in similar cases. He also adds a ‘counterfactual’ element to the definition – a ‘what if’ criterion that allows us to ask whether the effect would have occurred had the cause been absent. By posing this question, we can ask whether we have accounted for other candidate explanations of the effect.
The necessity that Hume refers to in his definition is different from that of the older view. His necessity is only provisional. For Hume, we only ever have experiential knowledge of the associated phenomena that we imagine to exist in a necessary causal connection to one another. We never experience anything like a ‘cause’, just consecutive events, because we never ‘experience’ a mechanism that connects them; we only describe the relations between them.
Immanuel Kant (1724-1804), and many others were troubled by this from the get go, for it seemed absurd to conclude that a cause could not be derived from experience. There would seem to be no rational basis for science, for it could never be shown that a cause is anything more than observed, patterned regularity. Kant would argue that causation is a necessary form of experience, contributed by the mind in a way that structures perceptual reality, and thereby makes it coherent.
Through this notion – that the mind projects its contents onto the world of experience, rather than receiving pure information from it – Kant’s way of thinking has significantly impacted our common sense understanding about the reality of objects. In turn, this has changed our understanding of concepts themselves, even the ones we use to describe what on first blush appear to be very straightforward and observable, such as a material cause and effect. Kant’s ‘Copernican revolution’ contributed to the now common belief that concepts are themselves just names we give to collections of like experience.
In the 20th century, the field of metaphysics was on the ropes, and many of the brainiest philosophers were interested in the project of ‘naturalizing’ philosophy. Though there are many forms of this endeavour, in some way it entails the redefinition of almost all of our concepts and properties – both empirical and logical – into statements about the causal relations that hold between things. In its various forms, it is thus a claim about the nature of reality, and how we come to know it. All of our concepts can be boiled down to something like ‘natural kinds’ (water as H2O, or gold as the element with atomic number 79, for example) that are explained by their fundamental parts, and the way in which they causally relate to one another. The way in which we know can be reduced to a causal account of the acquisition of beliefs and the justifications we provide for them.
David Lewis (1941-2001) is an important figure for our purposes because he further emphasized the role of counterfactual reasoning in establishing causal relationships.
He reasoned that the only definite way to establish a causal relationship is by ruling out alternative explanations. On Hume’s account, we can never be sure that the regularity we observe contains all of the information we need to understand the cause-effect relationship: we have no way of ruling out a hidden variable, or of knowing whether we are attributing a general, law-like causal relationship between things when in fact it is conditional on certain common features of the environment in which we have been conducting the observations. By posing logically rigorous ‘what if’ questions about possible states of affairs, we can say something more meaningful about the properties of entities that exist in the world, and the events with causal force that make a difference to them.
On this account, in order to say that a causal relationship exists in a sense that is transferable to other similar situations, we have to be able to affirm the counterfactual ‘If A had not occurred, C would not have occurred.’
This sort of method is useful in historical explanation. We can ask – if so and so hadn’t been born, or such and such event hadn’t happened, would things have been much different? By reasoning this way, we can approximate the causal impact of a particular event, in relation to some effect that is to be explained by it.
Statistics and the Challenge of Causal Inference
Statistical explanations describe the strength of correlations between entities and events (variables). The extent to which correlated phenomena co-vary – how much change in one variable is associated with a corresponding kind and amount of change in the other – is a key to understanding the potential causal strength of their relationship.
Though statistical explanations might sometimes appear causal, they remain correlational in the absence of additional assumptions.
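To make the ice-cream-and-fans point concrete, here is a minimal simulation (in Python, with made-up numbers) of a common-cause structure: temperature drives both variables, which never influence one another, yet they come out strongly correlated.

```python
import random

random.seed(0)

# Illustrative simulation (assumed numbers): daily temperature drives both
# ice cream sales and fan purchases; neither affects the other directly.
temps = [random.gauss(20, 8) for _ in range(1000)]
ice_cream = [2.0 * t + random.gauss(0, 4) for t in temps]
fans = [1.5 * t + random.gauss(0, 4) for t in temps]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(ice_cream, fans)  # strongly positive, despite no direct causal link
```

The high correlation here is produced entirely by the shared cause; remove the temperature term from either line and it vanishes.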
It is intuitively confusing that during a boardgame lasting hours, you may never roll a particular combination of the dice. This seems unlikely, though on closer inspection it may not be. The laws of probability are not generalized statements that describe causal relationships per se. Rather, they express the range of possible outcomes that may arise in a particular situation, given assumptions about the factors that can influence those outcomes.
When you throw a die, the chance that you will roll a particular number is 1/6. But, if the die is rolled 20, 30, 40, or 50 times, the likelihood that you won’t roll a given number decreases considerably. This is not because the way in which the roll of a die is related to a particular outcome has changed. Rather, the assumptions about the question we want to answer, and the circumstances that affect it have been altered.
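A quick sketch of the arithmetic behind this (assuming a fair six-sided die): the chance of never seeing a given face in n independent rolls is (5/6)^n, which shrinks rapidly.

```python
from fractions import Fraction

def p_face_never_appears(n_rolls, sides=6):
    """Probability that one particular face never shows up in n_rolls of a fair die."""
    return Fraction(sides - 1, sides) ** n_rolls

print(float(p_face_never_appears(1)))   # ≈ 0.833
print(float(p_face_never_appears(20)))  # ≈ 0.026
print(float(p_face_never_appears(50)))  # ≈ 0.0001
```

Nothing about any individual roll has changed between these lines; only the question being asked of the situation has.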
A statement like ‘there is a 50% chance that X’ is only true relative to prior assumptions, which may be only partially known.
Abstract, general statistical and correlational reasoning is therefore always inadequate for describing the particular features of a given situation and the causes at play in it; these can only be inferred from past and analogous experience.
In other words, statistical or correlational statements do not support causal conclusions on their own – even though, since a correlation is necessarily present in every causal relationship, they will sometimes happen to track one.
For our purposes then, statistical methods are tools used to approximate the strength of the relationship between variables that are candidates for causal explanation. They are tools to use in the search for a cause, but do not provide the blueprint itself.
Getting to the Bottom of Probability – The Quantification of Prior Assumptions
Bayesian techniques have simplified the search for the right assumptions by assigning probabilities to outcomes under a given set of circumstances.
Bayesian inference relates the probability of an event to a given set of circumstances. By outlining the relevant features of the initial circumstances that can affect the likelihood of some future event after a cause is introduced, Bayesian reasoning allows us to pose better questions.
Bayes’ theorem gives the rule for the update: Pr(A|B) = Pr(B|A) × Pr(A) / Pr(B), where:
-Pr(A|B) is the conditional probability of A on B, or the probability of A given B, or the probability that A is true if B is true
-Pr(B|A) is the probability of B given A
-Pr(A) is the prior probability of A
-Pr(B) is the prior probability of B
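As a minimal sketch of how these quantities combine (the numbers are hypothetical, not from the article): applying Bayes’ theorem to a 90%-sensitive, 95%-specific diagnostic test for a condition with 1% prevalence yields a surprisingly modest posterior, precisely because the prior assumption of low prevalence dominates.

```python
def posterior(prior_a, p_b_given_a, p_b_given_not_a):
    """Pr(A|B) by Bayes' theorem, with Pr(B) expanded via total probability."""
    p_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
    return p_b_given_a * prior_a / p_b

# Hypothetical test: 1% prevalence, 90% sensitivity,
# 5% false-positive rate (95% specificity).
p = posterior(0.01, 0.90, 0.05)  # ≈ 0.154 – most positives are false positives
```

Change the prior and the same evidence yields a very different posterior – which is the point about assumptions made above.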
The discipline of statistics has long taken the regularity view to be paradigmatic; it is, after all, the field in which the phrase ‘correlation ≠ causation’ originated.
Despite the foregoing, over the latter half of the 20th century, computer scientists such as Judea Pearl developed an algebraic understanding of causation through the use of causal diagrams. These models chart the hypothesized process from cause to effect, taking into consideration as many relevant variables as possible, so as to separate causation from correlation. Through such methods, researchers are able to assign weights to the causal influence contributed by each node in the process.
We all know that there are many causal and temporal steps between a gene and its phenotypic representation, for example. A causal diagram allows for the mapping of the hypothesized causal nodes in a diagram across space and time.
The interesting part is the addition of a counterfactual question, represented algebraically and quantified. If we want to know the effect of broccoli consumption on cholesterol, we need to account for all of the other hidden causes that distort the data in studies of broccoli consumption and cholesterol – for example, other diet choices, exercise, genetics, and other lifestyle factors.
The causal diagram allows researchers to quantify an unknown variable by representing it algebraically.
Through careful manipulations, modellers are able to eliminate spurious correlations from the model, and assign causal strength to each node in the diagram.
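A small illustration of this kind of adjustment (with invented probabilities – not Pearl’s own example): given a single binary confounder Z (say, regular exercise) sitting behind broccoli consumption X and healthy cholesterol Y, Pearl’s back-door adjustment formula, P(Y | do(X=x)) = Σz P(Y | X=x, Z=z) P(Z=z), strips out Z’s spurious contribution to the observed correlation.

```python
# Assumed toy distribution: Z = regular exercise, X = broccoli consumption,
# Y = healthy cholesterol. All probabilities are invented for illustration.
p_z = {0: 0.6, 1: 0.4}            # P(Z = z)
p_y1_given_xz = {                 # P(Y = 1 | X = x, Z = z)
    (0, 0): 0.30, (0, 1): 0.60,
    (1, 0): 0.40, (1, 1): 0.70,
}

def p_y1_do_x(x):
    """Back-door adjustment: P(Y=1 | do(X=x)) = sum over z of P(Y=1|x,z) * P(z)."""
    return sum(p_y1_given_xz[(x, z)] * p_z[z] for z in p_z)

avg_causal_effect = p_y1_do_x(1) - p_y1_do_x(0)  # 0.52 - 0.42 = 0.10
```

Averaging over Z in this way blocks the back-door path X ← Z → Y; a naive comparison of P(Y | X) would mix the causal effect with Z’s influence.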
On this account, then, a cause is not only an observed regularity, nor only the difference maker without which the effect could not have been; it combines both.
Causal diagrams are a kind of tool – plug in the variables and your question, and out come probabilities that describe the likelihood of each node playing a causal role on the effect.
This model shows a clever way of mapping the many pathways involved in moving down complex causal chains from A to Z, while trying to account for all of the factors that enter along the way and end up confounding explanations.
In terms of what it means for our understanding of causation itself, though, it may not add much to what we have already identified as important. For even after ruling out confounding variables and answering counterfactual questions about what might have been had things been different, we can never be sure that we have identified a cause. To do so, we would have to manipulate – or approximate the manipulation of – all of the possible circumstances of the cause-effect relationship, to ensure that every possible hidden cause is controlled for. Such an endeavour involves charting and quantifying the structural relationships that hold between all nodes in the experimental environment, in an attempt to capture the full picture from cause to effect.
The Search for a Mechanism
There is a difference between describing the stages in a sequence and explaining the process that underlies them.
A causal model, such as the one outlined above, can be applied as a framework to any observational level of investigation. A model may attempt to enumerate all of the factors in the relevant area of inquiry that could play a role in the cause-effect process, and assign weights to the causal force applied at each level. However, when we look for an explanation of the process inherent to the causal activity at each node, we are left with the further attempt to reduce it to yet a smaller, more fundamental level of explanation.
This is where the concept of mechanism becomes all-important. There have been many ways of defining mechanism, and there will likely be many more, each taking something from the context from which it is drawn or to which it is applied. Looking back at the history of materialist philosophy, we see atoms, corpuscles, energy, fundamental particles, and so on, proposed as the elemental building blocks of a deterministic process.
Yet in the main, a mechanism involves a structure that, through its organization and stability through time, performs a function. At a bare minimum, the concept involves change from some state of affairs to a new state through a process enacted by, or through, the mechanism.
Though this seems like a necessary step to find an answer to an empirical question about causation, it is not so straightforward.
The idea of causation implies that there is some state of affairs A that is altered by a causal force to produce a new state of affairs B. The nature of this process reveals that we must posit the existence of entities in order to speak meaningfully about causation.
We cannot avoid the question by assuming that there are no entities in the world other than fundamental, static elements – algorithms, fundamental particles, quarks, gluons, etc. Nor can we assume that those elements are simply in motion, and that this motion is the cause of all the variation we see.
These views end up denying all of reality except for the ultimate level, which is either in motion, or held static.
Neither position can be true, for either horn of the dilemma presents an insurmountable challenge.
If there is only one entity with causal force in the world, then change is either an illusion, or it is the only constant.
If there is only one entity in the world, then there could be no change, because there would be nothing to alter, and nothing to alter anything into. Just stasis. However, things change all the time, so there cannot be simply stasis.
If there is only one kind of entity in the world, it could also be argued that it is constantly in flux, and that flux is thus the only constant; there is only continuous alteration of the one substance, through combination or motion, and thus nothing stable. However, things change into and out of being all the time, and are held together by their form.
It could be postulated that it is the organization of whatever fundamental material building blocks we currently take to be most basic that gives something its structure, and that change can therefore still occur – things passing in and out of ultimately vague and undefinable objects. This is perhaps closest to the contemporary view.
But this too does not seem to provide a solution. For there seem to be infinitely many ways in which we could classify how the fundamental building blocks of reality are organized – implying that there are no distinct entities, just the illusory perception that there are, a by-product of our system of classification. Where we draw the line between a hand and the desk it is lying on is ultimately an arbitrary way of carving up fundamental building blocks that are the same in the hand as in the desk. We are once again back at square one.
Causation – a Metaphysical Concept that Requires Form and Matter, Act and Potency
A sensible way to account for the fact that change occurs is to posit that there is both something that stays the same, and something that changes through events in the world. Aristotle thought of reality as being – that which is. Being can exist either as actuality, or as potential to manifest some other nature (act and potency).
When something undergoes a change, part of it remains the same, and part of it changes. Something actual causes the change to occur, bringing about that which was only latent in the object, as potential. We might describe these potential features as ‘virtually’ but not actually present until they are brought out by an actualizing cause. The range of persistent possible changes that a thing can undergo is what makes up its essence or nature.
Quantum reality is famously indeterministic at some level – particles are not in a definite state at any given time, and their future state is unpredictable and undetermined until it becomes manifest through measurement of some sort. Furthermore, even the fundamental particles are themselves described and defined as the potential-for-x, though in a quantified way.
This is very much like Aristotle’s metaphysics, which classifies both matter and form, actuality and potentiality, as real features of the world. Werner Heisenberg, a pioneer of quantum mechanics, figured as much.
Heisenberg noted that Aristotle’s concept of ‘prime matter’ is akin to the concept of ‘energy’ in physics. Prime matter is unactualized potential that takes on actual existence when something causes it to, whereupon it takes on a specific form, distinct from all of the potential forms it might have taken and holds ‘virtually’ – in the quantum realm of ‘prime matter’, we may speculate.
To illustrate the point, a seed is potentially a tree because of its essential nature, but at any given time, it is only ever actually some particular manifestation of the range of what it could be potentially.
The move between the quantum realm of undetermined possibility and the everyday world of classical physics that we inhabit from moment to moment can be interpreted as reflecting the major divisions of being in Aristotelian and Thomistic philosophy – being in potential, and being in actuality.
Rather than claim to weigh in on the scientific elements of quantum and classical physics, it is useful to ask – what kind of answer would be needed to settle our initial question about the nature of causation?
The simple answer would be – it is a philosophical question, and so demands a philosophical answer. I would think that no satisfactory answer can be given that does not incorporate both empirical and logical considerations. For, there can be no empirical answers without recourse to the principles of logic, which are certainly not empirical in nature. Kant’s dictum – “thoughts without content are empty, and intuitions without concepts are blind” – is as true today as it was when he uttered it, and seems likely to remain so.
I think this is really the crux of the matter, in fact. As mentioned in the first instalment, a significant amount of attention in 20th century Analytic philosophy of language and science was devoted to the project of attempting to make the foundations of logic empirical. That is to say, to give a definition of logical principles – such as the principle of non-contradiction, the law of identity (A=A), and so on – a grounding in observational statements about the nature of reality as we can confirm it by appeal to our experience.
This began with mathematically inclined philosophers like Bertrand Russell, and ramped up with Ludwig Wittgenstein’s attempt to show that language – and therefore logical principles – was made up of elementary building blocks giving a pictorial representation of the empirical reality they represent.
W.V.O. Quine was very much on this path, but he ended up arguing that a clear distinction between a priori and a posteriori truths cannot be made, for the meaning of words – and thereby logical notation – is derived neither from the purely abstract categories of the mind nor from sense experience alone, but from the whole system of assumptions and interrelations that comprise a language.
Many have taken the findings of these philosophers to mean that metaphysical questions are very dubious indeed. I tend to think that it is a conclusion you would reach if you were trying to smell a sight, or as William Lane Craig likes to say, trying to find wood with a metal detector.
Science identifies the vehicles of change (change agents), but it does not answer the deeper question of what it is for one thing to cause another, what it is for one thing to change into another, and so on. These are metaphysical questions of a philosophical nature – but altogether objective, nonetheless.
The traditional problems of the metaphysics of causation seem to me to suggest, at the very least, the inevitability of doing metaphysics – in everyday contexts as much as in theoretical and applied scientific ones. At the very most, the logical, common-sensical, and most rigorous empirical findings suggest that there are real entities, irreducible to forces, relations, or elementary particles as we find them in the world. They are composites of both form and matter, defined by the range of potential latent in their physical makeup – taken together, their essence and purpose.
 “An Enquiry Concerning Human Understanding” by David Hume, in John Locke, George Berkeley, and David Hume, eds., The Empiricists (New York: Anchor Books/Doubleday, 1990), 362.
 “Naturalism; or, Living Within One’s Means,” in W. V. Quine, Quintessence: Basic Readings from the Philosophy of W. V. Quine, ed. Roger F. Gibson (Cambridge, MA: Belknap Press of Harvard University Press, 2008), 276.
 Judea Pearl and Dana Mackenzie, The Book of Why: The New Science of Cause and Effect (New York: Basic Books, 2019), 228.
 For the classic arguments from change and limitation, see the first two books of Aristotle’s Physics. Aristotle and Richard McKeon, The Basic Works of Aristotle, The Modern Library Classics (New York: Modern Library, 2001).