Statistical science and the problem of causality

In a recent article posted by Wired, Jonah Lehrer argues that reductionism as a research methodology has a serious, often fatal weakness — knowing how all of the parts of a system work isn’t the same thing as understanding how the system itself operates, or how it interacts with other complex systems.

In “Trials and Errors: Why Science Is Failing Us,” Lehrer uses failed tests of promising pharmaceuticals as examples of how knowledge of the elements of a process doesn’t lead automatically to an understanding of how those elements interact.

In one study, a new cholesterol drug was developed by applying our detailed knowledge of the chemical processes that produce cholesterol. The new compound efficiently reduced LDL (“bad” cholesterol) and increased HDL (“good” cholesterol). This should have led to a reduction in heart disease among the test subjects. Instead, the rate of heart disease increased. Something in the way the new drug interacted with other complex chemical processes in the body was causing negative outcomes that were not predictable solely from a knowledge of the chain of molecular reactions that describe cholesterol production.

Lehrer writes that “we believe that the so-called problem of causation can be cured by more information, by our ceaseless accumulation of facts. … By breaking down a process, we can see how everything fits together; the complex mystery is distilled into a list of ingredients.”

The problem with this approach, Lehrer argues, is that “causes are a strange kind of knowledge.”

This was first pointed out by David Hume, the 18th-century Scottish philosopher. Hume realized that, although people talk about causes as if they are real facts—tangible things that can be discovered—they’re actually not at all factual. Instead, Hume said, every cause is just a slippery story, a catchy conjecture, a “lively conception produced by habit.”

In short, causes don’t exist in nature; they are narratives we invent to explain nature to ourselves: “We look at X and then at Y, and invent a story about what happened in between. We can measure facts, but a cause is not a fact—it’s a fiction that helps us make sense of facts.”

Calling this contrast “a fundamental mismatch between how the world works and how we think about the world,” Lehrer suggests that this natural mental strategy of narrative-building is usually an effective way of explaining the world to ourselves. But not always. He writes that our stories about causation can easily “go from being slickly efficient to outright misleading.”

Lehrer argues that this narrative of causation can be an attractive error when scientists apply statistical analyses of the kinds typically employed in testing new drug therapies. A correlation isn’t a cause, but it’s often hard not to be influenced by our mental bias toward seeing causes. When a correlation is found, the “natural” step is to assume that there’s a causal relationship behind the statistical relationship.

Lehrer writes that “the reliance on correlations has entered an age of diminishing returns.” Not only have all of the “obvious” causes been found, but — more important — “searching for correlations is a terrible way of dealing with the primary subject of much modern research: those complex networks at the center of life.”

The reductionist approach, which lies at the heart of the scientific method, has been the great engine of our factual knowledge about the world. This has led us to assume that we can apply reductionist methodologies to all scientific problems. However, Lehrer warns, “Even when a system is dissected into its basic parts, those parts are still influenced by a whirligig of forces we can’t understand or haven’t considered or don’t think matter.”

Lehrer concludes by restating his core idea that “a cause is not a fact, and it never will be.” For this reason, he writes, “the things we can see will always be bracketed by what we cannot. And this is why, even when we know everything about everything, we’ll still be telling stories about why it happened. It’s mystery all the way down.”

Now, while I agree — how could I not? — that there are limitations to what we can know, and even more significant limits to what we can understand about what we know, I’m not sure where, if anywhere, Lehrer wants us to go with his insight.

If his article were nothing more than a cautionary tale about the limits of current scientific methodologies, I couldn’t find anything to dispute in his simplified summary of the difference between fact and interpretation. A complete description of the individual interactions of physical parts isn’t the same thing as an explanation of all the ways that those single interactions combine into complex systems.

Fluid dynamics, chaos theory, the butterfly effect — call it what you like, but just about everyone agrees that very complex interactions are beyond our ability to predict, much less control.

And having a complete parts list for a space shuttle tells you nothing direct about the theory of gravity, much less the purposes of space travel. For a silly but perhaps clarifying example, could you predict “American Idol” from a complete inventory of the parts of a television camera?

No problem, so far. However, if the end of his piece implicitly claims that there are things we’ll never know because they are in some way inherently unknowable, “mystery all the way down,” and not just unknown — incapable of being understood, and not just unexplained — then I would have some serious problems with his conclusion.

I don’t see anything else in “Trials and Errors” that suggests such a metaphysical claim, so I’ll just have to attribute any slight unease to my over-sensitivity about stealth introductions of the supernatural.


One thought on “Statistical science and the problem of causality”

  1. Isn’t there also a tendency here to say that because reductionism isn’t explaining complex systems, complex systems must automatically be more than the sum of their parts, and therefore there is potential for a God?
