A while back, I posted my reaction to the emergence of “experimental philosophy,” a hot new area of study at a few trend-setting New England universities, where philosophers test their theories by conducting psychological experiments.
Now, some psychologists are returning the favour, arguing that persistent philosophical dilemmas can be useful road maps to conflict points in our cognitive systems.
In “Finding faults: How moral dilemmas illuminate cognitive structure,” Harvard’s Joshua D. Greene (who’s been featured here recently) and Fiery Cushman argue that “philosophers’ dilemmas provide a reliable guide toward productive cognitive neuroscience by identifying the contours of distinct psychological processes.”
Dilemmas result from conflict between dissociable psychological processes. When two such processes yield different answers to the same question, that question becomes a “dilemma”. No matter which answer you choose, part of you walks away dissatisfied.
In their journal article, Greene and Cushman write that their goal is “to use case studies of moral judgment to illustrate a more general relationship between philosophy and psychology.” They argue that “because philosophical debate erupts at the fault lines between psychological processes, it can reveal the hidden tectonics of the mind.”
Greene and Cushman review two classic moral dilemmas, the “crying baby” and “runaway trolley” scenarios. In both dilemmas, test subjects must decide not only whether but, more crucially, under what circumstances they would harm one person to protect many. (See the journal article if you aren’t familiar with these tests.)
The relevant upshot is that running these tests in a number of variations, with different variables, indicates fairly clearly that we have two moral systems. Consistent with much of the cognitive psychology featured in this blog, one system is feeling-based, intuitive, and unconscious. This is our primary system for moral judgement. The other system, which involves rational thought, is based on logical and causal calculation. In the authors’ view, when these two moral systems are both evoked by the same situation, they “compete” for dominance. How this competition is resolved determines the moral judgement we favour in a particular case.
The data suggest that cases like the “crying baby” case engage two distinct moral judgment processes. One produces a strong affective response prohibiting harmful actions, perhaps by simulating and responding to the relevant “action plan.” The other appears to rely on the controlled application of a utilitarian decision rule.
However, this resolution is never absolute, and it’s the continuing echoes of the “defeated” moral system that we perceive as a dilemma. And it’s this kind of dilemma with which philosophers struggle, often for centuries.
As the authors note, a conflict between affective and utilitarian approaches to moral judgement continues to be a major topic in moral philosophy. They cite the differences between Kant (affective) and Mill (utilitarian) as one example.
Thus, the authors argue, identifying the moral dilemmas that engage philosophers is one way to identify the conflict points between different cognitive systems, and knowing where these points are can be a fruitful source of topics for further psychological study.
So we have “experimental philosophy,” and now we have what we could call “philosophical psychology.” It bears noting, at least in passing, that Harvard, where Greene and Cushman work, is one of those trend-setting establishment universities at which “x-phi” is emerging. Put a philosopher in the psychology department, and it won’t be long before you’re putting psychologists in the philosophy department. That seems fair.
Of course, there is a larger point here, having to do with the general secularizing trend away from metaphysics, a movement that has been going on in fits and starts for several centuries.
What’s going on here is a fairly subtle undermining of metaphysics. Greene and Cushman’s argument can be construed as characterizing metaphysics as nothing more than experimental neuroscience with inadequate terminology and methodology. There’s nothing wrong with it, you understand, but it’s really just a linguistic marker for the physical processes of which cognition is really composed.
There’s nothing this direct in “Finding Faults,” and I suspect that the authors would disown the preceding paragraph as misrepresenting their position. Theirs is undoubtedly a more collegial, inclusive stance.
However, the fact that they almost certainly didn’t intend this interpretation doesn’t mean it can’t be drawn from the material they present.
And it’s easy enough to grant credence to both disciplines, as the authors do, ceding to moral philosophers the task of identifying areas of hard-to-resolve cognitive conflict, and reserving to psychologists the description of the mechanisms that produce the dilemmas.
As Greene and Cushman point out, this approach would not resolve the moral dilemma.
A philosophical conception of the rational justification of moral action doesn’t do away with how we feel about the conflict. And a neuropsychological depiction of adversarial brain states doesn’t eliminate the clash between what we think we should do and what we feel we should do.
Too bad, isn’t it?