The universal need to achieve social stability guarantees
that some system of moral rules will be devised.
– Jesse Prinz
At the end of our series on moral psychology, it’s time to bring together the research and ideas of the writers we’ve featured – and of some others who’ve been in the background – and to compose the best available summary of what we know about human morality.
In ethics, a large part of “best” is always also “most useful,” and there is a decided preference here for a view of morality that yields insights for practical ethics: How should we be trying to get along in the emerging world society? That topic will be addressed more directly in Part II.
For now, what do we know about the roots of human morality? In evolutionary terms, what moral viewpoint, what ethical strategy, is likely to have the most adaptational value?
The starting point of this discussion is that there are no revealed commandments, and there are no objective moral truths. There is, rather, a complex of culturally derived moral codes that developed as adaptational strategies from universal cognitive capacities.
Given these core assumptions, all the rest is open to debate. We have one human nature; that nature has been shaped through natural selection; all of our cultures, including our moral cultures, are expressions of our cognitive structures and their adaptations. Precisely how this works is the question at hand.
– . –
Any rational and social species will evolve some sort of pro-cohesion morality, though not necessarily the same rules – or even the same categories and criteria – that might have evolved under different initial conditions or as a result of different random accidents. If this is true, then while moralities may differ, and do differ, having a moral strategy for achieving and maintaining community is universal.
One key insight here is that it’s not only specific moral codes that may differ; even the kinds of emotional cognition triggered in the same circumstances may vary among cultures. Studies consistently show that, using Jonathan Haidt’s terms, Western cultures stress “harm” and “fairness” in moral situations, while more traditional cultures stress “community” and “purity.” (For reasons of space, throughout this article I will either refer to the research just this briefly, or omit it entirely. There are many links in the articles in this series, and there are more in this article.)
This insight doesn’t mean that cultural relativism is right in the extreme sense that “culture is everything.” Culture is everywhere, but as Roy F. Baumeister puts it, “culture is humankind’s evolutionary strategy.” Steven Pinker puts the core idea this way:
All this brings us to a theory of how the moral sense can be universal and variable at the same time. The five moral spheres are universal, a legacy of evolution. But how they are ranked in importance, and which is brought in to moralize which area of social life – sex, government, commerce, religion, diet and so on – depends on the culture.
And Jonathan Haidt puts it this way: “Virtues are socially constructed and socially learned, but these processes are highly prepared and constrained by the evolved mind.”
The last of our fundamental assumptions is that morality is entirely a cognitive process, but that cognition involves both affective and rational activity. Moreover, while rational thought is, by definition, conscious, the vast majority of our affective life is unconscious. This idea has been expressed in a number of ways by different writers, but the central idea – that morality is based on evolved, instinctive emotions – is the assumed centrepiece of the moral positions I take in this article.
We have affectively valenced intuitive reactions to almost everything, particularly to morally relevant stimuli…. I think the crucial contrast is between two kinds of cognition: intuitions (which are fast and usually affectively laden) and reasoning (which is slow, cool, and less motivating).
– . –
In moral situations, it has become quite clear that when reason operates at all, it does so as a response to innate emotional reactions to stimuli. In other words, morality is primarily an affective response. When reason enters in, it does so as an explainer, a justifier. The primacy of emotion in moral activity is an extremely important insight. Hume was generally right, although he didn’t have the means to show how right he was:
Morality is nothing in the abstract nature of things, but is entirely relative to the sentiment or mental taste of each particular being, in the same manner as the distinctions of sweet and bitter, hot and cold arise from the particular feeling of each sense or organ. Moral perceptions, therefore, ought not to be classed with the operations of the understanding, but with the tastes or sentiments.
Despite the different specific focuses of contemporary moral psychologists, there is broad consensus that emotion, affect, intuition – whatever you care to call it – underlies our moral lives. Study after study, from traditional psychological testing to neurological investigations of both typical and atypical brains, shows that we respond to situations first and foremost emotionally. We respond most strongly to those situations that we later label “moral.” In most cases, our emotional reaction is so rapid, so automatic, so unconscious, that we have no time to think before we respond. Emotions like fear, anger, and – most important, it seems – disgust have been termed the “moral emotions,” and when they are present, we respond more strongly, with greater motivation to act, than we do in other circumstances.
If we see someone buying a coffee, we have little if any affective reaction. If we see someone spitting on the sidewalk, we may have a somewhat stronger reaction (depending on our culture). If we see someone purposely strike a defenseless old lady, we react very strongly. In the first case, we are not motivated to act. In the second, we may be motivated to act, more or less, depending on the cultural context. In the last case, our motivation to act, to intervene, will likely be very strong.
We have these feelings, these affective reactions, immediately. If we reflect on them at all, we do so afterward – not as a motivation to act, but as an explanation of our initial reaction. Often, we are not conscious of our reaction.
This lack of awareness may help explain why we often feel that morality is “out there” somewhere, some property of the world itself. If we aren’t aware that we’ve already reacted, then when we begin to reason about our emotional reactions we may deny or underplay the central role of our unconscious affects.
This lack of awareness that our own brains are creating our moral sense – “sense” in the same way as our five physical senses – may lead us to look elsewhere for the source of morality: to reason, to culture, to God.
As Nicholas Humphrey puts it in a somewhat different context in Soul Dust:
[There is] a difference between perception (being able to see an object) and sensation (knowing you can see, having the sensation of seeing) … you can have perception without sensation.
The idea that affective intuition is the evolutionary basis of morality is widely held among contemporary cognitive scientists. One of them, Joshua D. Greene, has written a paper in which “he uses neuroscientific evidence to reinterpret Kantian deontological philosophy as a sophisticated post-hoc justification of our gut feelings about rights and respect for other individuals.”
We’ll see a lot more of Greene in Part II, next time.