The real roots of morality – part 2

Morality is not just any old topic in psychology but close to our conception of the meaning of life. … So dissecting moral intuitions is no small matter.
— Steven Pinker

Part I promised an emphasis on practical ethics, that is, the application of our metaethical understanding to the very human issues of thriving and survival. Part II looks at morality from that perspective: Can we alter our inborn “moral emotions”? Can we “rein them in” when doing so would be more adaptive?

If there is no one moral code, if there may not even be one way of applying morality across cultures, how are we to cope with the emerging modern world? After all, every day our societies become less and less like the small bands and clan groups that were the norm when our current moral adaptations were shaped. Kin selection and reciprocal altruism still operate, of course, but they are not by themselves adequate to deal with our ever-larger societies. John Teehan addressed this point at length in his book In the Name of God, reviewed here recently.

Worse, thanks to technologies of various kinds, our circle of societal contact increasingly consists of mixtures of very different and often hostile cultures. How is a world society – not at all the same thing as a world state – to remain adaptive, to provide selective advantage, under these conditions?

As society globalizes, some aspects of divergent moral codes become not just ineffective but maladaptive — ever-expanding group size strains the moral unity necessary to community.

Full-blown relativism is inadequate to this task, as Blackburn, among others, has ably demonstrated. Various moral pessimisms, from dialectical materialism to existential withdrawal, don’t work, either: the first allows no individual freedom, while the second has no social component. Both are inadequate to any society that wishes to balance “I” and “we” to mutual benefit.

Some dislike any mention of innate cognitive structures as an attempt to “sneak in” objective morality, but that’s not what I’m after here. I’m not advocating a single, universal moral code, either as an expression of political goals or as a function of Darwinian reductionism.

What I’m after is an expanded awareness that our very survival as a species – our continued selection as adaptationally adequate – depends on understanding where our moral motivations come from, and what they’re for. They’re not for the glory of God, or for the glorification of the Homeland, or for the vindication of my rules over your rules. Our moral motivations are an evolved, natural part of our biology. If they’re “for” anything, it’s community. We are social animals. We have to start with that.

Are we stuck with our instinctive moral reactions? To some extent, yes. But remember the insight offered in Part I, that morality and reason are two different processes of one set of cognitive structures. We can apply reason to emotion, and with enough understanding — of ourselves and of others — we can use reason to moderate the impulses of our moral emotions.

It seems that instinctive, unconscious moral emotions affect us differently in different circumstances, and differently depending on how we conceptualize the moral issues involved. This suggests that we may have the capability, to some degree at least, to alter or refine — to reinterpret — our immediate, automatic reactions.

This is a very encouraging possibility. If our affective reactions can be partly moderated by context, which we apprehend rationally, then our moral sense is not entirely set in stone.

While Jonathan Haidt uses Hume’s analogy of our taste sensors to describe our innate moral categories (harm, fairness, etc.), Joshua Greene instead suggests that morality is like a modern digital camera, with both automatic and manual controls.

Our affective reactions to moral situations are the automatic settings. Just as we can set our camera for built-in configurations like “landscape,” “portrait,” “action,” etc., our moral emotions kick in when faced with the equivalent contexts. We don’t think about moral F-stops, or exposure time, or focus. Our emotions do all that for us. In most situations, the result is transparent — and satisfactory.

But what if, using our fancy camera, we want to bring a building in the deep background into focus, or to over-expose a sunset? We have to go to manual mode and manipulate all the settings ourselves. We may not get every setting just right, but we will control the way the camera “sees” our subject. Greene suggests that this is similar to the process that occurs in moral situations. Most of the time, our automatic, innate emotional reaction is appropriate, and we succeed (or avoid failure, which is the same thing).
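To make the camera analogy concrete, here is a minimal sketch in Python (mine, not Greene’s; the preset table and the deliberation rule are invented purely for illustration) of a dual-process evaluator: cheap automatic presets handle the familiar cases, and a slower manual mode takes over when no preset fits.

```python
# A toy dual-process evaluator, loosely following Greene's camera analogy.
# The presets and the deliberation rule are invented for illustration only.

AUTOMATIC_PRESETS = {
    "stranger_drowning_nearby": "rescue",   # strong, immediate pull
    "kin_in_danger": "protect",
    "cheating_detected": "punish",
}

def automatic_mode(situation):
    """Fast, effortless lookup, like a camera's 'portrait' preset."""
    return AUTOMATIC_PRESETS.get(situation)

def manual_mode(facts):
    """Slow, effortful deliberation: weigh the facts explicitly."""
    # Toy rule: help whenever the benefit to others outweighs the cost to us.
    return "help" if facts["benefit_to_others"] > facts["cost_to_self"] else "abstain"

def decide(situation, facts):
    verdict = automatic_mode(situation)
    if verdict is not None:
        return verdict          # a preset fires: no deliberation needed
    return manual_mode(facts)   # novel case: switch to manual mode

# An evolutionarily unfamiliar case: no preset exists, so we deliberate.
print(decide("distant_famine_donation",
             {"benefit_to_others": 100, "cost_to_self": 1}))  # -> help
```

The structural point is the one Greene makes: the preset table is cheap but fixed, while manual mode is costly but can handle cases the table never anticipated.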

In some cases, we can be convinced by rational reflection to alter or reverse one of the elements of our moral code. We don’t ignore our moral affect — we can’t do that. Instead, we redirect the moral emotion we’re feeling to a different “target.” For example, if an initial, primitive sense of “purity” violation over gay marriage can be turned into a sense of “fairness” violation over the denial of marriage rights to gays, our moral position will change.

Greene goes so far as to say, as alluded to in Part I, that the Kantian idea of moral “rights” is basically a way of articulating the post-hoc reasoning we apply to our automatic emotional reactions to moral stimuli:

I think what we do as lay philosophers, making our cases out in the wider world … is we use our manual mode, we use our reasoning, to rationalize and justify our automatic settings. And I think that, actually, this is the fundamental purpose of the concept when we talk about rights … “a fetus has a right to life,” “a woman has a right to choose,” Iran says they’re not going to give up any of their “nuclear rights,” Israel says, “We have a right to defend ourselves” … I think that rights are actually just a cognitive, manual mode front for our automatic settings. This is obviously a controversial claim. …I think the Kantian tradition [which gives primacy to rights], actually is manual mode.

Bad things can happen when we’re faced with situations for which our “automatic” settings were not designed, circumstances that adaptive selection could not have anticipated, because they weren’t around when our brains were developing.

Greene argues that we face just that kind of problem now:

As a result of … technology and cultural interchange, we’ve got problems that our automatic settings, I think, are very unlikely to be able to handle. And I think this is why we need manual mode.

Greene recalls Peter Singer’s “Armani suit” study, in which participants were asked to judge the moral culpability of the actors in two situations. In the first, a man wearing an expensive Armani suit refused to wade into a shallow pond to save a drowning child. He was judged to be a “moral monster.” In the second case, a man bought an expensive Armani suit rather than donate the money to a charity that would help starving children on the other side of the world. The second man was judged to be rather morally insensitive, but there was much less intensity in that evaluation than in the first. What was happening, of course, was the distinction between action and inaction, and the weakening pull of increasing distance. The more remote the moral imperative, the more passive our personal stake in the situation, the less we feel it, and the less harshly we judge others who don’t feel it strongly.

Greene puts it this way:

Our heartstrings don’t reach all the way to Africa. And so, it just may be a shortcoming of our cognitive design that we feel the pull of people who are in danger right in front of us, at least more than we otherwise would, but not people on the other side of the world.
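One way to picture what Greene describes is as a discount on felt concern. The toy model below (the decay constant and the “directness” factor are invented; nothing here comes from Greene or Singer) shows how identical stakes can produce wildly different emotional pull once distance and passivity enter in.

```python
import math

# Illustrative only: treat the "pull" of a moral emergency as a base
# intensity, discounted by distance and by how face-to-face the refusal
# to help is. All constants are invented for this sketch.

def felt_concern(stakes, distance_km, face_to_face):
    proximity = math.exp(-distance_km / 50.0)  # pull fades quickly with distance
    directness = 1.0 if face_to_face else 0.4  # passive omission feels less culpable
    return stakes * proximity * directness

# Same stakes (a child's life), very different felt intensity:
pond = felt_concern(stakes=100, distance_km=0.01, face_to_face=True)
charity = felt_concern(stakes=100, distance_km=8000, face_to_face=False)
print(f"pond: {pond:.1f}, distant charity: {charity:.2e}")
```

With these (arbitrary) numbers the pond case retains essentially all of its emotional force while the distant case registers as effectively zero, which is exactly the asymmetry the Armani judgments exhibit.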

Our moral emotions were hard-wired when there was no “other side of the world,” when we lived in small groups of family and direct acquaintances. One interesting corollary may be that mass media create a kind of “virtual clan,” such that we respond strongly to earthquakes in Japan or genocide in Darfur not because they are “bad” but because they have been brought close to us, making their victims – at least for a short time – part of our local “in group.” This may also help explain why our attention span for such disasters is so short. Once the next crisis has taken over the media, the former one is no longer local and is quickly forgotten.

It seems that our core problem is that our evolved moral hardware was not intended to handle the challenges of the new world society — the automatic focus controls don’t work with a landscape as broad as the one in which we now live.

Given the importance of the “in-group”/”out-group” distinction, one way of lessening the “culture clash” of the new world society would be to work to enlarge our perception of what constitutes our “in-group.” We can use our rational understanding of the universality of human morality — its forms and motivations, not its code contents — in conjunction with modern media technologies to bring the “outsiders” to the inside. This happens already, of course, in a variety of ways. Cultures spread and values move closer together. This is one reason that we face the challenges we do, but it’s also the best means available for addressing those challenges.

Steven Pinker incorporates several of these concepts in his ideas about how we should approach our modern moral dilemma. Pinker says that first, by interacting with others, we learn to play “non-zero sum games” (which are practiced in some circumstances by some other primates, as well). Second, a “Theory of Mind” consciousness of others as agents similar to ourselves (what Pinker calls “the interchangeability of perspectives”) leads to a universal version of the Golden Rule, which appears in one form or another in moral codes throughout history and around the world.
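For readers who haven’t met the term, a game is non-zero sum when the players’ payoffs don’t have to cancel out, so mutual cooperation can leave everyone better off than mutual defection. A toy “stag hunt” in Python (the payoff numbers are invented for illustration) makes the point:

```python
# A toy "stag hunt", a classic non-zero-sum game. Two hunters each choose
# to hunt stag (cooperate) or hare (go it alone). Payoffs are (hunter1,
# hunter2); the numbers are invented for illustration.
PAYOFFS = {
    ("stag", "stag"): (4, 4),  # cooperation: both share a large prize
    ("stag", "hare"): (0, 2),  # a lone stag-hunter comes home empty-handed
    ("hare", "stag"): (2, 0),
    ("hare", "hare"): (2, 2),  # safe, but jointly worse than cooperating
}

for moves, (p1, p2) in PAYOFFS.items():
    print(f"{moves}: total welfare = {p1 + p2}")
# Totals range from 2 to 8. In a zero-sum game every outcome would sum to
# the same constant; here cooperation actually creates value, which is why
# learning to play such games matters for morality.
```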

An application of the Golden Rule can power what Singer called “the expanding circle,” in which we progress morally by counting an ever-wider variety of people (and sometimes other higher animals) among those who are “like us.” From self, to family, to clan — and wider and wider, until our moral sense encompasses every one of us, everywhere.

– . –

So where does that leave us, in practical terms?

In my view, if we are to survive in the emerging world society, if we are to make the best, adaptationally effective choices in our new moral environment, we must:

(1) understand the “automatic” settings of our moral emotions;

(2) accept that our common humanity lies not in bludgeoning each other until one specific, culture-bound expression of our status as social animals wins out over the others, but in understanding that we are defined as a species by our unique cognition, which encompasses both affect and reason;

(3) use our “manual mode,” our rational brains, to moderate, modify, and where possible overcome our increasingly maladaptive innate moral structures.

John Teehan puts the approach this way:

An evolutionary account of morality leads to a view of morality that is always open to investigation and revision, not because morality is arbitrary but because the social environment in which those values function is dynamic. Morality, to fulfill its function of promoting social cohesion and individual striving, must be responsive to the particularities of its social environment. …

An evolutionary account of morality, while it does deny us the comfort of moral certitude, actually allows us an insight into morality that may open the door to true moral progress.

We’re all in this together, for good or ill. There’s no going back to the “good old days” of small, isolated clan cultures — unless we use our technology either to blow ourselves up or to lay waste to our planet, not solutions most would welcome.

Will we “make it,” as a species? Will we survive long enough to achieve a new adaptation, in which rational recognition of sameness constrains emotional instincts of difference?

I have no idea. But I do know that it’s where we have to go, if we’re going to go anywhere.

——————————————————————

In addition to this posting, the current morality series includes six other recent articles:

How natural is human morality?
Two views of nativist morality
Jesse Prinz: rejecting moral nativism
The moral irrelevance of indifferent relativism
God in the service of society
The real roots of morality – part 1


3 thoughts on “The real roots of morality – part 2”

  1. If morality is learned like language, it should include a time element. In early years rules and examples in the local culture are absorbed relatively unconsciously, but after about age eleven the rules of behaviour come under rational scrutiny. A young child absorbs prejudices without knowing reasons; the older child accepts or rejects these on a more rational basis.

    • You recall that Pinker wrote: “All this brings us to a theory of how the moral sense can be universal and variable at the same time. The five moral spheres are universal, a legacy of evolution. But how they are ranked in importance, and which is brought in to moralize which area of social life – sex, government, commerce, religion, diet and so on – depends on the culture.”

      The rational older child may pass judgement on learned morality, but is everyone capable of the emotional distance necessary to the task?

  2. Let’s say that rational self-interest, at least, should prevail. Everyone should have at least some common interest in saving the planet, unless they are in complete denial.
