As we saw last time, Michael Ruse claims that human morality is a product of natural selection, a successful adaptation, connected to our being social animals.
If our ethical sense is not objective, is it the result of evolution alone, as some strongly nativist neuropsychologists think, or is it the result of cultural learning alone, as relativists believe? Some of us think that it is a Gordian knot-like mixture of both — a culture-specific expression of evolved, universal mental states — but there are writers who emphasize one position to the exclusion of the other.
Two recent articles highlight the differences in these viewpoints. The first acknowledges the importance of culture while emphasizing natural selection, while the second argues that it is culture alone that shapes our morality. It will come as no surprise to regular readers of this blog that I am more in sympathy with the first view than the second.
In “Supremacy of a Social Network,” published in the New York Times on March 14th, Nicholas Wade reports on recent research into what distinguishes human social groups from the equivalent groupings among other primates:
The two principal traits that underlie the human evolutionary success, according to [Dr. Kim] Hill, are the unusual ability of nonrelatives to cooperate — in almost all other species, only closely related individuals will help each other — and social learning, the ability to copy and learn from what others are doing.
The key question is how such a social context arose:
How did a chimplike society ever give rise to the egalitarian, largely monogamous structure of hunter-gatherer groups?
One researcher believes that the key elements of the adaptation arose separately:
Dr. Chapais sees the transition as a series of accidents, each of which let natural selection exploit new opportunities. Early humans began to walk on two legs because it was a more efficient way of getting around than knuckle-walking, the chimps’ method. But that happened to leave the hands free. Now they could gesture, or make tools.
What did toolmaking add to the social mix? According to Dr. Chapais, the invention of weapons made rival males relatively equal, upping the risk to dominant males who did not learn to cooperate. The result was a more complex, and more familiar, society:
In the incipient hominid society, females became allocated to males more equally. General polygyny became the rule, then general monogamy.
With the rise of the pair bond, human children gained both physiologically and socially:
On the physiological level, having two parents around allowed the infants to be dependent for longer, a requirement for continued brain growth after birth. …
On the social level, the presence of both parents revealed the genealogical structure of the family, which is at least half hidden in chimp societies. A chimp knows who its mother and siblings are, because it grows up with them, but not its father or father’s relatives. … The neighboring males were no longer foes to be killed on sight — they were the in-laws.
According to Michael Tomasello, a new social structure spurred the development of different social behaviors: “I personally am hung up on cooperation as being what really differentiates humans from nonhuman apes.” Dr. Tomasello writes in his recent book, Why We Cooperate:
Humans were put under some kind of collective pressure to collaborate in their gathering of food — they became obligate collaborators — in a way that their closest primate relatives were not.
If these conjectures are true, then we have the psycho-social environment for learning a moral code; that is, thanks to natural selection we have a way of chronicling and regularizing the cooperative relationships among the members of human groups. The specifics of a moral code are not products of evolutionary natural selection — they have to be learned — but the foundation of whatever code we learn is born with us.
In the March/April 2011 issue of Philosophy Now, Jesse Prinz argues that “Morality is a Culturally Conditioned Response.” In his thoroughly relativist view, there is no room for either objective morality (the refuge of believers) or naturally selected morality (favored by evolutionists).
Prinz starts with a statement that what’s wrong can also be right, what’s right can also be wrong, and this moral morass is all right with him:
Suppose you have a moral disagreement with someone … In pursuing this debate, you assume that you are correct about the issue and that your conversation partner is mistaken. Your conversation partner assumes that you are making the blunder. In other words, you both assume that only one of you can be correct. Relativists reject this assumption. They believe that conflicting moral beliefs can both be true. The staunch socialist and righteous royalist are equally right; they just occupy different moral worldviews.
With coy accuracy, Prinz admits that “Relativism has been widely criticized. It is attacked as being sophomoric, pernicious, and even incoherent.” He counters that efforts to “identify objective values” have failed (as if the subjective-objective debate were the entire scope of the disagreements among moral philosophers).
The core of Prinz’s argument is that there is great and undeniable variation in the moral codes of different societies:
Some groups prohibit attacks on the hut next door, but encourage attacks on the village next door. Some groups encourage parents to commit selective infanticide, to use corporal punishment on children, or force them into physical labor or sexual slavery. Such variation cries out for explanation. If morality were objective, shouldn’t we see greater consensus?
Prinz suggests two primary ways that children acquire moral codes, based on the fundamental insight, confirmed in one way or another by much contemporary brain research, that morality is dependent on emotion:
Moral variation is best explained by assuming that morality, unlike science, is not based on reason or observation. What, then, is morality based on? … Moral education begins from the start, as parents correct these antisocial behaviors, and they usually do so by conditioning children’s emotions. … Children also learn by emotional osmosis. … Consummate imitators, children internalize the feelings expressed by their parents, and, when they are a bit older, their peers.
Prinz cites Jonathan Haidt’s research showing that moral judgments spring from emotions first and are only later, if at all, explained (or rationalized, if you’re less accommodating) by reason.
Prinz rejects moral nativism in all of its forms. His conclusion is that at its base morality is not structural but “sentimental”:
If this picture is right, we have a set of emotionally conditioned basic values, and a capacity for reasoning, which allows us to extend these values to new cases.
Next time, we’ll look at the ideas of Jesse Prinz more closely. And as our short series on moral philosophy continues, we’ll compare the “moral sense” of Prinz with the moral theory of Simon Blackburn.