Wednesday, July 30, 2008

SciAm - Between a Rock and a Hard Place: Thinking about Morality

A cool article from a few days back in Scientific American Mind on morality. The research seems to suggest that slightly different mechanisms are involved in utilitarian judgments than in deontological judgments.

Between a Rock and a Hard Place: Thinking about Morality

When we are in a pinch, surprising factors can affect our moral judgments

By Adina Roskies and Walter Sinnott-Armstrong

Cognitive science and moral philosophy might seem like strange bedfellows, but in the past decade they have become partners. In a recent issue of Cognition, the Harvard University psychologist Joshua Greene and colleagues extend this trend. Their experiment utilizes conventional behavioral methods, but it was designed to test a hypothesis stemming from previous fMRI investigations into the neural bases of moral judgments (see here and here).

In their study Greene et al. give subjects difficult moral dilemmas in which one alternative leads to better consequences (such as more lives saved) but also violates an intuitive moral restriction (it requires a person to directly or intentionally cause harm to someone else). For example, in the “crying baby” dilemma subjects must judge whether it is wrong to smother their own baby in order to save a large group of people that includes the baby. In this scenario, which was also used by the television show M.A.S.H., enemy soldiers will hear the baby cry unless it is smothered. Sixty percent of people choose to smother the baby in order to save more lives. A judgment that it is appropriate to save the most lives, even if it requires you to suffocate a child, is labeled “utilitarian” by Greene et al., whereas a judgment that it is not appropriate is called “deontological.” These names pay homage to traditional moral philosophies.

Emotion vs. Rationality

Based on previous fMRI studies, Greene proposes a dual-process model of moral judgments. This model makes two central claims. First, when subjects form deontological judgments, emotional processes are said to override controlled cognitive processes. In other words, the subjects who are unwilling to smother the baby are being swayed by their emotions: they can’t bear the idea of hurting a helpless child. This claim has been supported by a flurry of recent behavioral and neural studies. Second, the dual-process model claims that controlled cognitive processes cause utilitarian moral judgments. The new Cognition study puts that second claim to the test.

Neuroimaging reveals only correlations; it cannot determine whether a certain brain area is causing a particular judgment. But intervening in a process can provide evidence of causation. In the Cognition study, Greene et al. attempted to interfere with moral reasoning by increasing the cognitive load on subjects. They had subjects perform the moral judgment task at the same time as a monitoring task, in which subjects viewed a stream of numerals and responded to occurrences of “5.” If this added cognitive load interferes with the controlled cognitive processes that cause utilitarian judgments, the researchers surmised, then subjects should make fewer utilitarian judgments and should form these judgments more slowly. (For more on factors that influence judgment speed, see here.)
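
To make the load manipulation concrete, here is a minimal sketch, in Python, of the kind of digit-monitoring stream described above. This is not the authors' actual stimulus code, and every parameter (stream length, target rate, the response set) is an invented placeholder for illustration.

    import random

    def digit_stream(length=60, target="5", target_rate=0.2, seed=0):
        """Generate a stream of single digits in which the target digit
        appears with roughly the given probability (illustrative values,
        not taken from the study)."""
        rng = random.Random(seed)
        stream = []
        for _ in range(length):
            if rng.random() < target_rate:
                stream.append(target)
            else:
                stream.append(rng.choice([d for d in "0123456789" if d != target]))
        return stream

    def score_monitoring(stream, responses, target="5"):
        """Count hits and false alarms, where a response at position i means
        the subject pressed the key while digit i was on screen."""
        hits = sum(1 for i, d in enumerate(stream) if d == target and i in responses)
        false_alarms = sum(1 for i in responses if stream[i] != target)
        return hits, false_alarms

    stream = digit_stream()
    responses = {i for i, d in enumerate(stream) if d == "5"}  # hypothetical perfect responder
    print(score_monitoring(stream, responses))

The point of the secondary task is simply that it occupies controlled attention continuously while the subject reads and judges the dilemma.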

As hypothesized, added cognitive load led to longer reaction times for utilitarian judgments, but the researchers found no effect on reaction times for deontological judgments. Although it took subjects longer to approve of acts like smothering the baby when they were also looking for the number 5, it did not take them any longer to judge such acts inappropriate. This differential effect suggests that some of the cognitive processes involved in the monitoring task are also needed for the processes that lead to utilitarian judgments but not for those that lead to deontological judgments.
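
For readers who want the "differential effect" spelled out, here is a hedged sketch in Python of the comparison being described: mean reaction times broken down by judgment type and load condition, where the load penalty appears only for utilitarian judgments. The numbers are entirely made up to illustrate the pattern; they are not the study's data.

    from statistics import mean

    # Hypothetical reaction times in seconds, invented only to mimic the
    # reported pattern: load slows utilitarian but not deontological judgments.
    rts = {
        ("utilitarian", "no_load"):   [5.8, 6.1, 5.9, 6.3],
        ("utilitarian", "load"):      [6.9, 7.2, 6.8, 7.4],
        ("deontological", "no_load"): [5.2, 5.5, 5.1, 5.4],
        ("deontological", "load"):    [5.3, 5.4, 5.2, 5.5],
    }

    for judgment in ("utilitarian", "deontological"):
        slowdown = mean(rts[(judgment, "load")]) - mean(rts[(judgment, "no_load")])
        print(f"{judgment}: load slowdown = {slowdown:.2f} s")

The prediction at issue is exactly this interaction: a load-induced slowdown for one judgment type and not the other, rather than a uniform slowing of all responses.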

The cognitive load did not, however, decrease the proportion of utilitarian judgments, as the dual-process model predicts it should. People were just as likely to approve of smothering the baby, even if it took them a little longer to make that judgment. This is puzzling and suggests that the two processes do not compete. Greene et al. try to explain away this counterevidence by speculating that subjects “were determined to push through” the cognitive load, but this story makes sense only if subjects knew in advance that they wanted to reach a utilitarian judgment.

Read the rest of the article.
