Forward Thinking: How Should We Punish People for Moral Failures?

One question theists perennially ask atheists is this: if there is no divine Judge meting out punishment, why would anyone behave morally?

Like pretty much all other non-theists, I have answered this question by asserting that moral behavior does not depend on punishment. People, I have argued, act morally because we are intrinsically motivated to do so. I supported this claim with scientific studies showing evidence of moral judgment in even pre-verbal children, as well as sociological evidence from non-theistic societies.

When I first read this challenge question, my inclination was to spin a version of that answer in response. Better to reason with people, I was going to write. Better to appeal to their innate morality. The aim should be to help them become better people.

However, the other day, I discovered that, well, cough, cough… my answer above is not entirely correct. And this discovery happened JUST in time for me to make this confession in public, in answer to this blog post challenge. Aren’t I the lucky girl??

It turns out that, even though we ARE intrinsically motivated to act morally, punishment serves a very important purpose. Without it, in fact, we would very likely still be wandering about in small, genetically related bands, fighting (with rocks and stout branches) whenever we encountered the small, genetically-dissimilar-to-us band in the next valley. (Chimps, our nearest relatives, have a social structure like that.)

Humans, however, have managed to overcome this tendency to band together exclusively with kin, at least well enough to send help-desk jobs from the US to India. How did that happen? One well-regarded scientific model suggests that we did it by creating and harshly enforcing rules, in a process called “altruistic punishment”. Altruistic punishment is exactly what this challenge question asks about: it is punishment meted out not to stop harm, nor out of personal revenge, but simply to enforce rules. Often, the person who metes out the punishment actually pays a price for doing so. Consider, for example, a labor strike: workers band together to punish the unfair practices of their employer, even though they will not be paid for the time they are on strike. A moment’s thought will supply many other examples of this behavior, some ugly (think of Westboro Baptist), some nearly saintly (think of hunger strikes to protest the mistreatment of prisoners).

Here’s the science, beautifully explained in this blog post:

[According to] Ernst Fehr and Simon Gachter in a landmark paper published in 2002… Altruistic punishment, Fehr and Gachter reasoned, might just be the spark that makes groups of unrelated strangers co-operate. To test this they created a co-operation game played by constantly shifting groups of volunteers, who never meet – they played the game from a computer in a private booth. The volunteers played for real money, which they knew they would take away at the end of the experiment. On each round of the game each player received 20 credits, and could choose to contribute up to this amount to a group project. After everyone had chipped in (or not), everybody (regardless of investment) got 40% of the collective pot.

Under the rules of the game, the best collective outcome would be if everyone put in all their credits, and then each player would get back more than they put in. But the best outcome for each individual was to free ride – to keep their original 20 credits, and also get the 40% of what everybody else put in. Of course, if everybody did this then that would be 40% of nothing.

In this scenario what happened looked like a textbook case of the kind of social collapse the free rider problem warns of. On each successive turn of the game, the average amount contributed by players went down and down. Everybody realised that they could get the benefit of the collective pot without the cost of contributing. Even those who started out contributing a large proportion of their credits soon found out that not everybody else was doing the same. And once you see this it’s easy to stop chipping in yourself – nobody wants to be the sucker.

Rage against the machine

A simple addition to the rules reversed this collapse of co-operation, and that was the introduction of altruistic punishment. Fehr and Gachter allowed players to fine other players credits, at a cost to themselves. This is true altruistic punishment because the groups change after each round, and the players are anonymous. There may have been no direct benefit to fining other players, but players fined often and they fined hard – and, as you’d expect, they chose to fine other players who hadn’t chipped in on that round. The effect on cooperation was electric. With altruistic punishment, the average amount each player contributed rose and rose, instead of declining. The fine system allowed cooperation between groups of strangers who wouldn’t meet again, overcoming the challenge of the free rider problem.
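
(If you like to tinker, the dynamics of this game are simple enough to sketch in code. Below is a toy simulation in Python. The 20-credit endowment and the 40%-of-the-pot payout follow the description quoted above; the group size of four, the cost and bite of each fine, and the simple rules by which the simulated players adjust their contributions are illustrative assumptions of mine, not a reconstruction of Fehr and Gachter’s actual design. For simplicity, the same four players also stay together across rounds, unlike the shuffled, anonymous groups in the real experiment.)

```python
import random

# Toy version of the public-goods game described in the quoted passage.
# The 20-credit endowment and the 40%-of-the-pot payout come from that
# description; the group size, fine parameters, and adaptation rules
# below are illustrative assumptions, not the 2002 experimental design.

GROUP_SIZE = 4
ENDOWMENT = 20
RETURN_RATE = 0.4   # each player's share of the collective pot
FINE_COST = 1       # credits a punisher pays to levy one fine (assumed)
FINE_SIZE = 3       # credits the fined player loses per fine (assumed)
ROUNDS = 10

def play(punishment_enabled):
    """Play ROUNDS rounds; return per-round average contributions and totals."""
    # Each player's willingness to contribute, as a fraction of the endowment.
    inclination = [random.uniform(0.2, 0.9) for _ in range(GROUP_SIZE)]
    earnings = [0.0] * GROUP_SIZE
    averages = []
    for _ in range(ROUNDS):
        contributions = [round(ENDOWMENT * p) for p in inclination]
        pot = sum(contributions)
        mean_c = pot / GROUP_SIZE
        # Payoff: keep what you didn't contribute, plus 40% of the pot.
        payoffs = [ENDOWMENT - c + RETURN_RATE * pot for c in contributions]

        if punishment_enabled:
            for i in range(GROUP_SIZE):
                for j in range(GROUP_SIZE):
                    # Fine anyone who gave well below the group average,
                    # even though each fine costs the punisher a credit.
                    if i != j and contributions[j] < 0.8 * mean_c:
                        payoffs[i] -= FINE_COST
                        payoffs[j] -= FINE_SIZE
            # Toy adaptation rule: fined free riders give more next round.
            for j in range(GROUP_SIZE):
                if contributions[j] < 0.8 * mean_c:
                    inclination[j] = min(1.0, inclination[j] + 0.2)
        else:
            # Toy adaptation rule: without sanctions, anyone who gave more
            # than average feels like a sucker and gives less next round.
            for j in range(GROUP_SIZE):
                if contributions[j] > mean_c:
                    inclination[j] = max(0.0, inclination[j] - 0.2)

        earnings = [e + p for e, p in zip(earnings, payoffs)]
        averages.append(round(mean_c, 1))
    return averages, [round(e) for e in earnings]

random.seed(2002)  # make the toy run reproducible
for enabled in (False, True):
    averages, earnings = play(enabled)
    label = "with punishment" if enabled else "no punishment  "
    print(label, "| avg contributions:", averages, "| total earnings:", earnings)
```

With these toy rules the two conditions come apart just as the quote describes: the average contribution decays round after round when no one can be fined, and holds up or rises when free riders can be punished, even though every fine costs the punisher a credit.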

When I read that passage, whole regions of my worldview exploded. (For one thing, I owe a lot more respect to the theists who worry about what happens to morality in the absence of punishment.) A bit of web searching convinced me not only that the experiment was valid, but that it made sense. For instance, it turns out that altruistic punishment

activated the dorsal striatum, which has been implicated in the processing of rewards that accrue as a result of goal-directed actions. Moreover, subjects with stronger activations in the dorsal striatum were willing to incur greater costs in order to punish. Our findings support the hypothesis that people derive satisfaction from punishing norm violations and that the activation in the dorsal striatum reflects the anticipated satisfaction from punishing defectors.

(The fact that there is a neurological underpinning convinces me that this behavior is part of what makes us human.)

So it seems that innate morality, in the absence of punishment, is insufficient to gain us the fruits of non-zero-sum interaction between mere acquaintances. For that, we need rules, and punishments for those who transgress them.

So, then, how SHOULD we punish people for moral failures? Should we, for instance, punish our friends even at the risk of losing their friendship? That would seem to be a form of altruistic punishment, a.k.a. social glue.

For myself, when I contemplate a question like this, I often think in terms of stance. That is, what stance (posture, facial expression, tensed and relaxed muscles, and so on) do I associate with the act of punishing my friends? Do I want to assume that stance?

For me, punishing someone implies looming over him or her, with or without an angry expression, clenched fists, etc. And no, that’s not a stance I want to take with friends… or with anyone, really. I prefer to approach every person with a relaxed, open stance that conveys interest and goodwill.

So I think my approach still has to be to reason with my friends about moral failings. (This assumes that no one is being hurt at the moment of confrontation; even then, my concern would have to be helping the victim rather than punishing the perpetrator.) I find that my preferred actions are no different than they were before I learned this scientific information.

Perhaps I am thereby shirking my duty to enforce pro-social norms. But I suspect that the optimal proportion of punishers in a population is something less than 100%.

About all I can do is continue to think about this new information… oh, and be a bit more sympathetic to people who express a desire to punish others for their transgressions!