
On the inadequacy of a crude utilitarian morality

It is not especially uncommon to hear Humanists espouse a crude form of Utilitarianism: “Maximize happiness; minimize suffering.” On the surface this seems sensible. We like happiness, in both ourselves and others, and we dislike suffering just as broadly.

There are, however, a number of problems in implementation that render the idea virtually useless. The most important of these are quantification and aggregation.

It is an open question whether emotional states like “happiness” and “suffering” are quantifiable, and consequently whether “utility” is. Certainly no current scientific technique can unequivocally quantify these things. We can scan brains and have people fill out surveys, but at the end of the day, none of that is conclusive. People lie on surveys, and there are large differences in individual brain structure.

At most we could hope to discover (as, in fact, we have) some crude insights about human nature: what’s likely to make us happy, what’s likely to make us suffer. But as valuable as these insights might be, they don’t begin to make individual utility quantifiable, much less comparable.

Comparability is a problem specifically for aggregation. How do we weigh one person’s happiness against another person’s suffering? If we can’t do that, how can we aggregate individual utilities when making one person happier requires that another suffer? And can we rely on that never being the case?

Which brings us to a final problem: are these goals even compatible? Can we maximize happiness while simultaneously minimizing suffering, or do we have to accept some exchange rate between happiness and suffering to make the problem tractable? If so, what exchange rate should we accept?
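To make the aggregation problem concrete, here is a minimal sketch in which every utility number, interpersonal weight, and exchange rate is invented purely for illustration. The point is not that happiness can actually be measured this way (the argument above is that it can’t); it’s that once you write the aggregation down, the “right” policy depends entirely on parameters nobody can justify.

```python
# A toy aggregation, NOT a claim that utility is actually measurable.
# All numbers, weights, and exchange rates below are invented.

def aggregate(policy, weights, exchange_rate):
    """Sum each person's (happiness - exchange_rate * suffering),
    scaled by an arbitrary interpersonal weight."""
    return sum(
        w * (happiness - exchange_rate * suffering)
        for w, (happiness, suffering) in zip(weights, policy)
    )

# Each policy lists (happiness gained, suffering imposed) per person.
policy_a = [(6, 0), (0, 2)]   # person 1 gains a lot, person 2 suffers
policy_b = [(2, 0), (2, 0)]   # modest gains for both, no suffering

weights = [1, 1]              # "equal" weighting, itself an arbitrary choice

for rate in (0.5, 2.0):       # two equally arbitrary exchange rates
    a = aggregate(policy_a, weights, rate)
    b = aggregate(policy_b, weights, rate)
    print(f"exchange rate {rate}: A={a}, B={b} -> choose {'A' if a > b else 'B'}")

# exchange rate 0.5: A=5.0, B=4.0 -> choose A
# exchange rate 2.0: A=2.0, B=4.0 -> choose B
```

Change the exchange rate, or the interpersonal weights, and the verdict flips, and nothing in the theory tells us which setting is the correct one.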

Or is this whole idea unworkable?

Hume’s Guillotine

David Hume (1711-1776) is a colossus among philosophers, and one of his more important ideas is called Hume’s Guillotine. This is the assertion that a moral claim can’t be derived from a factual claim alone. In other words, you can’t go from an “is” statement to an “ought” statement.

It turns out the reverse is also true: you can’t derive an “is” from an “ought” alone, which means that these two types of claim are entirely separate.

This matters because we have a good way of evaluating the truth of factual claims. Broadly, this method is called science, but because it’s a way of evaluating factual claims, it has nothing to say about purely moral claims.

Now, that’s not to say that moral decisions can’t be informed by science, where those decisions have a factual underpinning, but science has no dominion over the moral. Moral claims are arbitrary.

Take one of the classic moral dilemmas: an actor can save two people by killing one. Is the actor justified in killing the one? Science is silent on that question. It could in principle confirm the facts: that killing the one would save the two, and that inaction would result in the death of the two.

But it can’t tell us which action is right.

Mr. Spock can. The good of the many outweighs the good of the few, right? So the actor should kill the one to save the two.

But wait. If the actor kills the one, the actor is a murderer. If you think murder is bad, then you can’t agree with Mr. Spock.
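For what it’s worth, both positions can be written as the same toy arithmetic, with an invented penalty term standing in for “murder is bad.” The numbers are made up; the point is that the facts fix the lives saved and taken, while the penalty is exactly the kind of moral premise the facts can’t supply.

```python
# A toy moral calculus, not a real one; the murder_penalty is invented.

def crude_utility(lives_saved, lives_taken, murder_penalty=0):
    return lives_saved - lives_taken - murder_penalty * lives_taken

do_nothing = crude_utility(lives_saved=0, lives_taken=0)            # 0

# Mr. Spock: only the body count matters.
spock = crude_utility(lives_saved=2, lives_taken=1)                 # 2 - 1 = 1
print(spock > do_nothing)                                           # True: kill the one

# Weight the act of murder itself; any penalty greater than 1 flips the verdict.
deontologist = crude_utility(lives_saved=2, lives_taken=1, murder_penalty=2)  # -1
print(deontologist > do_nothing)                                    # False: don't
```

Same facts, different “ought,” depending on a number no experiment can determine.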

Hm.

It’s almost like moral dilemmas are hard and there’s no objective way to resolve them.