It is not uncommon to hear Humanists espouse a crude form of Utilitarianism: “Maximize happiness; minimize suffering.” On the surface this seems sensible. We like happiness, in both ourselves and others, and we dislike suffering just as broadly.
There are, however, a number of problems in implementation that render the idea virtually useless. The most important of these are quantification and aggregation.
It is an open question whether emotional states like “happiness” and “suffering” are quantifiable and consequently whether “utility” is. Certainly no current scientific technique has the ability to unequivocally quantify these things. We can start scanning brains and having people fill out surveys, but at the end of the day, none of that is conclusive. People lie on surveys, and there are large differences in individual brain structure.
At most we could hope to discover (as, in fact, we have) some crude insights about human nature: what’s likely to make us happy, what’s likely to make us suffer. But as valuable as those insights might be, they don’t begin to make individual utility quantifiable, much less comparable.
Comparability is a problem specific to aggregation. How do we weigh one person’s happiness against another person’s suffering? And if we can’t do that, how can we aggregate their individual utilities when making one person happier requires that another suffer? Can we rely on that never being the case?
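The weighing problem can be made concrete with a toy sketch. The numbers and names here are entirely hypothetical; the point is only that whether an action that helps one person at another’s expense counts as a net gain depends on the weights we assign to each person’s utility, and nothing in the theory tells us what those weights should be.

```python
# Toy illustration with made-up numbers: utility changes from some
# proposed action, in arbitrary "units".
delta_alice = +3   # Alice becomes happier
delta_bob = -2     # Bob suffers

def aggregate(weights, deltas):
    """Weighted sum of individual utility changes."""
    return sum(w * d for w, d in zip(weights, deltas))

deltas = [delta_alice, delta_bob]

# Treating both people equally, the action looks like a net improvement:
print(aggregate([1, 1], deltas))   # prints 1  -> "do it"

# Weighting Bob's suffering twice as heavily flips the verdict:
print(aggregate([1, 2], deltas))   # prints -1 -> "don't"
```

The arithmetic is trivial; the choice of weights is the entire problem, and it is exactly the choice we have no principled way to make.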
Which brings us to a final problem: are these goals even compatible? Can we maximize happiness while simultaneously minimizing suffering, or must we accept some exchange rate between happiness and suffering to make the problem tractable? If so, what rate should we accept?
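The exchange-rate question can be sketched the same way. Again the policies and numbers are hypothetical; the sketch only shows that once the two goals pull in different directions, which option “wins” depends entirely on the rate at which we trade a unit of happiness against a unit of suffering.

```python
# Two hypothetical policies, each producing some happiness and some
# suffering (made-up units).
policies = {
    "A": {"happiness": 10, "suffering": 6},
    "B": {"happiness": 4, "suffering": 1},
}

def score(policy, rate):
    """Happiness, minus `rate` units of penalty per unit of suffering."""
    return policy["happiness"] - rate * policy["suffering"]

def best(rate):
    return max(policies, key=lambda name: score(policies[name], rate))

# At an exchange rate of 1, the happiness-rich policy wins:
print(best(1.0))   # prints A  (10 - 6 = 4 beats 4 - 1 = 3)

# At a rate of 2, the low-suffering policy wins instead:
print(best(2.0))   # prints B  (4 - 2 = 2 beats 10 - 12 = -2)
```

Nothing in the slogan “maximize happiness; minimize suffering” picks the rate, yet the rate determines the answer.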
Or is this whole idea unworkable?