Consequentialist moral theories, like utilitarianism, are quick and easy to use, like using fingers in the air to gauge the direction of the breeze, but they are not without fault. In this post, I hope to demonstrate the issues with consequentialist moral theories.
What is Consequentialism?
Consequentialism, as its name suggests, is simply the view that normative properties depend only on consequences. (https://plato.stanford.edu/entries/consequentialism/)
Simply put, the consequences of an action decide if it is moral (good), immoral (bad), or amoral (neither good nor bad).
This can be applied through a number of normative ethical theories, each with a particular ‘goal’. From there we can derive statements about what best serves that goal, or compare the consequences of actions to see how well they serve it.
This article could likely be applied to any consequentialist moral theory but for the purpose of the conversation, we shall address utilitarianism (specifically act utilitarianism).
What is Utilitarianism?
Classic utilitarians held hedonistic act consequentialism. Act consequentialism is the claim that an act is morally right if and only if that act maximizes the good, that is, if and only if the total amount of good for all minus the total amount of bad for all is greater than this net amount for any incompatible act available to the agent on that occasion. (https://plato.stanford.edu/entries/consequentialism/#ClaUti)
Utilitarianism is a consequentialist moral theory with the purpose of maximising pleasure and wellbeing and reducing pain and suffering.
The simplest way to explain it: if I have 5 sweets and 5 people who like sweets, I maximise pleasure by giving one sweet to each person. If I gave all 5 sweets to one person they might be much happier, but I would cause sadness and a feeling of rejection in the 4 I excluded. When explained this way it’s really easy to see how this moral theory works to bring about a good outcome, so what are the problems?
The Problems with Consequentialism
There are a number of issues with consequentialist moral theories, and we shall examine a few of them.
- Intent
- Layers of Consequences
- Knowledge of the situation
- Measuring happiness (or the particular goal)
Intent
Intent, doing something with purpose, is generally ignored by consequentialist moral theories. What you intend to do does not matter; only the results of your actions do. Therefore, you could intend to do something bad and yet do something moral, or intend to do something good and do something immoral.
Intent to do Good Gone Wrong
Let’s say someone is starving and you give them your sandwich. They have an allergic reaction and die. Neither of you was aware of the allergy, and your intent was pure; however, the consequence was bad. If only the consequences of one’s actions matter, then your intention is irrelevant. You committed an immoral act.
Intent to do Bad gone… er.. Right?
Let’s say someone flew into a rage and was going to shoot his wife. He pulled out his gun, fired at her, and missed. The bullet travelled through a shop window and incapacitated a robber, saving the lives of 5 people inside. If only the consequences of one’s actions matter, then his intention is irrelevant. He committed a moral act.
Intuition on Intent
There is something that feels off about this, at least for me, and I think many folks would add a caveat that intent is important. They might combine intent with consequences to judge morality, but that isn’t exactly consequentialism, though I do think it is a necessary consideration.
Layers of Consequences
Most consequentialist moral theories tend to look only at first-order consequences, or at least are generally applied that way by those who use them.
Classic utilitarianism seems to require that agents calculate all consequences of each act for every person for all time. That’s impossible. (https://plato.stanford.edu/entries/consequentialism/#WhiConActVsExpCon)
Some will argue that you have to take all consequences into consideration; however, this is a near-impossible task. Others might say we should do the best we can with the information we have, which I agree with, but that seems to negate the nature of consequentialism. The point is that we are judging the consequences themselves; whether we had the correct information at the time is irrelevant to that judgement.
Layers of Consequences in the ‘Trolley Problem’
Ah, the classic trolley problem! I do love this simple thought experiment and all the modifications that come into play. I also find it funny how many folks try not to actually give an answer to it.
The trolley problem is a series of thought experiments in ethics and psychology, involving stylized ethical dilemmas of whether to sacrifice one person to save a larger number. Opinions on the ethics of each scenario turn out to be sensitive to details of the story that may seem immaterial to the abstract dilemma. (https://en.wikipedia.org/wiki/Trolley_problem)
First, let us consider the basic trolley problem. There are 5 people tied to one track and one person tied to another. A trolley is hurtling towards the 5 and you have the ability to pull a lever, changing the tracks and moving the trolley’s direction away from the 5 but causing the death of the 1. There is no time to do anything else, and you have no information about these people. Your choices are only:
1. Pull the lever (Thus saving 5 and killing 1)
2. Do nothing (Allowing 5 to die)
Under utilitarianism this is a simple answer: we maximise pleasure and reduce suffering, saving the 5 people at the cost of the 1.
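The first-order calculation can be sketched in a few lines of code. This is a minimal illustration, assuming each life saved counts as +1 utility and each death as -1 (the weights are invented for the example):

```python
# First-order utilitarian calculation for the basic trolley problem.
# Assumption: each life saved contributes +1 utility, each death -1.

def first_order_utility(saved, killed):
    """Net utility counting only the immediate deaths and survivals."""
    return saved - killed

pull_lever = first_order_utility(saved=5, killed=1)  # +4
do_nothing = first_order_utility(saved=1, killed=5)  # -4

best = "pull the lever" if pull_lever > do_nothing else "do nothing"
print(best)  # pull the lever
```

Of course, this only looks as clean as it does because the equation stops at the immediate outcome.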
However, this only works if we are only considering first-order consequences. If we then take anything past that we could have an issue.
So, let’s assume like a good utilitarian we have pulled the lever, saving the 5 people at the cost of the 1. How could this be a bad thing when we judge more than first order consequences?
- The 5 people we saved are all rapists and murderers who go on to rape and kill 5 people each.
- The person we killed was just about to cure cancer and that knowledge is lost forever, potentially killing millions.
- The person we killed had tonnes of people who adored them, causing massive heartache at their death, whereas the 5 had no one who cared about them.
- The 5 we saved were there by choice and wanted to die, whereas the 1 wanted to live.
This is only taking it to the 2nd order of consequences, but consider how this could spiral out of control if we continued. If we took it to the 3rd order, perhaps the people that the 5 raped and murdered were themselves all rapists and murderers, thus actually causing a better effect overall.
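The spiral above can be made concrete with some invented numbers. This sketch assumes -1 utility per death and supposes, purely for illustration, that the 5 survivors go on to kill 25 people between them:

```python
# How adding a second order of consequences can flip the first-order
# verdict. All figures are invented for the example.

def utility(deaths_per_order):
    """Sum utility across successive orders of consequences (-1 per death)."""
    return sum(-d for d in deaths_per_order)

# Pull the lever: 1 dies now; later, the 5 survivors kill 25 people.
pull_first_only = utility([1])       # -1
pull_two_orders = utility([1, 25])   # -26

# Do nothing: 5 die now; no further killings follow.
nothing_first_only = utility([5])      # -5
nothing_two_orders = utility([5, 0])   # -5

print(pull_first_only > nothing_first_only)  # True: pulling wins at order 1
print(pull_two_orders > nothing_two_orders)  # False: the verdict flips at order 2
```

Each additional order we include can flip the verdict again, which is exactly the problem: the calculation never settles.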
Now it could be argued that your voluntary act is then superseded by the rapist’s voluntary act, which is why we only look at first-order consequences, at least with this sort of thing. That is to say, we are only responsible for the direct consequences of our actions, and when an agent makes a voluntary act off the back of your actions, the ‘domino’ starts again. Although it can be argued, had you not done your act they wouldn’t have been able to do theirs.
What about examples where your actions inspire people to voluntarily do negative things? Has the domino stopped there? So, how do we judge when the dominoes reset? What if we do something immoral as the consequence of someone else’s actions? Are they responsible for our action? Are we both responsible?
Knowledge of the Situation
Whilst the consequences are what matter, if we don’t have enough information it can be hard to predict what brings about even the best first-order consequences, let alone subsequent orders of consequence (as demonstrated above).
This is one reason why a consequentialist moral theory might be a good ‘finger in the air’ but there is more to think about than just the consequences.
Measuring Happiness
Measuring happiness or pleasure is fairly difficult in day-to-day life. We can’t exactly scan everyone’s brains and hormone levels to see how much pleasure they got from a situation, so we can’t be sure we are maximising pleasure.
Consider the example at the beginning of the article: I have 5 sweets and 5 people who like sweets. It seems like a simple equation: to maximise everyone’s pleasure, I give them one sweet each.
But what if 4 of the people who like sweets don’t mind whether they get one, and one of them feels a euphoric pleasure from sweets? In that case, maximising pleasure would actually mean giving all 5 sweets to that one person.
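If we pretend for a moment that we could quantify each person’s pleasure, the reversal is easy to show. The pleasure-per-sweet values here are invented assumptions: four people barely care, one is euphoric:

```python
# Sweets example, assuming pleasure could be quantified per person.
# Invented weights: four people get 0.1 pleasure per sweet, one gets 10.

pleasure_per_sweet = [0.1, 0.1, 0.1, 0.1, 10.0]

def total_pleasure(allocation):
    """Total pleasure from giving allocation[i] sweets to person i."""
    return sum(n * p for n, p in zip(allocation, pleasure_per_sweet))

even_split = total_pleasure([1, 1, 1, 1, 1])  # ≈ 10.4
all_to_one = total_pleasure([0, 0, 0, 0, 5])  # 50.0

print(all_to_one > even_split)  # True
```

The catch, as the surrounding text argues, is that in real life we never have access to those numbers in the first place.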
Likewise with the trolley problem: what if the 5 you saved wanted to die and the 1 you killed wanted to live? Again, you haven’t maximised pleasure.
So, not only can we not predict someone’s pleasure, but in day-to-day life we can’t actually measure it after the fact either. The best we can do is guess.
Consequentialism on its own is alright as a ‘finger in the air’ but actually requires a little more thought, and ought to take into account both the intent behind actions and their subsequent consequences. Even then, there is no real way to measure folks’ pleasure in day-to-day situations, so we cannot accurately infer whether we have maximised pleasure.
This can also be applied to other consequentialist moral theories, though with something like preference satisfaction utilitarianism it is first-order preferences we judge by, so that part is simpler; it is still hard to judge what someone else’s preferences are, though, and preferences are prone to change. Welfarism is more economic in nature, though it could be applied in a similar way to wellbeing.
So yes, whilst consequentialists might argue that they DO take intent into account, which I think they ought, and they might caveat the knowledge problem with ‘do the best with what you know’, in doing so they are actually moving away from a purely consequentialist moral theory.
Thus, I feel we have shown the problem with consequentialist moral theories: it is not enough to judge consequences alone, and at times it can be impossible to get an accurate read of said consequences.