Imagine with me for a moment a steep cliff. At the base of the cliff is a campsite. Currently, there are two tents set up. One is inhabited by a nun, the other by a pregnant woman who also brought an infant with her.

That night, a strong wind blows. The tents are shielded by the cliff face, but a boulder at the top is shaken loose and falls over the edge of the cliff.

It lands on the tent inhabited by the pregnant woman, killing her, the infant, and the unborn child.

It's a gruesome scenario, but I have a question for you.

Whose fault is it?

Now, maybe you’d blame the woman who set up the tent, but let’s say she didn’t know any better. In this case, we’d probably call it “an act of God,” meaning that it just sort of happened and there’s not anyone to blame for it.

Let’s change the scenario a little bit.

You’re now standing at the top of that cliff face. You see the boulder start to wiggle in the wind and the scenario I just described flashes through your head.

What do you do?

For the sake of simplicity, I’m going to say that you can’t stop the boulder from rolling down the cliff, nor can you warn the people at the bottom that a boulder is falling. The only thing you can do is push on one side of the boulder or the other so that it falls on the nun’s tent or the pregnant woman’s tent. Choosing to push it either direction will result in everyone in that tent dying.

If you’re the utilitarian type, you’d probably push the boulder so that it falls on the nun. Since “[u]tilitarians believe that the purpose of morality is to make life better by increasing the amount of good things (such as pleasure and happiness) in the world and decreasing the amount of bad things (such as pain and unhappiness),” it would make a lot of sense to push the boulder so that it falls on only one person instead of three.
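To put that calculus in concrete terms, here’s a minimal sketch in Python. The option names and death counts are my own illustration, not part of any real decision system: tally the deaths each option causes and pick whichever option causes fewer.

```python
# A minimal sketch of the utilitarian calculus described above.
# The option names and death counts are invented for illustration.

def utilitarian_choice(options):
    """Return the option that minimizes total deaths."""
    return min(options, key=lambda option: option["deaths"])

boulder_scenario = [
    {"name": "push the boulder onto the nun's tent", "deaths": 1},
    {"name": "push the boulder onto the pregnant woman's tent", "deaths": 3},
]

print(utilitarian_choice(boulder_scenario)["name"])
# -> push the boulder onto the nun's tent
```

Of course, the answer flips the moment you change the counts, which is exactly where the trouble starts.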

I think there are a lot of problems with the utilitarian approach. There’s one in particular I’d like to examine in detail: you can almost always tweak the scenario so that the other choice becomes the correct one.

For instance, I could tell you that the nun was a nun by day and a cancer researcher by night, and that had she survived, she would have cured the disease, extending the lives of millions of people. Then it seems like a small loss to kill just three.

Or, I could tell you that, in addition to what I just told you, the unborn child would end world hunger, bring about a peaceful resolution to the Palestine-Israel conflict, and develop faster-than-light travel. Maybe then you’d decide to kill the nun.

This seems to be a fundamental problem with utilitarianism. You wouldn’t even need changes this dramatic; even subtle changes to the scenario would radically change what the “moral” action would be.

But, in real life, you don’t know what the future holds for these people. Since you have to make decisions blind to what happens in the future, relying on outcome is, well, unreliable. I think, then, that it is important to have a personal moral code that isn’t dependent on outcome.

Now, back to the boulder.

If you alter the boulder from its original course, you’ve murdered someone. You might not feel that way. You might rationalize that you did it to save a life (or three). But had you not altered the boulder from its original course, those people would be alive. The fact that they are not means that your actions directly led to their deaths.

What’s the moral course of action, then?

You can’t consider the future of the people below. You can’t know their worth relative to each other. It seems to me that the moral course of action is to try to stop the boulder, even if you think you won’t be able to, but only in a way that allows it to proceed along its original course if you fail. Which you will, because that’s how I set up the scenario. The only way you avoid being liable is by not altering the course.

If you disagree, my question is this: why do you get to decide who lives and who dies?

Wait, wasn’t this article about self-driving cars?

Yeah, it was. Is.

There’s a Massachusetts Institute of Technology (MIT) website called Moral Machine, where you can take a quick quiz about who should die when a self-driving car has to decide. You’re asked to choose between different groups of pedestrians, as well as scenarios in which sparing the pedestrians will kill the car’s occupants.

But it has the same problems the boulder scenario possessed. If you choose to alter who the car hits, you’re morally liable for their deaths. If the results of this poll were translated to the real world and a self-driving car killed someone, the programmers would be morally responsible.
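To make that concrete, here’s a hypothetical sketch of what “translating the poll to the real world” would mean. The weights and labels are my own invention, not anything from MIT or any actual autonomous-vehicle software: the crowd’s preferences end up as a ranking function a programmer writes, and the choice of who gets hit is made there, in code, long before any crash.

```python
# A hypothetical sketch of crowdsourced preferences hard-coded as a decision
# rule. The weights and labels are invented for illustration; this is not how
# any real autonomous-vehicle software works.

# Made-up survey weights: a lower number means the crowd found that person a
# more "acceptable" victim.
CROWD_PREFERENCE = {
    "jaywalking adult": 1,
    "law-abiding adult": 2,
    "child": 3,
    "occupant": 2,
}

def choose_victims(groups):
    """Pick the group the crowd deemed most acceptable to hit."""
    return min(groups, key=lambda group: sum(CROWD_PREFERENCE[p] for p in group))

# The decision is made here, at programming time, not in the moment of the crash.
print(choose_victims([["child"], ["jaywalking adult", "jaywalking adult"]]))
# -> ['jaywalking adult', 'jaywalking adult']
```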

Maybe you’d argue that by crowdsourcing the problem, as MIT has done, you’re getting a better read on what people find morally acceptable. But that’s all you’ve done.

You haven’t found what is right.

With this in mind, it’s hard to read the Moral Machine experiments as anything other than an exercise in moral depravity.

Thankfully, the moral scenarios they explore, and that I had you explore through the boulder problem, are exceedingly rare in real life. If you’re in a scenario where someone must die, something has already gone wrong in the past. Our efforts as a society are better spent developing technology that will prevent these sorts of A/B scenarios from happening in the first place, instead of building up a tolerance for who gets to live and who must die.

If we start choosing who lives and who dies, it’s no longer an accident. It’s murder.