Words: 2229 Approximate Reading Time: 15-20 minutes
In trying to think about morality in video games, we often approach the problem from a very vague starting point. The problem is that the fundamental questions of ethics are difficult to dig into without investigating the underlying systems for understanding what makes things ethical or unethical in the first place.
When I speak of these systems, I refer to the qualities or characteristics of a particular action that determine whether it is right or wrong. It might seem like this is an easy problem to solve: if an action does something good, then it’s a right action, and if it does something bad, it’s a wrong action. But “good” and “bad” are themselves the very subject of the problem.
There are, broadly speaking, three basic ideas for what qualities we should focus on in deciding what makes actions right or wrong. I’ve mentioned them before, but to reiterate, they are consequentialism, deontology, and virtue ethics.
Why do I bring these up? Because when we start out trying to investigate basic problems of ethics – such as coming up with themes for a narrative – we begin from the standpoint of what we generally believe to be right or wrong, but with little grounding for why we believe those things. We might say that it’s wrong to hurt someone else, but it’s important to explain – not just to others but to ourselves – why it’s wrong. Because that “why” will determine the gray area for when it might be morally justified to hurt someone else. And stories often try to introduce a tension for the reader or viewer or player by making them think about those justifications and where they wish to draw the lines.
Investigating moral choices through the lens of these moral philosophies is important for a few reasons. Firstly, it allows us to carry out our investigation in a more nuanced form. When we present a broad moral choice for a player, that choice can be justified in any number of different ways. The choice doesn’t tell us a whole lot about the player – and more importantly, makes it difficult for the player to understand themselves any better. But by specifying the choice a bit more to align with a particular philosophy, the story can help draw out a distinction that the player can then make more substantive use of.
Secondly, it forces us to think more carefully about the choices we offer and the themes we present. It’s easy enough to say “this is bad, because it’s hurting someone.” But that idea doesn’t take us very far. And when a character’s behavior is presented as clearly good or bad, we don’t really think about the choices offered to us or the themes presented, which means the player becomes rather…braindead.
Thirdly, it helps us to select more interesting examples. Sometimes it can feel necessary to present moral choices in broad forms that can be appealing to as many people as possible. I use “appeal” here to mean that people can agree with or tolerate a character’s actions for a wide variety of reasons, not necessarily that they will like those actions. But the broad appeal means that we have to select a fairly narrow set of examples that have that appeal. However, when we start digging into specific philosophies, we can explore behaviors that are more specific. Not only does that help open up more narrative possibilities, but as already mentioned that specificity doesn’t actually hurt the investigation itself: even those who disagree with the philosophy can still get something from that disagreement.
With those in place, I want to use the next few essays to examine these three philosophies in more detail and provide some ideas for how they can be incorporated into video game choices and narratives. Rather than using broad concepts of “right and wrong,” we can use more specific concepts that make our moral themes much richer.
So to begin, I want to look at the philosophy of consequentialism. My aim here will be to provide some explanation of what consequentialism is and how it works, so that we can better understand how it could be used to frame choices and narrative elements in video games.
Looking at Consequences
So let’s start with understanding what consequentialism itself is.
Most people are probably vaguely familiar with the basic concept. The idea is that you judge actions based – obviously – on the consequences that come out of those actions. We don’t really look at the reasoning behind those actions, except insofar as the reasoning might be useful in creating further good consequences.
Now what is key is that “good consequences” is itself a vague idea. We’re trying to maximize something, create as much of it as we can. But what we’re trying to maximize is important. That’s the key ingredient that goes into consequentialism.
So for example, you might have heard of utilitarianism. It’s probably one of the most famous moral philosophies, and the easiest to grasp. Utilitarianism is a consequentialist moral theory that says that actions are good insofar as they promote happiness, and bad insofar as they promote unhappiness. In particular, utilitarians look at pleasure and pain: when people (and really, sentient beings in general) feel good, that is pleasure and thus happiness; when people feel bad, that is pain and thus unhappiness. Maximize pleasure, minimize pain.
But that’s not the only thing we can maximize. We could instead make the key quality freedom, or equality, or self-development. Or any number of other things. It is, of course, important that we choose something that is relevant and worth maximizing. And often we will fall back on happiness in some form, which is why utilitarianism is such a central example in explaining consequentialism. But the key intuition to grasp is that we’re focused on the consequences of our actions and on making those consequences measurable in some way.
So take a very common thought experiment: the Trolley Problem. A trolley is barreling down a track, heading towards five people who have been tied down. The trolley is going to run them over. You are next to a switch that will divert the trolley to another track, but doing so will cause it to run over a single person tied to that other track. So you must make a decision between five people dying and one person dying.
From a standard utilitarian calculus, you ought to pull the switch. With no additional information, knowing only that the choice is between one life and five, it is better to prevent five deaths than to prevent a single death. You are maximizing happiness and minimizing unhappiness in that case.
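As a toy sketch, that utilitarian calculus can be reduced to a simple comparison of scored outcomes. The big assumption here (and it is a big one, as the rest of this essay explores) is that each outcome can be summarized by a single number – in this case, net lives as a crude stand-in for net happiness:

```python
# Toy sketch of the utilitarian calculus above. The utility function and
# its inputs are illustrative assumptions, not a real hedonic calculus.

def utility(lives_saved: int, lives_lost: int) -> int:
    """Score an outcome by net lives, a crude stand-in for net happiness."""
    return lives_saved - lives_lost

# Trolley problem: pull the switch (save 5, lose 1) vs. do nothing (lose 5).
pull = utility(lives_saved=5, lives_lost=1)        # 4
do_nothing = utility(lives_saved=0, lives_lost=5)  # -5

best_action = "pull" if pull > do_nothing else "do_nothing"
print(best_action)  # pull
```

The interesting design questions begin exactly where this sketch ends: what goes into the scoring function, and whether a single number can capture it at all.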
We can start to add on to this thought experiment. We could propose different scenarios in which the people saved go on to do heroic or diabolical things themselves, causing further pleasure and pain. In all of these different scenarios, the fundamental question hasn’t really changed. We are still focused on maximizing pleasure and minimizing pain. We’re just trying to figure out which action actually accomplishes that.
But in all of this discussion, we don’t really care about why the person pulling the switch pulls it (or doesn’t). If pulling the switch is right, it’s right in all cases, regardless of whether the person does it for a good reason or a bad reason. We just look at the consequences and move from there.
Your Actions Have Consequences!
So what would be some ways that we could incorporate consequentialism into video game choices and use those choices to examine moral problems specifically from that consequentialist viewpoint?
One thing we’d want to start out with is trying to key the player in to the idea of thinking about things from this consequentialist standpoint. Should we be judging things based on their consequences? Does it not matter why a person acts, as long as they do the right thing?
So for example, we can imagine giving the player a quest – or even multiple quests, depending on how things are done – that confronts them with the issue of someone doing something good, but for a bad reason. And we give the player the opportunity to judge that character in some way. Maybe the player is literally asked to pass judgment on them, though this might be rather heavy-handed. Perhaps a better approach would be to allow the player ways to indirectly judge the character.
So perhaps we are asked to overthrow a ruler, who makes good laws, but does so because that will better secure his power. In other words, he does the right thing, but for a bad reason. Should this ruler be overthrown and replaced by someone who might act out of a better will towards the people? In particular, we might wonder about the ability of the more benevolent ruler to make good laws, to actually translate that goodwill into good policy. Is it more important to have good laws, or a kind ruler? Is it more important to have good outcomes, or good reasons for doing things?
Or we could use these scenarios to get players to think about what should be maximized. Is it enough to focus on pleasure and pain, which is what people will probably start out with intuitively? What if that requires sacrificing something important, like free will? Perhaps a quest could confront the player with a character who is being asked to hand their freedom entirely over to someone else, and who will be content in doing so because they’ll no longer have to worry about their own decisions. What do you counsel the character to do? Is it more important that they maximize their pleasure, or should they hold on to their freedom? Whichever way the player answers, the game can then confront them on why these things matter. Why should we value pleasure, or freedom, or whatever else we are maximizing? Our goal, ultimately, is to use these choices to get players to think about what they really value at the end of the day.
But even when a player fundamentally disagrees with the idea of consequentialism, we can still use this more specific language to help them think about their own moral theories. The same devices that get players to consider moral situations from the perspective of judging consequences versus judging something else can serve this purpose as well. So as another example, imagine that there is a basic rule laid down about how people ought to act. Maybe characters in this fictional setting are expected to abstain from stealing in all situations. We might then have a character who is starving and cannot afford food, and so their only option is to steal it. But doing so, of course, requires breaking that fundamental rule, and thus they don’t want to do it. So how do we counsel this person? Do we hold the rule to be most important even in the face of a life, or do we break the rule to bring about the consequence of a person not starving? These situations can be used either to criticize consequentialism or to promote it, merely by how the decisions are framed. So we can adopt multiple approaches to the same basic issue using the same toolkit.
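One hypothetical way a game might implement this is to tag each dialogue option with the moral framework it reflects, then track what the player tends to value across quests. Everything here – the class, the framework labels, the quest choices – is an illustrative sketch, not a description of any real game’s systems:

```python
# Hypothetical sketch: tag choices by moral framework and track the
# player's leanings. Labels and choices are illustrative assumptions.

from collections import Counter

class MoralProfile:
    def __init__(self) -> None:
        self.leanings: Counter = Counter()

    def record(self, framework: str) -> None:
        """Log that the player picked an option tagged with this framework."""
        self.leanings[framework] += 1

    def dominant(self) -> str:
        """Return the framework the player has favored most so far."""
        return self.leanings.most_common(1)[0][0]

profile = MoralProfile()

# Sample choices from the scenarios above, tagged by what they prioritize.
profile.record("consequentialist")  # counsel stealing: outcomes over rules
profile.record("consequentialist")  # keep the ruler: good laws over good will
profile.record("deontological")     # uphold the no-stealing rule regardless

print(profile.dominant())  # consequentialist
```

A system like this could quietly shape later content – which characters trust the player, which dilemmas get surfaced – without ever labeling choices as “good” or “bad” on screen.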
The ultimate value of these narrower examples is that we can create moral choices for players that are geared less towards generalized concepts lacking any fundamental basis and more towards specific, well-grounded ones. While it may seem more difficult, there is a great deal of value in digging into the more specific concepts. We could use a game’s narrative and choices to investigate a very particular philosophy – we might point to BioShock and its attempt to engage in a pointed criticism of Objectivism as an example – but we don’t really need to go that far. We can still carry out this process from a more general perspective.
Given that moral philosophy is a complicated subject, it might seem like too much to learn, and easier to throw our hands up and just tackle moral questions however we see fit. But in doing so, we severely limit ourselves. The language we use and the concepts we identify will be much less refined, and our ideas will rest on a shakier foundation.
And when it comes to translating all of that language and those concepts and those ideas into a video game, we are limited by that same attitude. It might seem easier to investigate moral topics if we just stick to broad ideas, but we are then competing with everyone else who has taken that same approach. Which means we are fighting for a somewhat limited space for moral investigation. It is only by digging deeper that we really open up new opportunities.
I have explored the idea of consequentialism here with some basic ideas for how it could be implemented in storytelling to make for richer ideas. In the following two essays, I’ll apply this same process to two more philosophical approaches: deontology and virtue ethics.
 For those not in the know, this isn’t really what the Trolley Problem is about. It’s actually supposed to examine issues of moral obligations and the distinction – or lack thereof – between action and inaction. But we’ll ignore that component of the problem. We’ve got enough on our plate.