One of the first things I wanted to use this blog for when I started it up was to talk about morality within video games. Firstly, because the study of ethics and morality was my area of expertise, so it was the one place where I felt I had the best chance of writing something genuinely insightful. Secondly, because morality in video games tends to fall into one of a few very basic mechanical camps that leads to a radical oversimplification of what being “moral/immoral” is.
Examining morality in games is tough, because there are a lot of moving parts. The most obvious is just “what is right and wrong?” Given that this has been a topic of debate for thousands of years, it’s the one that we most often just leave aside and hope we hit upon a good enough answer.
After that is how to make that conception of right and wrong work within the game. On the one hand, you want the player to be able to intuit what actions are supposed to be good and bad, so that they can make choices about how they want to play their character. On the other hand, making that clear distinction renders “bad” choices just stupidly cruel and evil to the point of being cartoonish. Likewise, players often ignore systems that lack consequences, and so you need to reward and punish players for making certain choices, which means either compelling them to be good, or compelling them to avoid neutrality.
But another moving part is the players themselves. Because a core idea behind exploring morality in video games is to get players to think about right and wrong through the experience of playing a video game. Sure, you’re just controlling a character, but also you are controlling the character. The character is doing things at your will. Don’t you bear some responsibility for the story’s progression? After all, the game could not continue without your willful input.
This last bit about the relationship between player and game relies upon players engaging with their games in…let’s call it good faith. That is, players accept the nature of the stories they are told and think critically about them, but also do not attempt to weasel out of tough ideas. Since so much of morality relies on a set of facts leading up to a choice, ethics in video games relies a lot on narrative. And so how we engage with narrative is important.
But one facet we tend not to think about is that we as human beings are stellar at rationalization. To rationalize something is to reverse-engineer an excuse that explains why we were right to do something we already did. It is a mental process of jumping through hoops that we create so that we can justify whatever it is we already wished to justify. Rationalization occurs in all sorts of situations, including mundane ones. Did your parents scold you for not doing the dishes like you promised? Maybe you try to defend yourself by claiming that you were busy doing homework at the time. Sure, that might have been true, but you could have done the dishes beforehand, or taken a brief break. You just didn’t. You have constructed a line of reasoning in your mind that puts you in the right.
And since we’re so good at rationalization, using games to explore morality becomes tricky when it forces the player to do bad things. The purpose of putting the player into these situations is to make them uncomfortable – to get the player to ask what they’re doing and why. But the actual effect can be to simply give the player an opportunity to rationalize what’s in front of them. I am the player, and I must be right, so I am doing the right thing. There must be some way to show that logically, and I will find that way.
So I wanted to explore the concept of rationalization through a particular example that I’ve talked about before: the ending sequence of The Last of Us. It is a pretty good example of what I’m talking about: a game attempting to get players to think critically about the character they’ve been playing and what they’ve been doing this whole time. And yet, it is by the same token a good example of a significant portion of the player base rejecting that critical thinking entirely in favor of pure rationalization.
Unsurprisingly, I’m going to need to dig into the details of the ending, so spoilers abound. Consider yourself dutifully warned.
Intent vs. Interpretation
TLOU follows the story of Joel and Ellie as they travel across a post-apocalyptic United States. Society has mostly collapsed due to a zombie outbreak (yeah, they’re not actually zombies in the technical sense but controlled by fungus spores…but they’re zombies). Ellie is immune to infection and thus could be the key to manufacturing a vaccine to protect people. So it’s up to Joel to escort Ellie to a laboratory where researchers can use her to make that vaccine.
Along the way Joel – a man made bitter by the death of his daughter – begins to connect with Ellie, and Ellie does the same. Over the course of the game you see them take on a new father/daughter relationship. And that relationship is important once we get to the ending.
After a series of events, Joel and Ellie arrive at the lab. Joel’s escort mission was merely a job, and so he is sent off on his way, but he obviously doesn’t want to abandon Ellie. And as he is sent away he is told that the only way to effectively study Ellie’s immunity and thus manufacture a vaccine is by studying her brain – which means killing her. Joel, of course, refuses that outcome. It doesn’t matter if he might doom humanity, he won’t let Ellie die for it.
And so the final sequence of the game has you controlling Joel as he runs through a hospital, killing people who are – from the narrative we are provided – the good guys. These are the people who have been secretly fighting an authoritarian military police organization that has focused on shooting people rather than solving problems, and who are actively searching for a way to fight back against the zombie outbreak.
And you kill them all. You shoot a doctor armed with nothing but a scalpel because he’s in the way. And so Joel rescues Ellie, and then proceeds to lie to her. Joel tells her that actually there are plenty of other immune people, and she wasn’t needed. And when she reveals that being immune – and possibly being able to provide a vaccine – made her feel special, she asks Joel whether he told her the truth, and he lies to her face again.
Joel’s a bad guy. He’s a sympathetic character because of his attachment to Ellie. He’s had to do plenty of bad things to get by in this new world, and we can understand why he did those things. But he’s not a hero. The story is quite clear on that idea.
And yet, once the game is done, a lot of the reception to the ending has been to try and justify Joel’s actions. Plenty of fans of the game have argued that the group Joel kills is too incompetent to actually manufacture this vaccine. There’s no way they could do it with the salvaged parts left over from the pre-apocalypse. All that would have happened is that Ellie would have been needlessly killed for some silly utopian ideal that would never come to fruition. So really, Joel had to kill them all. It was the only morally correct choice.
This interpretation makes sense in a way. We play as Joel throughout the whole game, and we’re used to taking on the main character’s perspective. We want to see Joel as the hero of the story, and the hero has to do good things – and when he does bad things, it’s for a good reason. And so since we take on Joel’s perspective, and he feels a close connection to Ellie and wants to save her, we too must also have that close connection and want to save her. After all, we as the player have also gone through this whole emotional journey with Ellie through Joel. We can’t just let all of that go to waste.
However, this interpretation (and similar ones) basically short-circuits the actual problem facing the player. Despite the writers literally saying in interviews that Joel is indeed sacrificing mankind (including details in the sequel that further confirm that fact), players tended to try to find ways to get around this idea. “Sure, but really it wouldn’t have been possible because they had such bad equipment, or because producing a vaccine on that scale is too hard, or because that’s not how a vaccine would work anyway, or because they probably would have just used it to take power, or because they never told Ellie that she would die. So actually, me killing all of them makes me the good guy.”
The problem is that all of this is rationalization. It is coming up with excuses to refuse the basic problem and justify actions that we already want to justify. We don’t want Joel to all but doom humanity, because we’ve learned to like him over the game. We don’t want to be responsible for all but dooming humanity by directing Joel’s actions. We need some kind of “out,” some excuse that makes it all okay. That means that Ellie can be saved and we can all feel closure.
Of course, no part of the game’s story is designed around that closure. In fact, every bit of it leans toward the idea that hard decisions need to be made, and those hard decisions can and may well require being the bad guy. We can sympathize with those decisions and admit that we might well make the same choice in Joel’s shoes. But we’re not meant to fool ourselves into thinking we’re making the morally correct choice in that scenario. We’re not. The game and story want us to sit with that.
Any of these rationalizations completely undercut the thrust of that story. They provide the player with a sense of satisfaction, but at the expense of robbing the final scene of all its gravity. It’s no longer a complex moral choice with serious tradeoffs, but an easy calculation: these guys are incompetent and maybe evil, so killing them is okay. They become no better than any of the other raiders that you shoot throughout the game. What started as a scene about morality turns into another run-of-the-mill shooting gallery.
Building around Rationalization
Now, in saying all of this, the easy conclusion to draw is that players are often idiots who don’t pay attention and don’t want to do the hard work of thinking critically about a game. Sometimes that’s very true. But it is not what we should really take away from this story, or similar stories.
Instead, the key takeaway is that whenever we wish to include morality systems or moral themes into a game, we need to be aware of this facet of human psychology. If we want players to engage with tough moral questions, we need to keep in mind that our brains are often wired to avoid those questions.
Unfortunately, one of the most effective ways to prevent rationalization is to cut off potential avenues of rationalization. Ambiguity is wonderful for giving people room to ponder and discuss possibilities, but it also gives people room to fill in the details as they wish. Could a vaccine have been engineered? Could it have been produced at a significant scale? Were these people actually interested in helping people? The more you leave those questions up in the air, the easier it is for people to arrive at answers that justify the end result they want. If I want Joel to be right, then the answer to all of those questions must obviously be “no.” And the ambiguity surrounding them means I can answer them that way. No tough thinking required.
It may seem like this means never exploring any morally grey areas, but actually what I am saying is that it is necessary to properly define what the grey area is. If we want players to contend with issues about “the lives of the few vs. the lives of the many,” we need to make sure that the situation is set up to be about that issue.
This is where probability rears its ugly head. Sometimes these situations of sacrificing a few people to save humanity rest on a notion of chance – sacrificing these people might prevent the destruction of humanity…but it might not. Maybe everything would work out just fine regardless. And once you introduce “maybe” into the equation, things start going off the rails. If we don’t get resolution on that choice, then we just make up what we want. “Yeah, I didn’t sacrifice those people. It was only a possibility that humanity would be doomed, and I didn’t want to kill those people, so I didn’t. I bet it’s just fine.” The dilemma is robbed of all of its weight because the player has found a way around engaging with it.
That all said, dead certainty can cut the problem in the other direction. Is it worth sacrificing one person to save all of humanity if you know that the sacrifice will work? At that point the question becomes almost absurd. “Of course,” most people would say, “what kind of monster would insist that everyone needs to die just to preserve that one person?” At that point, the dilemma is still robbed of its weight.
And so there is a delicate balance to be struck. And no solid principles can be derived from this problem. Instead, all that can be done is to take each moral dilemma we wish to put into a game and ask an important question: “what do I want the audience to grapple with, and how might they try to wriggle out of it?”
Understanding how people rationalize situations and how to properly cut off those avenues is an important skill. The easier you make it for the player to pretend there isn’t a serious moral issue at hand, the more they will gravitate toward the simple solution. If you want to leave players with a complex question, you need to make sure that the question is properly defined for them. The more room they get to redefine the question – especially when they have an incentive to redefine it (as in the case with justifying Joel) – the more likely they are to take that opportunity.
Concluding Remarks
I love when games try to get players to struggle with tough problems. While I may not be the biggest fan of The Last of Us and its approach to melding (or in some senses, not melding) gameplay and storytelling, I do think its story is still well-done and well-told. The ending sequence is incredibly powerful, and doesn’t try to wrap things up in a neat little bow. It is fine with making the player uncomfortable and leaving them there.
But it also serves as an incredibly valuable case study in how not to engage players with tough moral questions. While the game poses a really gripping challenge to the player, it provides the player with so many opportunities to ignore that challenge and stick with easy solutions that it basically demands backfill – you need to go back and clarify things that players misunderstood. It should not be necessary to point to interviews with the creators or details from a sequel to reach the conclusions I’ve been pointing to. All of this ambiguity should be dealt with within the game itself. And if we choose not to, then we have to accept the fact that the audience is going to spend more of its time trying to rationalize the problem away than it is dealing with the actual question it’s been left with.
Reading this reminded me of Prey (2017). I don’t want to specify exactly why, because if you (or anyone else reading this) haven’t played it, then it’ll totally undermine some of the more interesting aspects of the game. I guess this is going to be an “if you know, you know” kind of situation.
I don’t know why this posted anonymously, but I wanted to earmark that it was written by me.
Have a like on both just to be safe.
And I think I know what you’re referring to, and if so, then yeah it would also be a prime contender for the same problem. And I loved that game and thought it was really interesting. But it’s hard to get people in general – though not everyone – to wrestle with stuff unless you really force them to.