My own area of expertise lies in moral/ethical philosophy. Broadly speaking, this is the study of how people ought to behave or act, sometimes in particular situations, sometimes in terms of the habits or dispositions they cultivate, sometimes as general rules. It’s a complex topic, as most topics are once you get into the finer details.
If you’ve played a fair number of video games, especially role-playing games, then you’re probably familiar with the concept of moral choice systems. The game in some way gives you options, one or more of which are “bad” or “evil,” and one or more of which are “good,” and perhaps with a neutral option in the middle. There’s been plenty of criticism of these systems, but I thought it might be useful to examine why these systems are, in some sense, set up to always be inadequate.
The first problem is that it’s hard to talk about any subject without missing some level of detail, even when you’re an expert on it. And most game developers didn’t spend years studying moral philosophy before getting into making games (because, obviously, why would they?). So now imagine taking that problem and compounding it by trying to jump through a bunch of other hoops at the same time.
So making a good moral choice system is tough. In fact, “tough” is putting it lightly.
Types of Moral Choice Systems
I want to begin by explaining a few different options for conveying moral choices, and explaining why each system might be appealing from a development standpoint. This is not meant to be an exhaustive list of every moral choice system that has existed or could exist, but a broad overview of the most common ones.
- The Binary Perfection System – An easy way for games to incorporate moral choices is to give you a set of options. Let’s simplify it by saying you get three choices: good, evil, and neutral. You choose your option depending on your own playstyle (role-playing, personal morality, curiosity/completion, etc.). The game then gives you “good points” or “evil points” depending on your choice (or no points, if you chose the neutral option). Usually there is a single bar that measures your morality, and you’re incentivized to max out your points in one direction or the other. You can find examples of this system in many of the BioWare RPGs, such as KOTOR and Mass Effect, as well as in certain open world games like Infamous.
- This system makes a lot of intuitive sense. It’s relatively easy to make. It’s easy for players to understand. It gives players a reason to care about their choices, while usually providing both immediate feedback (i.e. the players know which choice was “good” and which was “evil”) and long-term feedback (i.e. the story can take into account previous choices). From a writing perspective it also makes branching much easier. The fewer choices available, the fewer branches players can create, which means you can create narrative experiences that feel different (in big or small ways) without having to control for thousands of tiny decisions. (A minimal sketch of this kind of point tracking appears just after this list.)
- The Reputation System – Another way to accomplish the above goal without falling into the good/evil dichotomy is to replace the “morality” bar with reputation. When the player performs good actions, they get a good reputation, which often comes with a handful of rewards. When the player performs bad actions, they get a bad reputation, which often comes with penalties. Often a developer will try to present these choices through some lens that makes both appealing: the bad choice leads to bigger immediate rewards with long-term drawbacks, while the good choice gets smaller rewards but long-term benefits. Another common feature is that a sufficiently bad reputation will lead to the character being hounded by some kind of enemy until they improve their reputation, which can be either a fun way to add danger to a playthrough or a major headache, depending on how it works and what the player wants to accomplish. Examples include games based on Dungeons & Dragons rulesets such as Baldur’s Gate, the Karma system in Fallout 1–3, and Red Dead Redemption 2.
- Reputation systems help further condense story-writing problems. It’s still necessary to make changes, which is certainly plenty of work, but reputation systems don’t have to play as significant a role in altering the broader narrative as playing a good/evil character might. The system also allows developers to avoid the simplistic (or sometimes confusing) narratives that the good/evil system can push characters towards. And it provides slightly stronger incentives towards being neutral, since the player isn’t required to worry about losing out on major abilities or plot points. Usually the only drawback to having a good or bad reputation is losing particular companion characters.
- The Role-Playing System – Coming up with a name for this system is tough. But all it means is that rather than offering a small number of choices which have more profound effects on the broader narrative and are tracked through points, the game offers a wider variety of dialogue choices in return for having no overt morality check. While choices may still be good or evil or neither, their effects are often more ambiguous (or even accomplish nothing at all). Instead, the focus is on giving the player a wider variety of options for how they want to play – meaning there are more flavors of good and evil than just “selfless saint” and “sadistic murderer” – and how they build their character. The game relies more on the player’s imagination and investment. Examples here include Fallout: New Vegas (which technically retained the Karma system, but mostly gutted it) and Fallout 4, and the Elder Scrolls games. You might also add the alignment system from Dungeons & Dragons, along with games built on it, which lets players select their morality from the start and encourages them to play according to it.
- These systems can require more work on the front-end, since more dialogue options need to be written for various encounters. But they also make controlling individual encounters much easier. Rather than needing to worry about how one choice will affect an interaction later on, the developer can essentially isolate these interactions. The major drawback here is that it can also mean that player input can feel meaningless, to the point that these choices can boil down to who completes the quest, rather than how the quest gets completed. But it can make up for that by letting players feel that the character they’re playing is more real.
- The Isolated Choice System – This system is the most limited in terms of giving options to the player, to the point it might not even feel like a moral choice system at all. The point of these systems is to focus “moral choices” on a small set of options presented at particular points in the game. Sometimes these are plot points, sometimes they’re mechanics for determining player upgrades. Usually this system is used in games that have RPG elements, rather than full-fledged RPGs. What happens in these systems is that at some determined point you are faced with a set of options, usually two, and each option has some immediate and occasionally longer-term effect on your character or the story. Examples can be found in the later entries of the Far Cry series (3, 4, and 5, specifically) and Bioshock.
- The obvious benefit here is that there’s even less branching that needs to be taken into account. Because the underlying game usually only has RPG elements, the main character either already has an established personality or is nothing other than a blank slate, and your job as the player is to make choices depending on whatever criteria you prefer. When the narrative needs to branch out, those changes can be small – such as changing the ending you get – or can be brief diversions that lead the player back to the same main path.
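To make the first of these concrete, here is a minimal sketch of the kind of single-bar point tracking the Binary Perfection System describes. Everything in it – the names, point values, and thresholds – is invented for illustration; no particular game’s implementation is being quoted.

```python
# A minimal sketch of a single good/evil bar. All names, point values,
# and thresholds are illustrative, not taken from any real game.

class MoralityMeter:
    """Tracks one good-to-evil axis, as in the Binary Perfection System."""

    def __init__(self) -> None:
        self.score = 0  # positive = good, negative = evil

    def record_choice(self, points: int) -> None:
        """Add 'good points' (positive) or 'evil points' (negative)."""
        self.score += points

    @property
    def alignment(self) -> str:
        # Hypothetical thresholds; a real game would tune these and
        # often gate abilities or story beats on them.
        if self.score >= 50:
            return "good"
        if self.score <= -50:
            return "evil"
        return "neutral"


meter = MoralityMeter()
meter.record_choice(+10)  # spared the prisoner
meter.record_choice(-75)  # robbed the orphanage
print(meter.alignment)    # "evil"
```

Notice how naturally this design incentivizes maxing the bar out: once anything is gated on the “good” or “evil” thresholds, players are pushed to commit fully to one end.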
Where Do Games Go Wrong?
Now that we have a typology of moral choice systems, let’s examine where they tend to go wrong.
Greyness
The obvious thing to point out is how games tend to favor binary choices. That is, there’s a good choice and a bad choice. While you can choose a neutral option or mix up your approach, games often discourage players from doing this. So a common point of contention is to say that developers need to better incorporate neutrality, and moral ambiguity more generally. Rather than compelling players to be all-good or all-evil, games should make each individual choice feel relevant. Occasionally, you may see Knights of the Old Republic 2 and The Witcher games held up as examples of getting morality right, or at least doing it much better than other games.
I think this criticism is correct, but misses a core failing of moral choice systems more generally. Put another way, even if games better incorporate greyness, they will still fall short.
The problem is that morality in and of itself tends to work within this binary spectrum. Open up any book on moral philosophy and you’ll see that same dichotomy. It will have different terms: good vs. evil, right vs. wrong, virtue vs. vice, justice vs. injustice. These are all different ways of talking about the same basic subject, but they all rely on two endpoints with some kind of middle ground.[1]
Okay, but if we expand that middle ground and explore it more thoroughly, that will fix the problem, right? Not really. That middle ground is still only going to hold value as long as it exists as middle ground. Which means we still need to be able to adequately define the good and evil on the two opposing ends.
Mostly, what makes this greyness appealing is just that it’s different, without necessarily being good. Even KOTOR 2 and The Witcher still fall short, because their defining feature is that they try to point out how blindly pursuing good or evil isn’t the only choice, and maybe isn’t such a great idea. Which brings us up to the level of introductory moral philosophy, not to expertise. It feels like a breath of fresh air compared to what we are used to, but the air is still stale.
Let’s put it another way: let’s imagine a world where games start to more fully explore “moral ambiguity” by making neutrality a more viable, sometimes even the preferred, option. More and more games stop pushing players toward good or evil, and instead push them toward remaining neutral. The result of all of this is that we’ll be back at square one. All games with moral choice systems are going to feel the same, and we’ll be back to complaining about how annoying these systems feel.
Delayed Gratification
You’ve probably heard of this famous psychology experiment. A researcher brings a small child into a room and puts a marshmallow on a plate. The researcher leaves, but before doing so tells the child that if they don’t eat the marshmallow, when the researcher comes back the child will get two marshmallows. It’s a classic experiment about patience.
Most games tend to present moral choices in this same light. Good and bad choices are often presented through the lens of immediate gratification or delayed gratification, usually with the result that the delayed gratification is ultimately greater.
Allow me to provide the example of Bioshock. In Bioshock, players need to collect a resource called ADAM. This resource is necessary for buying various upgrades. You collect ADAM through characters called Little Sisters. You have a tough fight against an enemy, at which point you interact with the Little Sister and are given a choice: you can “save” the Little Sister, which gives you a moderate amount of ADAM, or you can “harvest” the Little Sister to get twice as much ADAM.
So it seems like you’re being asked to either handicap yourself to be good, or do evil to give yourself an advantage, right? Except with every three Little Sisters you save, you get a gift. That gift contains a nice bundle of ADAM, plus some unique upgrades and resources. Once you account for these gifts, the difference in the amount of ADAM you get is almost minuscule, and you lose out on the extra stuff if you choose to harvest instead. So now it’s pretty much just about temporarily handicapping yourself, and then getting a bigger reward later.
Which captures the basic idea behind the marshmallow experiment, but makes for an uninteresting moral choice system. Because there’s no real difference between the morally right choice and the rational choice.[2]
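You can see how small the gap is with some back-of-the-envelope arithmetic. The figures below are the commonly cited Bioshock values (roughly 160 ADAM per harvest, 80 per rescue, and a gift bundling about 200 ADAM per three rescues); treat them as approximations for illustration.

```python
# Rough comparison of the two Bioshock strategies. Figures are the
# commonly cited values and should be treated as approximate.

HARVEST_ADAM = 160
RESCUE_ADAM = 80
GIFT_ADAM = 200          # ADAM bundled in each gift (gifts also
SISTERS_PER_GIFT = 3     # include unique tonics and resources)

def total_adam(sisters: int, harvest: bool) -> int:
    """Cumulative ADAM after dealing with a number of Little Sisters."""
    if harvest:
        return sisters * HARVEST_ADAM
    return sisters * RESCUE_ADAM + (sisters // SISTERS_PER_GIFT) * GIFT_ADAM

n = 21  # roughly a full playthrough's worth of Little Sisters
print(total_adam(n, harvest=True))   # 3360
print(total_adam(n, harvest=False))  # 3080, plus the unique gift items
```

A patient player who runs these numbers sees that being “good” costs almost nothing, which is exactly why the choice stops feeling like a moral one.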
The problem isn’t that these two things should never coincide. In fact, many moral philosophers try to argue that “true reason,” if we could all understand it, would tell us that we would be happiest being moral. The problem is that the system doesn’t require any real thought. We generally aren’t impatient children when we play games, and can often guess when we’re being given a delayed gratification problem.
A good morality system needs to make choices of good and evil feel different in their impact. Namely, it needs to tap into what the player actually wants to do, or believes is right and wrong, rather than just tapping into their ability to carry out a cost/benefit analysis.
By way of an example, I’ll point to a part of one of the most important philosophical texts: Plato’s Republic:
Fairly early in the book, one of the characters in the dialogue wants to know whether there’s any reason to be a good person if there’s no reward for it. So he poses the following question. Imagine a person who is perfectly just, someone who believes in always doing the right thing, but the entire world perceives them as unjust and treats them that way, so they end up being punished, tortured, and killed. Conversely, imagine a person who is perfectly unjust, always hurting and taking advantage of other people, but they always get away with it, so the entire world perceives them as just and treats them that way, and they end up with power and respect and money and everything else they could ever want.[3]
Which is it better to be? The idea is that if morality is actually valuable on its own, we should want to be the perfectly just person.
Trying to explore morality through that lens would lead to a much more interesting outcome, because it would require us to really sit and think about why we are moral. Delayed gratification does not ask us to really think about morality in any way.
Moral Bank Accounts
One of the core problems with how people often think of morality is the idea that good and bad actions can essentially “cancel out” one another. If you do something bad, then by doing enough good actions you can essentially “make up” for it. This view isn’t inherently wrong; the idea that people can change and become good is an important part of the theory of rehabilitation. The problem comes when good actions are treated not as a rough road towards a goal, but as indulgences.
A brief history lesson to explain this term. A part of traditional Catholic teaching is that certain actions can reduce the punishment for sin you must suffer in the afterlife. Sometimes these can be good works that you yourself participate in (prayers or charitable actions), and sometimes these can be good works done on your behalf by someone else (an example might be praying for a deceased family member). These works are called “indulgences.”
During the Middle Ages, church officials began abusing this system by “selling” indulgences. If, say, you were on your deathbed and worried about the afterlife because you hadn’t exactly been the most upright person, then you might receive a visit from a pardoner who would offer you salvation if you simply repented and left a large bequest to the Church. These kinds of abuses went on for a while, fueling in part the Protestant Reformation. Once they were eventually curtailed, the scar was left on the language itself, as we commonly associate “indulgence” with partaking in simple pleasures that we might otherwise feel guilty about.
Why bring this up? Because that old “selling of indulgences” ends up being precisely how many games orient their morality systems. Different actions end up giving you good and bad alignment points, which can then be cancelled out by later actions. So you can kill an innocent person and earn evil points, only to turn around and donate money to a charity and be regarded as a paragon of virtue. And, of course, the same holds in reverse: a vast number of good actions can be undone by a small handful of bad actions.
Of course, the reason that systems are set up this way is that it’s hard to imagine another way of keeping track of these things. How else might a game keep track of your good and bad actions? It is almost obvious to hand out points for each action, and associate more good points with good alignment, and more bad points with bad alignment. And how do you compare those good points and bad points? Well obviously you should treat them like positive and negative numbers, and just add them all together. Whatever you have as your sum tells you whether you’re good, bad, or neutral.
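Reduced to code, the whole scheme is just a signed sum. The action names and point values below are invented for illustration:

```python
# The signed-sum "ledger" logic described above, in miniature.
# Action names and point values are invented for illustration.

actions = [
    ("killed an innocent", -100),
    ("donated to charity", +40),
    ("donated to charity", +40),
    ("donated to charity", +40),
]

balance = sum(points for _, points in actions)

if balance > 0:
    label = "paragon of virtue"
elif balance < 0:
    label = "villain"
else:
    label = "neutral"

print(balance, label)  # 20 paragon of virtue (despite the murder)
```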
But this setup treats morality like it is a bank account. You essentially have “credit” (good points) or “debt” (bad points). You have the choice to hold onto or spend that credit as you see fit, so that you can do some bad stuff without having to be a bad person. Or if you’re in debt, you can find a way to repay that debt so that you can work your way back up to being neutral, or maybe even being good. This problem brings us back to the greyness issue from earlier.
This bank account system essentially suggests that all that matters is what people are like right now. Your past actions don’t count toward who you are. But when you stop to think about it, that doesn’t really make sense. Sure, people can change, and it might be a good idea to care more about how people act now compared to how they acted in the past. Yet we can’t just pretend those past actions didn’t occur. So if someone commits murder, and then feels regret and tries to be a better person, we tend to regard them in a much more complicated way than just “they do good things now, so obviously they’re just a good person.” It’s a much more complex weighting of their past and present selves. That weighting isn’t captured by these point systems. Instead, all the game can do is look at our moral bank account and give us a label depending on how big our balance is.
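For contrast, here is one possible alternative, purely a sketch of my own: keep the whole history and derive more than one signal from it, rather than collapsing everything into a single balance. The half-life weighting and the “worst deed” floor are invented stand-ins for that more complex weighting, not anyone’s established design.

```python
# A sketch of keeping history instead of a single balance. The decay
# weighting and "worst deed" signal are invented for illustration.

from dataclasses import dataclass

@dataclass
class Deed:
    description: str
    points: int   # signed, as in the ledger above
    age: int      # how long ago it happened, in in-game days

def recent_trend(history: list[Deed], half_life: float = 30.0) -> float:
    """Recent deeds count more; old ones fade but never fully vanish."""
    return sum(d.points * 0.5 ** (d.age / half_life) for d in history)

def worst_deed(history: list[Deed]) -> int:
    """The murder stays on the record no matter what the sum says."""
    return min((d.points for d in history), default=0)

history = [
    Deed("committed murder", -100, age=90),
    Deed("donated to charity", +40, age=5),
]
print(round(recent_trend(history), 1))  # ~23.1: behaving well lately
print(worst_deed(history))              # -100: the past still counts
```

The narrative, or individual NPCs, could then weigh those signals differently: a priest might care about the recent trend, while a victim’s family cares about the worst deed on record.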
Virtues
Games can be weird. The player character is often encouraged to search every nook and cranny for valuables that might be useful later on. Which can often mean barging into a stranger’s house, rummaging through their cabinets, and then chatting with them to see if they have a quest they’d like to offer.
Of course, some games try to account for this by marking certain items or containers, telling the player that attempting to open that box or take that book would be theft. Usually, stealing has a negative consequence – if you’re caught doing it. But if you can get away with it, you get free stuff. And that free stuff can be valuable, which can lead to a strange set of incentives where the main character can be a hero who saves a damsel in distress and then breaks into her house later to steal her enchanted ring.
The problem is that games don’t really have a way of resolving this tension. From a moral standpoint, the answer is clear: you’re a bad person if you go around stealing from other people, even if you also save the world (see the moral bank account section above). But games can’t quite handle that, so they’re forced into one of two paths:
- Any time you steal, you get evil points, even if you’re not observed doing it. That makes intuitive sense, because you’re a bad person. But it makes less sense from the game’s perspective, because it means characters will start to treat you differently based on information that they don’t actually possess. It’s as though some divine being is continually pointing a finger at you for being a thief.
- Stealing while being observed leads to earning evil points and often some immediate punishment (e.g. calling the guards on you). This makes sense in explaining how people might learn that you’re bad, but it brings us back to the problem that now the game regards the player as good, despite the player actually being bad or morally neutral.
The issue stems in part from limitations of the platform itself. It’s hard to come up with a clear way of distinguishing between bad things done publicly and bad things done secretly. Because part of the point of a morality system is not just keeping track of the player character, but how the world reacts to the player character. And secret actions, by their very nature, shouldn’t impact how the world reacts, unless by some method that secret is revealed.
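One hedged sketch of a middle path: track two separate quantities, what the character is actually like (the “divine” view) and what the world has witnessed, and let only the second drive NPC reactions. This is an illustration of the distinction, not a claim that any shipped game works this way.

```python
# Separating the two ledgers: who you are vs. what the world knows.
# Entirely illustrative; no existing game is being quoted here.

class MoralState:
    def __init__(self) -> None:
        self.character = 0   # every act counts, observed or not
        self.reputation = 0  # only witnessed acts count

    def act(self, points: int, witnessed: bool) -> None:
        self.character += points       # who you are changes either way
        if witnessed:
            self.reputation += points  # who people think you are
                                       # changes only if someone saw it

state = MoralState()
state.act(-20, witnessed=False)  # stole the enchanted ring, unseen
print(state.reputation)  # 0   -> the town still treats you as a hero
print(state.character)   # -20 -> but you are, in fact, a thief
```

The character value could then feed things NPCs plausibly can’t see (endings, a companion’s slow disillusionment, a later “secret revealed” event), while reputation drives day-to-day reactions.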
But more broadly, the issue stems from how we often think about morality. We tend to associate moral goodness and badness with actions. But just as important in many cases are what we might call “virtues.” Virtues are essentially “dispositions” or “habits.” To understand the distinction: most people think that if you do good things, then you’re a good person. The philosophy behind the virtues is that the causal arrow is reversed: when you’re a good person, you do good things.
This distinction may seem odd, as though it’s suggesting that people are innately good or bad. But that’s not the point. Instead, it’s to illustrate that we should think about why we do good things, and not just whether we do good things.
A useful example: two shopkeepers sell wheat.[4] They both have scales to weigh the wheat that customers buy. Both of them keep their scales “honest,” meaning the scales aren’t rigged in some way to make things appear heavier than they actually are (which would mean the customers were being cheated). One shopkeeper keeps his scales honest because he’s fearful that he might be caught if he cheats his customers, and thus be punished. The other shopkeeper keeps his scales honest because he believes it’s the right thing to do.
Are both men equally good? If we measure goodness solely by actions, as video games do, then the answer is “yes.” But it’s likely you might have thought they aren’t equally good: the second shopkeeper is better, because he’s not just doing the right thing, but doing it for the right reason. That is what we mean by virtue. The second shopkeeper is an honest guy, and so he tries to deal honestly with his customers.
It’s hard for games to capture these virtues, though, because they can’t really ask you the player why you’re committing certain actions. All they can really account for is the actions themselves. But in turn, that means games necessarily miss out on what morality means. They’re handicapped in how they’re able to measure morality and determine whether the player character is a good or bad person, and in turn figure out how the world should react to the character.
Omniscient Characters
I’ve mentioned a few times already how good and evil actions often impact how characters perceive you and how the world as a whole reacts to you. Developers certainly want to make sure that there is some kind of feedback system for your actions, otherwise the moral choice system will feel pointless. But the problem is that how the player character is perceived is almost always dictated by their reputation, and that reputation applies everywhere.
Let’s set aside townspeople recognizing you even if you completely change your face, hair, and clothing, as though they can tell who you are by your soul. That can of course be a fun quirk of the game’s limitations to point at, but it doesn’t address any serious issues about morality and reputation. Instead, it’s just a constraint of the game world itself.
The bigger issue is how moral reputation commonly has to be applied universally. Saving a town from a dragon can net you a bunch of good points that increase your reputation, and people in the next town over call you a hero. That, of course, makes sense. Mostly. Enough so that you don’t have to really think about it.
But now imagine that instead, you do a bunch of small quests to help out various farmers and small families. You earn enough good points to increase your reputation. You then travel across the world map and people treat you as a hero. This doesn’t really make sense.
We could go into fine detail about information networks and how people share information, but expecting developers to recreate that kind of knowledge among NPCs would be holding them to an absurd standard. That’s ultimately not the problem.
Instead, the problem lies in how the game monitors your reputation more broadly. Enough small good actions lead to the same outcome as one gigantic good action, and the game and the characters in the game treat you the same way based on those actions. Because, of course, all the game can effectively monitor is how many points you have and your reputation level. Occasionally, a game might have competing factions, where different characters in that faction will treat you differently depending on how the faction views you. But that is merely the same problem on a smaller scale.
What happens is that the game applies a universal (or near-universal) effect for your reputation. Everyone (or nearly everyone) behaves the same way. Sure, certain characters directly affected by your deeds might have more specific comments on those deeds. But everyone else reacts the same way: if your reputation is good, all shopkeepers will give you a discount, no matter whether they should be aware of your reputation, or whether they as characters are selfish and shouldn’t care about a customer’s reputation. This omniscience of the characters in the world makes it difficult to take any morality system seriously, because it lacks a serious component of realism, one which admittedly is very difficult, perhaps even impossible, to properly capture.
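To show what even a crude improvement might look like, here is a hedged sketch of regional, non-omniscient reputation: deeds are logged where they happen, and only sufficiently “newsworthy” deeds travel. The propagation rule is invented for illustration; a real game would need something far more careful.

```python
# A sketch of non-omniscient reputation. Deeds are tracked per region,
# and only big, "newsworthy" deeds spread. The rule is invented here.

from collections import defaultdict

NEWSWORTHY = 50  # deeds at or above this magnitude travel as news

class RegionalReputation:
    def __init__(self) -> None:
        self.by_region: dict[str, int] = defaultdict(int)

    def record(self, region: str, points: int, neighbors: list[str]) -> None:
        self.by_region[region] += points
        if abs(points) >= NEWSWORTHY:      # slaying a dragon travels;
            for other in neighbors:        # running an errand does not
                self.by_region[other] += points // 2

rep = RegionalReputation()
rep.record("Riverton", +80, neighbors=["Hilltop"])  # slew the dragon
rep.record("Riverton", +5, neighbors=["Hilltop"])   # helped a farmer
print(rep.by_region["Riverton"])  # 85
print(rep.by_region["Hilltop"])   # 40: they only heard about the dragon
```

Even this toy version separates the dragon-slayer from the helpful farmhand, which is exactly the distinction the universal reputation bar erases.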
What Would a Good Moral Choice System Look Like?
So if these are the problems, how would you go about making a good moral choice system? Unfortunately, I don’t really have an answer to that question. However, I do have a few thoughts that might be of some use.
Keep in mind: Most games are not being designed to explore morality.
This rule is important to remember. For players/reviewers/critics, it’s useful to keep in mind because it helps to explain why moral choice systems tend towards simplistic rulesets that don’t really accomplish anything interesting. Most developers aren’t putting these moral choice systems in because they want to examine what morality means, or how players relate to making moral choices in video games. The moral choice systems are serving some secondary purpose. This holds true even in games that highlight moral choices: those systems are more about providing content or encouraging players to keep playing (sometimes even replaying) the game.
There are, of course, games that are indeed constructed around the problem of trying to explore something about morality. These investigations don’t always result in good systems, but they do at least help push us forward a bit. Examples of games that try to investigate morality, especially from the perspective of how the player relates to the moral choice system, are Ultima IV, Black & White, Fable, and Undertale. In addition, games that are dedicated to investigating morality, but from a narrative standpoint, include the Witcher series and KOTOR 2 (as examples that include elements of player choice), and SOMA and The Last of Us (as examples that don’t include player choice).
So if a developer wanted to know how to make a good moral choice system, a good thing to do would be to step back and ask: “what’s the point of having a moral choice system in your game?” If the answer has to do with some other element of the game, such as offering choices or branching narratives, then it might be worthwhile to ask if the game even really needs a moral choice system in the first place. In other words, the moral choice system doesn’t necessarily belong.
Instead, you should be answering this question with what you want to say about morality, or how you think players should interact with the game on a moral level. There’s no guarantee that the result of this process will be a success. But it at least helps push your game in the right direction, whether that means abandoning the system altogether or developing one that is actually important to the game.
Keep in mind: Properly capturing the complexity of morality is really tough.
This should hopefully seem obvious at this point, but it’s useful to reiterate. If you think it’s easy, try reading some moral philosophy. It’s easy enough to react to a situation – whether real or hypothetical – and decide what you think about it. It’s much harder to A) ask yourself why you had the reaction you did, B) step back and examine whether that reaction is right, C) come up with a coherent set of principles to determine how you should react, and D) compare that set of principles with other sets of principles.
Most people just don’t care to do this stuff. And they don’t really need to (at least, in-depth conversations on the underlying principles of moral philosophy are rare in our day-to-day lives). So it’s no surprise that most developers don’t devote a whole lot of time to investigating this subject in detail either.
But if you, as a developer, intend on really exploring morality in your video game, then you really need to do some reading. And I’ll warn you: it’s tough. I know because I teach it for a living.
There is a small silver lining to that problem, however. There are a lot of unique problems in moral philosophy that are ripe for exploration. You could try solving one of the problems laid out above, or you could examine questions about personhood, generational justice, loyalty, liberty, and so on. There are so many questions that feel small and esoteric within the context of moral philosophy, but can be given new life when examined through the lens of games and narratives. You just need the willpower to dig through to find the gold.
And now I bring myself to the most important point:
Keep in mind: Many of these problems are the result of constraints of video games themselves.
I’ve been talking about all of these problems, but a common thread throughout many of them is that making a good moral choice system is going to run up against the limitations of making a video game.
The amount of information that needs to be gathered and accounted for is astronomical: providing players with a meaningful set of moral choices, figuring out what effect those choices should have in both the short term and long term, and controlling for all sorts of variability in how the player will explore the world or play the game. It’s pretty much impossible to incorporate all of these systems in any fully satisfactory way.
In a way, this might feel freeing. A really good morality system isn’t really viable with current technology, or even near-future technology. So rather than worrying about making a morality system that is perfect, it’s more useful to think about how to make a morality system that can work within those limitations. Often the process of making video games is running into problems and figuring out how to work around them. So approaching the topic of a good morality system as a puzzle in need of a solution can lead to more productive outcomes.
At the end of the day, it’s about trying to get players to think deeply about moral questions. You want players asking themselves about their choices, not simply on a rational level (did this choice min-max my skills or lead me to the best endings), but in terms of what is actually right or wrong. Did I make the morally right choice? Why did I make that choice? Why am I doing whatever it is I’m doing? You want those choices to feel impactful not just on the story, but for the player. Put another way, you want players to feel good when they make good decisions, and feel bad when they make bad decisions, without them needing to be rewarded or punished. The more thought you put into the systems and how players will relate to them on this level, the more likely you are to construct a system that can address some of these problems.
Further suggested reading on morality in video games:
https://screenrant.com/video-game-morality-systems-best-worst-problems/
https://www.newyorker.com/culture/culture-desk/the-computer-game-that-led-to-enlightenment
https://www.denofgeek.com/games/the-problem-of-morality-in-videogames/
Further suggested reading on moral philosophy (definitely not an exhaustive list):
Aristotle – Nicomachean Ethics
Immanuel Kant – Groundwork for the Metaphysics of Morals
Jeremy Bentham – An Introduction to the Principles of Morals and Legislation
[1] The only exception you’ll really find is the philosophy of Friedrich Nietzsche. But it’s hard to make a morality system based on Nietzschean philosophy, since to some degree that would be an oxymoron.
[2] In a game like Bioshock, which is trying to criticize Ayn Rand’s Objectivism, this unity of morality and rationality undercuts the critique a bit.
[3] If you’re curious, you can find this material in Book II of the Republic, at sections 360a-362c.
[4] This example is adapted in part from a similar example in Immanuel Kant’s Groundwork for the Metaphysics of Morals.