- June 17, 2020
- Posted by: AIandGames
- Category: AIandGames
The Last of Us is one of the PlayStation’s most revered titles. Players must guide Joel and Ellie through a world left in ruin, as rogue factions of humanity war with one another, all the while the brain-altering fungal infection that has swept the globe transforms its hosts into violent, mindless creatures. It’s a game driven by cinematic spectacle, of human stories set against an inhuman reality. And while players take charge of the cynical and embittered Joel, artificial intelligence helps put together the rest of the dramatic performance, be it ally, enemy or infected. In this case study, we’re going to explore the inner workings of The Last of Us: the design philosophies that drove its development, the AI technologies adopted, and how developers Naughty Dog crafted an experience where players were made to feel the emotional weight of every enemy kill.
About the Game
The Last of Us is a third-person action-adventure game with a large focus on cover shooting and stealth. As they make their way across post-apocalyptic America, players come into contact with two types of opposing forces: the Hunters – humans who patrol and control regions of territory around the country – and the Infected, the mindless, crazed creatures that are all that remain of those taken by the fungal plague.
As the game began development in 2009, one of the earliest design principles was ensuring that players recognised the dramatic impact of taking another life in a world built atop those that had fallen.
“When we started prototyping the human enemy AI, we began with this question: How do we make the player believe that their enemies are real enough that they feel bad about killing them? Answering that one question drove the entire design of the enemy AI. Answering that question required more than just hiring the best voice actors, the best modelers, and the best animators, although it did require all of those things. It also required solving an AI problem. Because if we couldn’t make the player believe that these roving bands of survivors were thinking and acting together like real people, then no amount of perfectly presented mocap was going to prevent the player from being pulled out of the game whenever an NPC took cover on the wrong side of a doorway or walked in front of his friend’s line of fire.”
– Travis McIntosh, “Human Enemy AI in The Last of Us“, Game AI Pro, Volume 2, Chapter 34, 2015.
Hence there’s a need not just for the Hunters to appear intelligent, coordinated and ruthless, but also for the Infected to feel just as threatening – if not more so – thanks to their more chaotic and aggressive behaviour. On top of all of this, there is of course Ellie: the young woman Joel is tasked with escorting safely across the country to the Fireflies. Ellie is in many respects the player’s avatar within the story. Unlike Joel, who is all too aware of the horrors of the outside world, Ellie has no idea what awaits beyond the quarantine walls of Boston. She reacts to the world and the drama as it unfolds, and it was critical that players developed a relationship with her in much the same way that Joel does.
Before I can explain how the different AI characters behave in The Last of Us, I need to take a moment to explain the underlying architecture that they’re built upon. But more critically, I need to explain why Naughty Dog built it the way that they did.
The AI of The Last of Us is built using Finite State Machines (FSMs), a long-established approach to crafting AI behaviours. FSMs were popularised by Half-Life back in 1998 as a means through which to structure intelligent behaviour as individual states. This means a character could be attacking a target or searching a location until an event triggers in the game that forces the character to transition from one state to another. If you want to know more about the inner workings of FSMs, go check out my earlier video on the topic.
The AI of The Last of Us is built around the idea of Skills and Behaviours: skills are high level ideas of what a character might be doing. For a Hunter, this could be investigating a disturbance, hiding behind cover or flanking the player. Meanwhile for the Infected this could be wandering around the map or giving chase to an opponent. In each of these cases, they use smaller more specific actions in the world, be it moving to locations, interacting with or reacting to objects in the world in order to make that skill look intelligent. And that’s where behaviours kick in.
Behaviours are specific concrete actions that all characters might execute, but how they do so will differ from one character to another. Hence if a character is moving from A to B or attacking the player with a melee attack, how they complete those actions and the animations performed will differ if they’re a human Hunter or an infected Clicker. The idea is that behaviours are reusable, and it’s only when they’re executed by different characters that you see how the exact same action is performed differently.
Each skill – which acts as a state within the finite state machine – contains its own state machine comprised of behaviours. This hierarchical FSM is a well-worn technique for behaviour management and I discussed how this is still used in games as recent as the 2016 reboot of DOOM back in episode 30. But the critical part of all this is that each skill and behaviour is modular and contained in and of itself.
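To make the skill/behaviour hierarchy concrete, here is a minimal sketch of a hierarchical FSM in Python. All of the class and function names here are illustrative assumptions for the purposes of this article, not Naughty Dog’s actual code:

```python
# A minimal sketch of the skill/behaviour hierarchy described above.
# Names (Skill, Behaviour, CharacterFSM, etc.) are illustrative only.

class Behaviour:
    """A concrete, reusable action; each archetype decides how it is performed."""
    def update(self, character):
        raise NotImplementedError

class MoveTo(Behaviour):
    def __init__(self, target):
        self.target = target
    def update(self, character):
        # Each archetype supplies its own locomotion animation set.
        return f"{character} moves to {self.target} using its own animations"

class Skill:
    """A high-level state; internally it is its own FSM of behaviours."""
    def __init__(self, behaviours):
        self.behaviours = behaviours
        self.index = 0
    def update(self, character):
        result = self.behaviours[self.index].update(character)
        self.index = (self.index + 1) % len(self.behaviours)
        return result

class CharacterFSM:
    """Top-level FSM: one active skill at a time, swapped on game events."""
    def __init__(self, skills, initial):
        self.skills = skills
        self.current = initial
    def handle_event(self, event, transitions):
        # Transition table maps (current skill, event) -> next skill.
        self.current = transitions.get((self.current, event), self.current)
    def update(self, character):
        return self.skills[self.current].update(character)
```

The same `MoveTo` behaviour can then be reused inside a Clicker’s wander skill or a Hunter’s flank skill, with only the animation layer differing per character.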
By building a more modular and de-coupled system, designers could iterate extensively without overwhelming the programmers with feature requests or the need for bespoke tweaks in certain parts of the code. This allowed a lot more development energy and time to be focussed on playtesting each character type: ensuring it works as intended, refining features, rapidly prototyping new ideas, or just outright scrapping ones that weren’t working – all the while striving to achieve the design goals I mentioned earlier.
“The best way to achieve these goals is to make our characters not stupid before making them smart. Characters give the illusion of intelligence when they are placed in well-thought-out setups, are responsive to the player, play convincing animations and sounds, and behave in interesting ways. Yet all of this is easily undermined when they mindlessly run into walls or do any of the endless variety of things that plague AI characters. Not only does eliminating these glitches provide a more polished experience, but it is amazing how much intelligence is attributed to characters that simply don’t do stupid things.”
“As a general rule, characters don’t need complex high-level decision-making logic in order to be believable and compelling and to give the illusion of intelligence. What they need is to appear grounded by reacting to and interacting with the world around them in believable ways.”
– Mark Botta, “Infected AI in The Last of Us“, Game AI Pro, Volume 2, Chapter 33, 2015.
But even with a good suite of tools available, it doesn’t mean that the game will come together without continued experimentation. As Naughty Dog sought to achieve their design vision, numerous elements of the friendly and enemy AI were drastically altered or revised during production, with some of the enemy archetypes and even Ellie’s final behaviour system only coming together in the closing five months before the game launched on PlayStation 3 back in 2013.
So now that we know how the core architecture of the game’s AI works, let’s take a look at how each of the AI characters was designed in The Last of Us.
So first let’s take a look at the human enemies, known as the Hunters. The Hunters are designed such that each and every one of them should be a credible threat: without consideration and care they could easily kill you, but they also put up a fight in return and are not mere cannon fodder. The Hunters are designed to appear coordinated, to systematically hunt down and eliminate the player all while caring for their own personal safety, but also crafted such that they communicate their behaviour, allowing the player to respond in kind.
One of the first things to talk about is how the Hunters and other AI can detect the player in the world. The Hunters have both visual and audio sensors to detect disturbances; they run on what is arguably the default audio and visual sensitivity, and as we’ll see later, the sensitivity of these sensors is changed quite drastically for each of the four Infected archetypes.
First up, let’s talk vision. NPCs use a variety of view cones – a point discussed heavily in recent episodes – for detecting the player within the space. The Last of Us originally used the same view cones adopted by Naughty Dog for the Uncharted series, but this didn’t really work: players were spotted too quickly at distance but largely went unnoticed at close range. They didn’t fit the pace of the game, and hence the view cones used in The Last of Us, much like those in Splinter Cell: Blacklist, are not actually cone-shaped. While in Splinter Cell they look more like a coffin, in The Last of Us it’s sort of like a keyhole, but in each case it’s the same concept: both provide greater peripheral vision while distant vision is much narrower.

In addition, like other stealth games, the player isn’t spotted immediately upon standing in the view cone; you have to stay there for a period of time before the Hunter will see you, typically 1-2 seconds, with it typically being shorter in combat to reflect the Hunter’s heightened state of awareness. Plus, much like Splinter Cell, any NPC that has the player in their view cone runs an additional sight test before the detection timer can increase: each NPC runs a raycast from itself to a position on Joel’s body to see if anything blocks its vision. Originally a character would run raycasts to each joint in Joel’s body, but this was rather inefficient. Eventually, the team settled on two conditions: the raycast is aimed at the centre of Joel’s chest if the player has not been detected yet, or the top of his head when in combat. It’s simple, but it was found to work really well. In addition, Hunters can hear noises at different levels of severity and priority, but I’m going to come back to this when I discuss the Infected, given sound is more critical to them than vision.
Referring back to the Skills discussed earlier, the Hunters have a variety of different skills they can execute (see above). Most of them are built around combat, with ranged and melee attacks, flanking, and advancing behaviours. But most combat sequences in The Last of Us start with the player in stealth, and it’s only if they’re detected that many of these skills kick in, so let’s focus on the two that are critical for stealth: Investigate and Search.
Investigate is a skill used when a Hunter is checking out a disturbance: this could be a bottle or brick that’s been thrown making a noise, or a flashlight they see in the distance. Meanwhile, Search is used when the player has been detected, or the Hunters already know the player is nearby, and they start to systematically explore the world to find you. Each of these skills relies on four key pieces of information:
- A Combat Coordination System that gives roles to each character, deciding which behaviours to execute.
- A Navigation Map which shows the fastest way to navigate around the world within proximity of the character.
- An Exposure Map that sits on top of the navigation map that shows information about what the NPC can see from their current position.
- And lastly a Cover Point System that identifies not just good cover points for combat, but also points for playing specific animations and behaviours.
So let’s walk through how these systems allow the Investigate and Search skills to work. When a Hunter needs to execute the Investigate behaviour, it requests that the Combat Coordinator give it the role of Investigator. The system limits how many of a given role can be assigned, ensuring that if you throw a brick, 5 enemies don’t all investigate it at once. So while one NPC becomes the investigator, others may stand around or continue as normal. The NPC with the investigator role will then ask the Cover Point system for what is known as an open post: a location near the point of interest that satisfies specific criteria. As we’ll see in a moment, the system can also be used to request a post that provides good tactical cover, but in this case, it’s a location reachable through the navigation system that is a good place to play the investigation animations.
Meanwhile for a Search behaviour, the big difference is that the NPCs already know that the player is nearby, they just don’t know where. This utilises the coordination system to have NPCs move around the map and explore it, but how they explore it needs to be systematic. If they just wander around in a clump, then it won’t look realistic or give the player a challenge. Hence the game relies on the Navigation and Exposure Maps I previously mentioned. These are grids that sit atop the navigation mesh – the data structure that allows NPCs to calculate paths through the environment. The navigation map allows for quick and cheap calculations of whether a path exists to locations in the immediate area around an NPC, while the exposure map shows what parts of the world nearby are visible. Using this data, the system generates a search map, which shows the areas of the exposure map that are not visible but can be reached on foot. This tells the NPCs what areas of the world need to be explored, because they have no coverage of them at this time. At this point, the coordination system sends NPCs to search those spaces, be it around corners or behind cover. Hence if the player stays in one spot, the Hunters will eventually hunt you down, forcing you to keep moving between points of cover. In both cases, if the Hunters then spot the player, they will go into combat, so let’s walk through how cover is selected and how the combat coordinator keeps the enemies working together.
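The search-map construction described above boils down to a set operation over the two grids: keep the cells that are reachable but not exposed. Here’s a toy sketch; the grid representation and the naive round-robin assignment are my own assumptions.

```python
# Hypothetical sketch of combining the navigation and exposure grids into a
# search map: cells that are reachable on foot but not currently visible.

def build_search_map(reachable, exposed):
    """Both inputs map grid cell -> bool; returns the set of cells to search."""
    return {cell for cell in reachable
            if reachable[cell] and not exposed.get(cell, False)}

def assign_search_targets(search_cells, searchers):
    """Naive round-robin: the coordinator spreads NPCs over the unseen cells."""
    cells = sorted(search_cells)
    if not cells:
        return {}
    return {npc: cells[i % len(cells)] for i, npc in enumerate(searchers)}
```

As the Hunters move and their exposure maps update, cells drop out of (or back into) the search map, which is what drives the player to keep relocating between points of cover.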
Once the player is spotted, we return to the more combat-focussed skills available to the Hunter. If they want to Flank the player, go into gun combat or advance to tactical locations, they need a rich understanding of where the player is, what areas provide good cover, and where the player is aiming at that point in time. The systems I previously mentioned help bring that together. First of all, an enemy might need cover, so how do they know where is best for them? Again the Cover Point system is used, but the criteria have changed: we no longer want an open post, we want a cover post that gives the character some protection. The game gathers the 20 closest cover points within a radius of the character, then runs 4 raycasts per piece of cover to assess whether the player could shoot the character from that position. If it’s deemed a safe location, it is then ranked based on the type of cover requested, whether there is a path to reach it that doesn’t require the AI to walk in front of the player, and whether it is a good place to hide out or attack the player from. The game then simply picks the cover post with the highest score. Given the calculation is relative to the player, a post that is useful now might be deemed useless 5 seconds from now as the player moves around, so future post calculations will reflect the pace of the battle.
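A toy version of that scoring pass might look like this. The 20-candidate limit and rejection of exposed points come from the text above; the scoring weights and function names are invented placeholders.

```python
import math

# A toy version of the cover-scoring pass described above.
# Weights are placeholder assumptions, not Naughty Dog's values.

def score_cover_point(point, npc_pos, player_pos,
                      is_safe_from_player, want_attack_post):
    if not is_safe_from_player(point):   # stands in for the 4 raycasts
        return None                      # exposed cover is rejected outright
    dist_to_npc = math.dist(point, npc_pos)
    dist_to_player = math.dist(point, player_pos)
    score = -dist_to_npc                 # prefer nearby cover
    if want_attack_post:
        score -= 0.5 * dist_to_player    # attack posts favour closing distance
    else:
        score += 0.5 * dist_to_player    # hiding posts favour keeping distance
    return score

def pick_cover(candidates, npc_pos, player_pos,
               is_safe_from_player, want_attack_post):
    best, best_score = None, None
    for point in candidates[:20]:        # only the 20 closest are considered
        s = score_cover_point(point, npc_pos, player_pos,
                              is_safe_from_player, want_attack_post)
        if s is not None and (best_score is None or s > best_score):
            best, best_score = point, s
    return best
```

Because the safety test and scores depend on the player’s current position, re-running `pick_cover` a few seconds later can return a completely different post, which is exactly the pacing effect described above.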
Now with the cover established, how do the Hunters better coordinate their attack? One of the first things that happens is that the game creates a reference to the location of the player: a data packet is generated that retains the location of the player and the timestamp at which it was generated. This is useful for measuring how long it’s been since the player was last spotted, and should another NPC see Joel, a new data packet is generated. Whenever one of these packets is created, it is shared with other NPCs in proximity as a means to communicate where the player is. Hunters will then begin to advance towards the player’s location, some taking cover, others charging right towards you, while others may well take a flanking maneuver and catch you off guard. The Combat Coordinator balances this by assigning roles to each of the available NPCs; these roles include Flankers, Approachers, OpportunisticShooters and StayUpAndAimers. As with the Investigator role earlier, a character will be assessed on their validity for the position. In the case of the flanker, the game calculates what is known as the player’s combat vector: the direction in which the player is currently fighting. Using this vector and the navigation tools, a flanker is considered valid if it has a path that allows it to sneak up on you without intersecting the combat vector, making the flank all the more surprising when it happens.
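The flank-validity test can be sketched as a path-versus-combat-vector intersection check. The segment-intersection maths below is standard; how it is wired up here, including the vector length, is my own assumption.

```python
# A sketch of the flank-validity test: a candidate path is rejected if any
# of its segments crosses the player's combat vector.

def _segments_intersect(p1, p2, p3, p4):
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def flank_path_is_valid(path, player_pos, combat_dir, vector_length=20.0):
    """path: list of waypoints; combat_dir: unit vector the player fights along."""
    tip = (player_pos[0] + combat_dir[0] * vector_length,
           player_pos[1] + combat_dir[1] * vector_length)
    for a, b in zip(path, path[1:]):
        if _segments_intersect(a, b, player_pos, tip):
            return False   # the player would see this flank coming
    return True
```

A path that loops around behind the player passes the test, while one that cuts across the direction the player is shooting in is rejected, so valid flanks arrive from off-screen.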
This entire process works well for the most part, but it is heavily reliant on the configuration of the environment. If it’s a tight, enclosed combat area, the player is often forced to fight (either stealthily or with gunfire) fairly quickly, and combat will feel hectic and dynamic. However, it struggles in larger combat spaces and areas with greater verticality, because it is easier for the player to lose the enemy and force them to repeatedly search for the player’s new location. This is rather evident in the courtyard fight in Pittsburgh where Ellie provides overwatch, as well as the assault on the fortified houses in the Suburbs. This is an issue that was addressed to some extent during development: the game will force Hunters to converge on the player’s old position immediately should the player move more than 5 meters from the location they were last spotted, forcing the Search skill to kick in again much faster. But it’s still possible to give them the slip.
Now that we know the inner workings of the Hunter characters, let’s dig into the inner workings of the Infected. Unlike the human Hunters, there are different classes of Infected whose skills and even sensory systems differ from one another. There are the Runners: fast-moving, vicious creatures that often attack in groups. The Stalkers, who are also fast-moving and often ambush the player in darker regions. The Clickers, who are completely blind and rely on their ability to hear the player to hunt them down. And lastly the Bloaters: blind and slow-moving, but heavily armoured, requiring serious firepower to take down.
Outside of their appearance and more frenzied melee attacks, the thing that really separates the AI of the Infected from the Hunters is their emphasis on sound. As mentioned, both the Bloaters and Clickers are blind and as a result can only react to audio stimuli. The Runners and Stalkers also have limited vision compared to regular humans, meaning it takes them longer to spot you. To compensate, the Infected’s audio sensors are up to six times more sensitive than the Hunters’, meaning players really need to focus on stealth and on keeping their distance.
So let’s walk through how sound works in The Last of Us. When a sound occurs in the game, such as the smashing of a bottle or even the player’s movement, it generates a logical event in the game world. This event is broadcast over a radius assigned by designers; in the case of the Infected, the radius is multiplied by a tunable value for each character archetype. A notable example is player movement: given that the Infected are far more sensitive to movement sounds, the faster you move, the larger the radius of the sound event. As a sound is broadcast, each NPC that intersects with the radius runs a quick occlusion test by raycasting against the local environment, to see whether objects such as walls or surfaces may have blocked it – meaning that while the NPC is within the radius, the noise wasn’t actually loud enough to be heard.
Now, these logical sound events are generated for the vast majority of in-game events, with a real focus on movement and combat mechanics or in-world items such as generators and vehicles. However, there are a handful of invisible audio events that generate sound in the game world that players don’t hear when playing the game. The most interesting example is that – while you don’t hear it in-game – Joel emits a very low-level sound event for his breathing. It is designed to help the Infected find the player if they’re hiding in very close proximity.
To counteract this, players can throw bricks and bottles to create audio distractions, but you can be crafty and try other approaches, such as throwing molotovs – which can lure in and kill a blind Infected with ease – or, rather strangely, use smoke bombs. Smoke bombs, as the name implies, create a cloud of smoke that obscures the vision of NPCs as well as the player. They’re ideal for breaking line of sight with Hunters, but in theory should be useless against Clickers and Bloaters, given they react solely to sound. Yet, in a decision that arose from in-game testing, throwing a smoke bomb not only blinds characters trapped in the cloud but also makes them deaf. In addition, during development the Infected’s reaction to molotovs was altered and the damage scaled back, given you could easily wipe out a horde of Infected by throwing a molotov into the centre of a room and all of them would run towards it – given they don’t coordinate their responses as Hunters do. For the final game, each molotov can only affect a couple of non-player characters at once.
So let’s take a look at the Infected’s skill set. Unlike the Hunters, the Infected’s skills vary for each archetype but once again are ordered by priority. This makes sense given that the Bloater is more focussed on ranged combat, while the Stalkers are the only class that can ambush you and catch you off guard. But they all share a lot of common ground, such as Sleeping undisturbed, Wander-ing their local environment, Search-ing for the player and also On-Fire – which is the highest-priority skill of any Infected.
By default, an Infected will Wander given it’s their lowest-priority skill. A designer can decide for each Infected whether it follows a patrol, where it visits a series of interaction points in the map, or whether it’s allowed to move randomly. Random movement maintains a history of previously visited polygons on the navmesh, so as to minimise backtracking. One critical thing to note is that if an Infected on a patrol path is distracted by a noise or nearby combat and pulled too far away from the original path, it will proceed on a random wander afterward. This makes them less predictable and also prevents the unrealistic behaviour of a blind Clicker doubling back onto the exact same path it was on two minutes ago.
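The visited-polygon bookkeeping can be sketched as follows. This is a hypothetical reconstruction: the history length of 8 is an invented tuning value, and the navmesh is reduced to named neighbouring polygons.

```python
import random

# A minimal sketch of random wander with a short visited-polygon history,
# as described above; the history length is an invented tuning value.

def pick_wander_target(neighbour_polys, history, rng=random):
    """Prefer navmesh polygons not visited recently, to minimise backtracking."""
    fresh = [p for p in neighbour_polys if p not in history]
    choice = rng.choice(fresh if fresh else neighbour_polys)
    history.append(choice)
    if len(history) > 8:      # bounded memory of recently visited polygons
        history.pop(0)
    return choice
```

With the history in play, an Infected drifting through a room tends to cover new ground rather than shuttling between the same two polygons.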
Given so much of facing off against the Infected is stealth-based, how do characters such as the Clicker and Runner Search for the player if they hear something nearby? The Infected do not search for the player in the same way as the Hunters do. They’re less methodical in their approach, less exhaustive, and will return to their wandering behaviour after a time, but it creates something that feels more frantic and terrifying. Their Search skill is focussed upon visiting the location of a disturbance, be it a sound or an estimate of the player’s current location. But it triggers a pre-built behaviour unique to the Infected known as Canvass: a special search behaviour where an Infected will quickly and unpredictably turn and observe its surroundings. This behaviour is – like all others – tied to the available set of animations that a character has, but it uses those animations to dictate how the character will look around.
When canvassing an area, an Infected generates a grid over the local navigation map that shows which parts of the local environment it hasn’t yet looked at – think of it like the search map of the Hunters, it’s a rather similar process. The Infected then looks at the ‘search’ animations it has available, such as turning its head or swinging its body around to face a particular direction. For each animation, it calculates how much of the ‘unseen’ space it would ‘see’ if it ran that animation at this point in time, and picks the one that provides the best coverage. It repeats this process for a period of time and will resume a wander or idle behaviour afterward. It creates an unsettling, expressive performance and keeps the Infected from feeling too predictable, really selling the terror of the situation.
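The animation-selection step is essentially a greedy set-cover pick, which can be sketched like so. The animations and cells here are invented stand-ins for the real animation arcs and grid.

```python
# Toy version of the canvass step: pick the search animation whose view arc
# would reveal the most currently-unseen grid cells. All data is invented.

def pick_canvass_animation(animations, unseen_cells):
    """animations: dict name -> set of cells that animation's look arc covers."""
    best, best_coverage = None, -1
    for name, covered in animations.items():
        coverage = len(covered & unseen_cells)
        if coverage > best_coverage:
            best, best_coverage = name, coverage
    return best

def canvass(animations, unseen_cells, steps):
    """Repeat the pick, removing what was seen, for a fixed number of steps."""
    order = []
    unseen = set(unseen_cells)
    for _ in range(steps):
        if not unseen:
            break
        choice = pick_canvass_animation(animations, unseen)
        order.append(choice)
        unseen -= animations[choice]
    return order, unseen
```

Because the pick depends on whatever space happens to be unseen at that instant, the resulting sequence of head-turns and body-swings looks erratic to the player even though it is entirely deterministic underneath.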
The Infected provide a completely different combat experience from the Hunters, but much like their human counterparts, they emerged from heavily focussed playtesting during development. The Stalkers and their ability to ambush the player – the only Infected behaviour that utilises cover – only emerged in the closing months of development. The Sleep skill reduces the sensitivity of an Infected’s sensors and is used to create sentries around chokepoints and other awkward geometry. The Infected have no ability to communicate with one another, but you might have noticed they sometimes follow each other, either while exploring disturbances or attacking the player. This is thanks to the Follow skill: if one Infected is heading to a location with purpose, another Infected can decide to follow it. Think of it like a conga line, where only the Infected at the front knows where they’re actually going. Plus there are some tweaks to behaviour to balance the difficulty: when an Infected chases the player, it will periodically stop running the move behaviour and instead use the canvass behaviour. This not only allows the Infected to reorient itself but gives the player a small break to get away and compose themselves. Plus the Clickers are made far less aggressive on lower difficulty levels.
In fact, as Mark Botta detailed in his chapter of Game AI Pro Volume 2, the Clickers originally had a completely different implementation of their behaviour. The original versions of the Clickers used an echolocation system, where they made a noise that allowed them to build their own navigation and exposure maps like the Hunters, but built dynamically based on the sound bouncing off surfaces, much like a bat does in the real world. The Clickers would screech and bark more frequently to update their data models as they walked around. The problem was that during playtesting it didn’t communicate well to players, given it wasn’t evident how a character that was blind could now somehow ‘see’ them.
And so having explored the inner workings of all the enemy AI, there is still one last topic to cover: the companion AI systems, and most notably how they are applied to bring Ellie to life. As noted in Max Dyckhoff’s GDC talk in 2014, there was a very real concern that Ellie must not succumb to the same pitfalls as other companion AI, turning The Last of Us into a 12-hour exercise in frustrating and awkward escort missions. Before exploring the inner workings of how Ellie and the other companions work, it’s worth noting that what we see in the final game was a system built during the game’s closing months of development. Five months away from ship, the existing system was scrapped and a new one was crafted atop a lot of the existing tools and systems used for the Hunters and Infected. The system I’m about to describe was actually programmed within six weeks, with the remainder of development time spent working out specific design kinks and tuning parameters.
In fact, one of the early playable press demos of the game in January 2013 focussed on the sequence in the tilted skyscraper after players escape Boston’s quarantine zone. As players of the game will know, in this sequence Tess and Ellie lead or follow Joel through the building, but the combat sequence with the Infected – which ultimately acts as the player’s first test in handling Runners and Clickers – was designed to have your companions stay back, given that at the time the companion AI was unfinished and not meeting Naughty Dog’s standards.
So let’s walk through the priorities that Ellie’s AI focusses on:
- Ensuring she stays close to Joel at all times, and finding points in the world that make sense to do so.
- Giving her a useful sense of utility, be it to identify or attack enemies.
- Making Ellie interesting as a character, through her special animations and dialogue.
- And lastly ensuring the authenticity of the experience, by preventing her AI from cheating.
Ellie’s positioning is one of the most important things to get right: if she’s standing too close, it prevents the player from having the freedom they require, but too far away and it detracts from the relationship the two characters have and the need for Joel to protect her. Hence outside of combat, Ellie will typically keep up with Joel, often just behind him, while in stealth sequences she gets in close and tries to stay next to you in cover.
In order to follow Joel successfully, the game builds a follow region behind the player – a region that also works for the likes of Tess, Bill, Henry and Sam, although these other characters typically follow farther behind than Ellie does. Once this region is established, the game uses raycasts against the navmesh to find valid follow positions. Much like the cover-point system, each position is rated on things such as distance to threats and allies, the angle relative to the player’s position, and whether the position would be blocked by geometry should the player continue to head forward – especially if that geometry, such as a dividing wall, would block their view of Joel. This is an incredibly difficult problem to get right, and while the system still has its problems, it succeeds far more often than it fails.
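The follow-position rating can be sketched much like the cover scoring earlier. The weights below are placeholder assumptions, and `blocked` stands in for the navmesh raycasts against intervening geometry.

```python
import math

# Sketch of rating candidate follow positions behind the player, in 2D.
# Weights and the ideal follow distance are invented placeholders.

def rate_follow_position(pos, player_pos, player_facing, threats, blocked):
    if blocked(pos, player_pos):
        return None                      # geometry would hide Joel from her
    to_pos = (pos[0] - player_pos[0], pos[1] - player_pos[1])
    dist = math.hypot(*to_pos)
    if dist == 0:
        return None
    # Prefer positions behind the player: dot with facing should be negative.
    behindness = -(to_pos[0] * player_facing[0] + to_pos[1] * player_facing[1]) / dist
    threat_dist = min((math.dist(pos, t) for t in threats), default=50.0)
    return 2.0 * behindness - abs(dist - 2.0) + 0.1 * threat_dist

def pick_follow_position(candidates, player_pos, player_facing, threats, blocked):
    rated = [(rate_follow_position(p, player_pos, player_facing, threats, blocked), p)
             for p in candidates]
    rated = [(s, p) for s, p in rated if s is not None]
    return max(rated)[1] if rated else None
```

The rating is re-evaluated as Joel moves, so Ellie continually drifts toward a spot just behind him rather than snapping to a fixed offset.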
But when the player goes into cover, this presents a completely different challenge: cover for the Hunters is not as richly defined in the environment as that which Joel and Ellie need to sneak around. Hence the game has a runtime cover generation system for Ellie, which generates what are known as cover action packs. Action packs are typically used for environmental interactions by the player, like interacting with an object or climbing up and down a ladder. In this case, they’re created in the proximity of the player by running 80 raycasts from the player’s position, and are designed to tell Ellie which locations nearby are good points for her to hunker down. The nearby cover points found in the collision geometry are prioritised based on their distance to Joel, distance to nearby threats, and whether they’re in front of or behind the player, at which point Ellie will select and move towards that cover. At first this worked well, but it meant that Ellie was always to one side of Joel. Hence a modification was added that allowed cover action packs to be generated on the very point of cover that Joel is crouched behind. This, combined with a new animation, allows Joel to shelter Ellie’s body from harm while in cover, and it helps reinforce the relationship between the two characters.
As players progress further into the game, they are frequently pushed into situations where both find themselves in combat, and Ellie gradually takes on more agency within each combat sequence as she is equipped with weapons to defend both Joel and herself. One of the first abilities she earns – and it’s a nice touch – is that she can throw bricks and bottles at enemies. This actually cheats a little: Ellie hooks into the enemy perception systems to check if they’re about to spot Joel, and if they are, she will brandish a brick or bottle and lob it at their head. Note she doesn’t need to pick up a bottle or brick to do this.
When Ellie actually has a gun, she only uses it if the player has instigated a weapons-free situation – by going in and shooting first – or if the player is in danger. If you kill a couple of enemies and manage to hide once again, Ellie will also return to stealth and not continue the gun battle by herself. Plus, on rare occasions, if she is nearby she’ll give the player ammo and health kits. In a style much akin to BioShock Infinite’s Elizabeth, this is tied into the inventory system: if you are in need of specific supplies and are either running low or have run out, she’ll give them to you, though this happens pretty rarely.
Outside of all of the stealth and combat performance, there is still the need to give Ellie a real sense of character, and much of this is achieved using a straightforward approach of contextual animations and dialogue. There are hundreds if not thousands of lines of dialogue Ellie can run throughout the game, be it to react to objects in the world, spot enemies, be grossed out by dead bodies or react to Joel’s killing of Hunters and Infected. The reason there are so many is that, over the course of the game, the set of dialogue Ellie draws from changes to reflect her growing confidence and acceptance of the situation. It’s a really subtle part of the game, but it enriches her character all the more.
While all of the priorities I mentioned earlier largely hold true, there was one that simply could not be upheld, and through further iteration and testing it made sense to break it. Ellie’s AI does cheat in very specific situations, but – as is often the case in games – it’s done to minimise player frustration and in an effort to improve the overall experience. First of all, Ellie will teleport, but only if the player is pinned down by another character, so she can rush in to provide support. During combat when she’s armed, Ellie’s weapon accuracy and fire rate will vary between encounters, and if she shoots anyone outside of the player’s view, it doesn’t actually hurt them. However, she will frequently turn to shoot someone in Joel’s line of sight – partially to support the player, but also to reinforce that she is actually involved in the conflict. This took a lot of effort to balance and tweak, given in earlier versions of the game she turned into a killing machine. But perhaps more critically – and as many players have observed – Ellie is invisible to Hunters and Infected when the player is not in combat. This was to minimise the chances of a player’s attempt at stealthily sneaking past NPCs being ruined by Ellie accidentally running out of cover and giving away their position.
Naughty Dog strived to craft a game that delivered an emotionally resonant story built atop a series of tense and brutal combat sequences. It largely succeeds, and speaks to the creative efforts and energies of all involved during what reads like another turbulent AAA production. Despite this, there are still improvements to be made, with co-director Anthony Newman speaking ahead of the release of The Last of Us Part II about improvements made to Hunter combat systems. But of course, the proof is in the pudding, and we’ll see how players react as they take control of Ellie herself in the long-awaited sequel.
- Infected AI in The Last of Us, Mark Botta, Game AI Pro Vol. 2, Chapter 33, 2015
- Human Enemy AI in The Last of Us, Travis McIntosh, Game AI Pro Vol. 2, chapter 34, 2015
- Ellie: Buddy AI in The Last of Us, Max Dyckhoff, Game AI Pro Vol. 2, chapter 35, 2015
- Ellie: Buddy AI in The Last of Us, Max Dyckhoff, Game Developers Conference, 2014
- Programming Context-Aware Dialogue in The Last of Us, Jason Gregory, 2014