Why Oblivion's NPC Rampage Matters For The Future Of AI Development

We're in the early days of widely accessible AI built on large language models, and this level of public exposure has raised a lot of questions and concerns about the technology, the ethics around it, and its potential impact on humanity. Despite things like ChatGPT grabbing headlines in recent months, AI isn't a new concept. Code designed to think has been around for years, and one place it has always been popular is in video games.

Video game AI used to be — and, for the most part, still is — very basic. In an FPS or action game, enemies will move in patterns and target you with a certain degree of accuracy if they can "spot" you. In a sports game, they will use the skillset of that sport with varying degrees of proficiency, usually based on an actual athlete represented within the game. In an RPG, things are a little more complex. An RPG world needs to feel real, so many developers code a complex routine into the non-player characters (NPCs). They'll have jobs, go to lunch, go to bed at a certain time, and may even take a stroll in a certain location on certain days. But even all of this can feel a touch routine. So, close to 15 years ago, one well-known games company decided to take things a step further.
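The scripted routines described above boil down to a fixed timetable: every in-game hour maps to a predetermined activity, with no decision-making involved. A minimal sketch of that idea (the schedule entries and function names here are invented for illustration, not from any actual game code) might look like this:

```python
# Hypothetical sketch of a scripted NPC daily routine: each entry maps
# an in-game hour to a fixed activity, so behavior is fully deterministic.
SCHEDULE = {
    8: "work at the smithy",
    12: "eat lunch at the tavern",
    18: "stroll by the chapel",
    22: "sleep at home",
}

def current_activity(hour: int) -> str:
    """Return the activity whose start time is the latest one at or before `hour`."""
    chosen = "idle at home"  # default before the day's first entry
    for start in sorted(SCHEDULE):
        if hour >= start:
            chosen = SCHEDULE[start]
    return chosen
```

However elaborate the timetable gets, the NPC never chooses anything — which is exactly the predictability Bethesda set out to break.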

Radiant AI was supposed to be a game-changer

Back in the early 2000s, Bethesda Softworks, which is famous for the "Fallout" and "Elder Scrolls" series, decided to take this concept a bit further; it developed something called Radiant AI that was said to give NPCs actual freedom. The game world's computer-controlled people would make decisions based on their pre-coded personality and context. If an upstanding citizen was hungry, they would go and buy food. If someone less morally upstanding needed something to eat, they'd steal it. That kind of thing.
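The hungry-citizen example above amounts to goal-driven behavior: the NPC has a need, surveys its options, and picks whichever action its personality permits. A toy sketch in that spirit (the stat names and thresholds here are assumptions for illustration, not Bethesda's actual implementation):

```python
def satisfy_hunger(npc: dict, world: dict) -> str:
    """Resolve a 'get food' goal the way the article describes Radiant AI:
    an upstanding NPC buys food, a low-responsibility one steals it."""
    if npc["gold"] >= world["food_price"]:
        return "buy food"
    if npc["responsibility"] < 50 and world["food_nearby"]:
        return "steal food"  # low-responsibility NPCs ignore the law
    return "go hungry"
```

The interesting (and, as it turned out, dangerous) part is that nothing in such a system caps how far an NPC will go to close the gap between goal and reality.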

As Todd Howard explained during 2005's E3 Expo, characters would not have scripted routines, places to be, or sleeping patterns. Instead, they would have a set of "general goals" and have to work it all out for themselves from there. Dynamic conversations, based on the player's actions or current events in the game, were also included. Over 1,000 NPCs were added to the game, all of which ran on the Radiant AI system. Even some animals, like a bookstore owner's dog, used Bethesda's revolutionary AI. The NPCs would also have the same skill system as the player and could train and raise those skills.

In theory, this could have revolutionized gaming. A truly dynamic and realistic world with "living" people in it could provide tons of extra immersion, and even ramp up replayability as no two playthroughs would be the same. Unfortunately, as Bethesda found out, freedom comes with consequences.

Some NPCs took things too far

Bethesda created a world full of NPCs whose sole purpose was achieving their daily goals through whatever means were available to them. As anyone who's played "Morrowind," "Skyrim," "Oblivion," or any other Bethesda game will know, the worlds the company creates aren't stuffed with moral people. There are bandits, conmen, murderers, thieves, drug addicts, and many other unsavory characters around. When those kinds of characters are left to their own devices, things don't go well in Cyrodiil.

Radiant AI had to be nerfed, as the developers discovered it led to a lot of deaths. NPCs were killing other NPCs for drugs, money, and items they wanted. Sometimes a good character would find a dubious one stealing from them and proceed to beat them to death. In short, with unrestrained Radiant AI implemented, the game turned into a bloodbath. 

If large numbers of characters die, lots of quests tend to break, which is a problem in an RPG. The AI system was also very resource-heavy, more than consoles like the Xbox 360 could handle. Add in the plethora of bugs caused by the AI, and Bethesda wasn't really left with much of a choice. The system that made it into the actual release, and every one of the company's AI systems since, was nowhere near what was promised with Radiant AI.

So what can AI developers learn?

Major names in the tech industry, like Tesla's Elon Musk, have been warning about the dangers posed by AI for years. Many of Musk's specific criticisms involve AI being used for military purposes. Bethesda's Radiant AI fiasco could prove he has a point. Instead of seeing it as a bit of a design misadventure by an ambitious developer, you could see it as a small-scale test study. When you give an AI a goal to achieve and strip away moral boundaries, you end up with wide-reaching unintended consequences. The morally dubious characters in "Oblivion" didn't really value the lives of other NPCs. Achieving their goal was more important, so you ended up with absolute carnage and a broken world.

Don't start panicking just yet. AI developers also seem aware that their software could be used for nefarious purposes. That's why the likes of Google Bard, ChatGPT, and Microsoft's Bing Chat are all pretty heavily regulated. Each company has an almost too pious "code of ethics" that can frustrate users but does reduce the possibility of the tech being linked to an actual crime. There will undoubtedly be AI-focused laws and regulations, both nationally and internationally, that are put in place as the technology develops. Hopefully, things are kept in check before someone is confronted in an alleyway by a digital skooma addict.
