10 Nightmare Scenarios That Could Happen With Current Generation AI

The rise of AI happened so quickly it may as well have occurred overnight. A year ago, AI image generators were mere curiosities, while AI chatbots struggled to produce coherent sentences. Today, entire articles can be written by AI. Already, Microsoft has incorporated a version of OpenAI's GPT-4 into Bing, while Google is soft-launching a competitor called Bard.

These new technologies have the incredible potential to revolutionize our lives in ways that science fiction has only dreamed of. The AI certainly seems to agree. In our experience, if you ask ChatGPT about the future of technology, it tells you AI will drive enormous leaps forward toward a techno-utopia.

As with any disruptive technology, there are hidden dangers, some of which we likely have yet to understand or even imagine. But much of the discussion around AI focuses on the future of the technology and what it might do if it progresses toward genuinely sentient intelligence. Those concerns are important to consider, but even if every tech company stopped trying to advance its AI tomorrow, the technology in its current form could already cause plenty of horrifying problems.

From deepfakes and stock market manipulation to international conflict and dystopian surveillance states, here are ten nightmare scenarios that current-generation AI could bring about.

Deepfakes could lead to political misinformation

Currently available AI models can realistically generate any image or voice and are increasingly capable of generating entire video clips. It's likely only a matter of time before a bad actor uses these capabilities to gin up a false accusation against a political opponent. Imagine the deluge of AI-generated political misinformation we will have to sift through in the upcoming presidential election.

In the past several weeks, after the generative image AI Midjourney released a significant update, images circulated widely of, among other things, the Pope wearing a puffy jacket fit for rap royalty and Donald Trump being tackled and arrested by police. Both were created as jokes and were never meant to fool anyone. Yet many users passed them around credulously on social media, stripped of their original context.

Concerningly, AI is already so error-prone that the misinformation it generates does not need to be intentional to confuse people. In Australia, OpenAI has already been threatened with a defamation lawsuit after ChatGPT falsely implicated a regional mayor in a bribery scandal that he had in fact helped expose as a whistleblower.

The most significant danger here is not merely that fabricated images will disseminate widely among the public, which would be harmful enough on its own, but rather that we will enter a strange new world in which no photograph, video, or audio recording can be believed. Politicians and other officials may decry any evidence of wrongdoing as deepfaked, while journalists who are not particularly tech-savvy struggle to parse reality from invention.

Deepfakes could lead to international conflict

Domestically, the impact of AI on politics could be bad enough. But imagine if a rogue nation such as Russia were to create a fake video of Ukrainian president Volodymyr Zelenskyy in which he instructs his people to surrender. If that sounds like something that couldn't happen, here's some bad news: it already did (via NPR). Thankfully, the deepfake was identified quickly and caused little harm, but that will not always be true.

A report from the Brookings Institution on deepfakes and international conflict warns, "Deepfakes can be leveraged for a wide range of purposes, including falsifying orders from military leaders, sowing confusion among the public and armed forces, and lending legitimacy to wars and uprisings." Though the report notes that most of these efforts will likely fail, even a single success could be devastating to the stability of the international order. We may not be far off from the first AI-inspired coup or war.

Law enforcement could use AI to create a dystopian surveillance state

In the movie "Minority Report," a police department arrests people before a crime has been committed by relying on beings who can see the future. It seems self-evident that such a scenario would be ethically horrific and lead to many wrongful arrests. And yet some law enforcement units are salivating over the idea.

Today, if you are charged with a crime in many jurisdictions across the United States, algorithms will help decide your fate. As the MIT Technology Review notes, whether there are police around to make an arrest in the first place may be determined by PredPol, a program that attempts to predict where crime is most likely to occur based on previous arrest data. Once a suspect has been charged, many courts consult a risk-assessment tool called COMPAS to inform pretrial release and sentencing decisions. Police are also adopting facial recognition at a growing rate, The Marshall Project found.

These existing tools can easily lead to over-policing of poor or nonwhite neighborhoods, higher pretrial detention rates, and harsher sentencing, further increasing racial and gender disparities in our justice system (via NPR). Absent legislation to restrict its use, AI threatens to play an ever-larger role in law enforcement, and such legislation may be crucial to avoiding increasingly authoritarian abuses of power.
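
The core problem is a feedback loop: patrols follow past arrests, and arrests follow patrols. The toy simulation below makes that concrete; the numbers are invented for illustration and say nothing about how PredPol itself is implemented. Two districts have an identical true crime rate, yet a small historical imbalance in recorded arrests steadily compounds.

```python
# Toy model of the predictive-policing feedback loop described above.
# All figures are illustrative assumptions, not real crime statistics.

def simulate(rounds: int = 8, total_patrols: int = 100) -> None:
    true_crime_rate = 0.05  # identical in both districts
    # A small historical imbalance in recorded arrests.
    arrests = {"District A": 55.0, "District B": 45.0}

    for rnd in range(1, rounds + 1):
        # Hotspot-style allocation: the district with more past arrests
        # is predicted to be "hotter" and receives most of the patrols.
        hot = max(arrests, key=arrests.get)
        patrols = {d: (0.8 if d == hot else 0.2) * total_patrols
                   for d in arrests}
        for district in arrests:
            # Recorded arrests scale with patrol presence, not with any
            # real difference in crime, so the imbalance feeds itself.
            arrests[district] += patrols[district] * true_crime_rate
        share = arrests[hot] / sum(arrests.values())
        print(f"Round {rnd}: {hot} arrest share = {share:.1%}")

simulate()
```

Run the loop and District A's share of recorded arrests climbs every round, even though the underlying crime rate never differed between the two districts.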

Bias in AI models could lead to further inequality

In 2014, Amazon built an AI tool to help with hiring. The program had a simple task: review job applications and select the best candidates. But by 2015, the e-commerce giant realized the program was biased toward male candidates. It downranked resumes containing the word "women's," as in "women's chess club captain," and penalized graduates of all-women's colleges. The problem was that Amazon, like much of the tech industry, already had a majority-male workforce. Trained on a decade of the company's own hiring data, the AI learned to prefer male candidates. Amazon ultimately scrapped the program (via Reuters).
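
A minimal sketch, using fabricated resumes and labels rather than anything from Amazon's actual system, shows how easily this happens: a simple text classifier trained on male-skewed hiring decisions picks up a negative weight for the token "women."

```python
# A minimal sketch of how a hiring model can absorb bias from its
# training data. Resumes and labels are fabricated for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer chess club captain",          # hired
    "developer robotics team lead",                  # hired
    "engineer hackathon winner",                     # hired
    "software engineer women's chess club captain",  # rejected
    "developer women's coding society member",       # rejected
    "engineer volunteer tutor",                      # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model has learned to associate "women" with rejection:
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"weight for 'women': {weights['women']:+.3f}")
```

The classifier was never told anyone's gender; it simply latched onto the word most predictive of past rejections. Scrubbing such proxies out of a model trained on biased history is far harder than it looks.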

The failed hiring AI exemplifies how existing flesh-and-blood biases seep into artificial intelligence through its training data. Discrimination is one of the largest and most well-recognized dangers of current AI and is notoriously hard to avoid. Companies like OpenAI have tried to remedy the issue by equipping ChatGPT with self-censoring guidelines. However, it is still possible to make the chatbot spew racial invective without much prompting (via The Intercept).

As this tech makes its way into numerous other products, there's no telling how its ingrained biases will impact its operation. For instance, police forces, many already plagued by systemic racism, are eager to use AI for law enforcement purposes. Amazon's hiring fiasco may be a warning of far worse to come.

Hallucinations in search answers could cause people to harm themselves

One of the most significant issues with current language models is their propensity towards what has been termed "hallucinations," when the AI spits out false information. Sometimes, hallucinations are merely peculiar, like when ChatGPT insists a particular phone model has features it does not have. But some are less benign, and if a user takes the AI's advice at face value, it could lead to actual harm.

Imagine Bing answering in error when asked how long chicken can be left on the counter, stating that it will remain fresh for up to a week. The result could be a user exposing themselves to salmonella or another foodborne illness. Or, even more horrific, imagine ChatGPT glitching when someone asks how to cope with suicidal ideation, nudging them toward taking their own life. A simple error could lead to the most tragic of outcomes.

While it's easy to assume no one would blindly trust an AI, it is unclear whether the general public understands how error-prone current AI models are. Specific demographics, such as seniors or individuals with cognitive disabilities, may be particularly susceptible to accepting their outputs at face value. In all likelihood, it is only a matter of time before something regrettable happens based on a misunderstanding of an AI's trustworthiness.

If you or anyone you know is having suicidal thoughts, please call the National Suicide Prevention Lifeline by dialing 988 or 1-800-273-TALK (8255).

AI trading bots could crash the stock market

As AI systems have become more capable, the stock market has seen a massive rise in bot trading. These bots use complex algorithms to bet on stocks, buying and selling on their clients' behalf based on market fluctuations.

But these bots are relatively simple, relying on if-then logic to make their trades (via Investopedia), which makes their behavior easy to predict. Current-generation AI models are much more complex and, therefore, unpredictable. If deployed at scale by traders, they could easily cause massive shocks to the market. For example, imagine that many traders use an AI platform that decides, based on a hallucination, that Kellogg's stock is about to plummet and sells that asset for all its users while buying stock in General Mills, immediately throwing the breakfast cereal market into chaos.
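
For contrast, here is a bare-bones sketch of the kind of if-then rule Investopedia describes; the thresholds and strategy are invented for illustration. Every branch can be inspected and stress-tested before any money moves, which is exactly what cannot be said of a language model whose decision may hinge on a hallucinated "fact."

```python
# A bare-bones if-then trading rule of the kind traditional bots use.
# Thresholds and prices are hypothetical, for illustration only.

def decide(price: float, moving_average: float) -> str:
    # Simple mean-reversion rule: buy dips, sell spikes.
    if price < moving_average * 0.95:
        return "BUY"
    if price > moving_average * 1.05:
        return "SELL"
    return "HOLD"

for price in (94.0, 100.0, 107.0):
    print(price, decide(price, moving_average=100.0))
```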

While this is among the less likely scenarios, since any hint of such a crisis would likely prompt a swift response from regulatory bodies, it will almost certainly be attempted by some actors at some scale. We can only hope those regulatory agencies are preparing for it now.

AI could make internet search useless

Microsoft made headlines by integrating a version of ChatGPT into its Bing search engine, and a visibly shaken Google followed close on its heels with the announcement of Bard. These companies believe AI is the future of search, but it could just as easily be its end.

Websites that rely on page visits for ad revenue may now find their content plundered by bots that deliver the information directly to users, cutting off the sites' primary revenue stream. That could lead to a massive collapse of the digital advertising market. Smaller sites would be the first to fall, but giants like Facebook could go with them. When asked about this in an interview with The Verge, Microsoft CEO Satya Nadella said little about preventing such a scenario, though he did acknowledge that Microsoft had considered it.

Even if the core functionality of search engines remains intact during this shift, generative AI may still render them useless. Already, the internet is filled with spammy SEO copy, which is why you must scroll through endless, mostly made-up stories about family recipes before you can read the recipe you Googled. And no human can hit SEO targets with the accuracy of an AI, meaning this junk text could proliferate rapidly across the internet, ruining your search results.

It's unclear how search companies would respond to such a crisis, especially since they seem committed to advancing the very technology that could bring it about.

AI could make the internet unusable

Generative AI language models are currently trained on the entire internet, which is mainly text-based, and then spit out new text that is ostensibly their own. With enough newly generated AI text, it might not be long before most of the internet consists of the output of stochastic parrots.

The problem is already rearing its head. Neil Clarke, editor of the renowned science-fiction magazine Clarkesworld, has seen a massive spike in AI-generated story submissions, forcing him to sift through countless machine-written stories and slowing his response times to a crawl.

It's already hard enough to use the internet without encountering bots and SEO spam, along with plenty of mindless text generated by actual humans. Add a nearly infinite amount of AI text to that mix, plus the inevitable spam that comes with it, and it could become so difficult to find anything of value online that the entire internet is rendered fundamentally unusable. Matthew Kirschenbaum, a professor of digital studies at the University of Maryland, warns of this exact scenario in The Atlantic, writing, "All of these phenomena, to say nothing of the garden-variety Viagra spam that used to be such a nuisance, are functions of text—more text than we can imagine or contemplate, only the merest slivers of it ever glimpsed by human eyeballs, but that clogs up servers, telecom cables, and data centers nonetheless."

AI could accelerate climate change

Humanity faces an existential threat in climate change, and time is running out to curb carbon emissions before the worst-case scenarios come to pass. AI, with its ability to make sense of complex and disparate data points, could be a crucial tool in innovating new solutions to avert the worst of the disaster. Even now, it can help make entire companies more energy efficient by identifying wasteful practices and automating energy use. However, it could also speed us even faster down the road to destruction.

AI may be able to help us conserve resources, but it uses massive amounts of energy to function, and as AI models grow ever larger, their energy use grows with them. Training some current AI models takes as little energy as powering a cell phone, while a single training session for others consumes as much electricity as an average United States household uses in a year (via The New Stack).
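
For a rough sense of scale, assuming an average U.S. household uses about 10,600 kWh per year (a common EIA estimate) and a smartphone needs only a few kWh of charging per year, the gap between the smallest and largest training runs described above spans more than three orders of magnitude:

```python
# Back-of-envelope scale comparison for the figures above. Both values
# are rough public estimates, not measurements of any particular model.
household_kwh_per_year = 10_600  # approx. average U.S. home (EIA)
phone_kwh_per_year = 5           # approx. annual smartphone charging

ratio = household_kwh_per_year / phone_kwh_per_year
print(f"Household-scale training uses ~{ratio:,.0f}x "
      "the energy of phone-scale training.")
```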

Before rushing into mass adoption of AI, as we've seen the tech industry trip over itself to do in recent months, it may be worth taking some time to figure out the most energy-efficient way to do so.

AI could lead to further control of our lives by big tech and governments

Now that we have seen the impacts of social media, we understand that the companies that allowed us to connect with the world also took advantage of our data to build empires. Google, Facebook, and others spent years studying our habits to monetize them, intentionally manipulating our emotions to keep us scrolling (via NBC). Now that AI is entering the mainstream, its capabilities only amplify the power of companies that wish to exert influence over our lives for their own gain.

From advertising, which can now appear inside a generated block of text on Bing, to smart homes, which can examine our living habits, AI could be the next frontier in the battle over privacy. Meanwhile, the U.S. government is growing more interested in muzzling big tech, possibly forcing companies to turn over private data. If the hotly debated RESTRICT Act makes its way into law, it may be in tech companies' financial interest to form deeper ties with the government, granting greater access to private data to curry favor.

AI is advancing too rapidly for us to know how far along the progress curve we are. The massive leaps forward seen with ChatGPT may be anomalies or the beginning of a new Moore's Law. In any case, these programs are more than capable of causing tangible harm on a massive scale. Even as we benefit from their remarkable capabilities, we should stay aware of those dangers.
