Dead Internet Theory Explained: Is The Online World Just A Mirage?

A curious concept has emerged in the vast expanse of the web — Dead Internet Theory. This idea questions the very foundation of the internet. Is the internet, with its myriad websites, social platforms, and digital interactions, nothing more than an elaborate illusion? It's easy to find yourself at the intersection of innovation and skepticism, exploring not just the web's advancements but also the enigmas that emerge from it. Dead Internet Theory is one such enigma, a captivating idea that has gained traction in online forums and discussions, sparking debates about the nature of our interconnected existence.

The theory posits that the internet, far from being a bustling realm of real-time interactions and dynamic content, is in fact a lifeless expanse populated by bots — a vast digital graveyard where your every click, scroll, and keystroke merely echoes into the void without any meaningful impact. It challenges the wider perception of the internet as a vibrant, living entity, suggesting instead that it is an elaborate front concealing a profound emptiness.

This theory extends beyond skepticism about the authenticity of online interactions; it touches on the fundamental nature of the internet as a medium for human connection, information exchange, and collective consciousness. Is digital existence a shared experience, or is everyone traversing a solitary path through a digital wasteland, engaging with nothing but the void?

The theory and its origins

At its core, Dead Internet Theory questions the nature of online interactions, suggesting that the vibrancy perceived on the web may be nothing more than a façade. The roots of the theory can be traced back to Agora Road's Macintosh Cafe forum and a post from a user called "IlluminatiPirate." In January 2021, the user made a manifesto-length forum post detailing the theory as they saw it. IlluminatiPirate claimed to have pieced the theory together after time spent on 4chan's paranormal board, /x/, and on other paranormal forums.

The theory comes down to the idea that, sometime in the five or so years before that post, the internet died and is now filled with AI-generated content. In the vein of other conspiracy theories, the Dead Internet Theory posited by IlluminatiPirate leads back to the U.S. government and influencers it supposedly pays to keep up the illusion. The evidence consists largely of anecdotal accounts of changes on the web. The theory found fertile ground in online communities where individuals, grappling with the paradox of hyperconnectivity, began to ponder the substance of their online engagements.

Philosophically, Dead Internet Theory draws inspiration from the age-old question of the tree falling in a deserted forest: if it falls with no one around to hear it, does it make a sound? In the context of the digital landscape, the theory poses a parallel inquiry: if people engage with content, share thoughts, and interact online without genuine reciprocation or impact, does their digital presence hold any true significance?

Fact or fiction?

As with many conspiracy theories, your own powers of observation are enough to find the whole thing demonstrably false. That said, an early 2021 forum post about generative AI taking the internet by storm doesn't sound as far-fetched now as it did then. The internet is now a place filled with ChatGPT-generated blocks of text, AI-narrated Reddit threads plastering your social media feeds, and AI-generated images purporting to be real.

But the issue at hand with Dead Internet Theory wouldn't necessarily be the content you know is AI-generated. The issue would be the content you don't know is AI-generated. Following the original forum post, the idea generated little attention until an article in The Atlantic about Dead Internet Theory coincided with a huge leap in web traffic around it.

Of course, the increased interest in the theory would seem to disprove it on its own. The alternative possibility is that the supposed AI horde that makes up the internet is coming dangerously close to self-realization. Jokes aside, you know people who use the internet. Your friends have other friends who frequent all sorts of sites you may not even use or have heard of. Surely, there's no way a bunch of fake traffic passes right by you without you ever noticing.

A grain of truth

Therein lies the trouble with the theory. Like every great conspiracy theory, there is a kernel of truth buried in the center of the more outlandish ideas. That truth is that the majority of web traffic comes from bots, which comprise 52% of all traffic online. However, the mechanics of that stat are more complicated than an army of AIs attempting to deceive you into believing that the World Wide Web is full of real people. Rather, this bot activity can be roughly divided into "good bots" and "bad bots," the latter of which unfortunately makes up the majority of that 52%.

Good bots include feed fetchers, web crawlers, and monitoring bots. Feed fetchers are bots that help to populate someone's feed on a given service. A web crawler is a bot that scans through all kinds of websites so search engines can produce more relevant results. Monitoring bots, as the name implies, monitor traffic to a site to identify potential threats. These bots serve vital functions on the web, and while they make up a large share of its traffic, they aren't acting to deceive humans.
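
To make the "good bot" distinction concrete, here is a minimal sketch, in Python, of what a well-behaved crawler does: it consults robots.txt before fetching, follows links, and rate-limits itself. The seed URL, crawler name, and limits are illustrative assumptions, not any particular search engine's implementation.

```python
# Minimal sketch of a "good bot" web crawler. It checks robots.txt
# before fetching and rate-limits itself. The seed URL and user-agent
# name are placeholders; a production crawler adds error handling,
# per-domain politeness policies, and persistent storage.
import time
import urllib.request
import urllib.robotparser
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects href values from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10, delay=1.0):
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(urljoin(seed, "/robots.txt"))
    robots.read()

    queue, seen = [seed], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen or not robots.can_fetch("ExampleCrawler", url):
            continue  # skip pages the site asks bots not to visit
        seen.add(url)
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        extractor = LinkExtractor()
        extractor.feed(html)
        for href in extractor.links:
            absolute = urljoin(url, href)
            if urlparse(absolute).netloc == urlparse(seed).netloc:
                queue.append(absolute)  # stay on the seed's host
        time.sleep(delay)  # pause so the bot doesn't hammer the server
    return seen

if __name__ == "__main__":
    print(crawl("https://example.com"))
```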

However, "bad bots" aren't actively looking to deceive human users, either. Bad bots include bots used in DDoS attacks, unauthorized data scrapers, and spambots. So if none of these bots are trying to appear human in a front-facing capacity, what's the issue?

AI and large language models

Unfortunately, AI advancement has led to more and more content online that is 100% AI-produced. Some of these AI accounts are more obvious than others, such as the celebrity doppelgangers that are Meta's AI chatbots, but others aren't labeled and actively try to pass as human.

Large language models, the technology behind ChatGPT, have been leveraged to create armies of bots that can complete any number of tasks. The primary use observed has been spreading information, often misinformation, and these AI-assisted botnets took a particular interest in misinformation about cryptocurrency. The advent of sophisticated language models has revolutionized content creation, with AI-generated articles, social media posts, and comments integrating seamlessly into digital spaces. However, the very nature of these models raises questions about the authenticity and diversity of content on the web.

The self-perpetuating nature of large language models does lend credence to one curiosity from the original Dead Internet Theory post. In it, IlluminatiPirate questioned the amount of news that seems to repeat endlessly, such as stories about "unprecedented" or "unusual" things to do with the moon or an asteroid. However, there isn't anything inherently malicious about a potential network of AI-powered spam bots proliferating info about the moon. It might be the least concerning use of AI, as bots across various social media platforms have been used for things far less innocent than lunar facts.

Reddit

AI-generated content and Reddit now go hand in hand. While not on Reddit itself, content from Reddit has become a hit on TikTok and similar platforms like Instagram Reels and YouTube Shorts. The videos in question generally pull from AskReddit threads or posts from subreddits such as Nuclear Revenge, Off My Chest, and Relationship Advice. Each video consists of an AI voice reading the text of a thread, usually overlaid on Minecraft or Subway Surfers gameplay. However, the site itself isn't free from the negative outcomes of AI. While AI-generated content can make its way onto Reddit in the form of generated articles and comments on posts, that isn't the biggest issue the platform has faced.

Large language models need datasets to learn from, and one of the largest datasets available for how people write to one another on the internet is Reddit, a site built almost entirely around short-form and long-form text posts and responses. The issue arose when Reddit decided it could make some money from this phenomenon.

Before Reddit began monetizing its API, it had largely been free to use. This allowed third-party apps like Apollo and RedditIsFun to exist for mobile Reddit users in the days before an official app, and many users continued to use them in place of the official app after its release. While AI-generated content hasn't replaced human content, many of the platforms built to access Reddit are now dead thanks to the API pricing that followed AI data scraping.

The website formerly known as Twitter

One of the websites impacted most by generative AI is X, the website formerly known as Twitter. The previously mentioned botnet spreading cryptocurrency misinformation was found on X, where researchers identified over 1,000 AI-enabled bot accounts dedicated to the task.

This web of bots interacted with each other through reposts and replies to bolster engagement numbers, while also stealing selfies from real people to make the accounts appear more authentic. As sinister as the whole thing sounds, the botnet wasn't exactly airtight. The study that uncovered it tied the accounts together through accidental posts made by almost all of them. The posts in question usually contained variations of the phrase "as an AI language model," which is exactly what ChatGPT is likely to spit out when asked to do something that violates OpenAI's policies.
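
A slip-up like that makes for a detection heuristic simple enough to sketch. Below is a hypothetical Python version that flags accounts whose posts contain chatbot boilerplate; the phrase list and sample posts are assumptions for illustration, not the actual methodology of the study.

```python
# Hypothetical sketch of the self-disclosure heuristic described above:
# scan posts for boilerplate refusal phrases that language models emit
# by accident. The phrase list and sample data are illustrative only.
import re

TELLTALE_PHRASES = [
    r"as an ai language model",
    r"i cannot fulfill (that|this) request",
    r"i'm sorry, but as an ai",
]
PATTERN = re.compile("|".join(TELLTALE_PHRASES), re.IGNORECASE)

def flag_suspect_accounts(posts):
    """posts: iterable of (account_id, text) pairs.
    Returns the set of accounts with at least one telltale post."""
    suspects = set()
    for account_id, text in posts:
        if PATTERN.search(text):
            suspects.add(account_id)
    return suspects

# Tiny usage example with made-up posts
sample = [
    ("@crypto_gains_42",
     "As an AI language model, I cannot express opinions on coins."),
    ("@human_user", "i hate texting, call me instead"),
]
print(flag_suspect_accounts(sample))  # {'@crypto_gains_42'}
```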

However, Dead Internet Theory's view of X tends to focus on the nature of viral posts on the site. The Atlantic piece that helped popularize the theory notes the strikingly frequent use of the phrase "i hate texting" across X. The proliferation of near-identical posts like these led many users to feel like they were interacting with bots.

Elon Musk's X crusade

Spam accounts have always been an issue on X, whether automated or not. Spam and bots were the focal issue raised during Elon Musk's bid to buy the company, and although Musk came to own the site, his crusade against them hasn't been a smashing success. One of the more infamous examples of bot accounts on X is t-shirt bots. These bots watch for certain keyword phrases in replies to an image, such as "I wish I had that on a shirt," then spin up a quick store page selling that exact image on a t-shirt. This has led many users to bait the bots into putting inappropriate or copyrighted imagery onto t-shirts.
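
The trigger mechanism behind such a bot is simple enough to sketch, and the sketch shows why these bots are so easy to bait. Everything below (the trigger phrases, the make_listing stub, the store URL) is a hypothetical illustration, not any real bot's code.

```python
# Hypothetical sketch of a t-shirt bot's trigger mechanism: watch
# replies for a trigger phrase, then "list" the image from the post
# being replied to. Note that nothing inspects what the image actually
# depicts, which is exactly why users can bait these bots.
TRIGGER_PHRASES = ("i wish i had that on a shirt", "i want this on a shirt")

def make_listing(image_url):
    """Stub for creating a print-on-demand store page (hypothetical)."""
    return f"https://example-shirt-store.test/listing?img={image_url}"

def handle_reply(reply_text, parent_image_url):
    """Returns a store link if the reply matches a trigger phrase."""
    if parent_image_url is None:
        return None
    text = reply_text.lower()
    if any(phrase in text for phrase in TRIGGER_PHRASES):
        return make_listing(parent_image_url)
    return None

print(handle_reply("I wish I had that on a shirt!",
                   "https://example.com/cat.png"))
```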

Although Musk's bid to buy the site was born from the idea of cracking down on bots and spam, there is no evidence of a reduction in bot activity since his takeover. A recent analysis of a million posts found that bot activity on the site is worse than ever. The larger issue for Musk's X when it comes to bots is paid verification, which lets bot accounts buy verified blue checks. That seal of verification makes spam accounts harder to distinguish from genuine ones, and it aligns neatly with the idea that the vast majority of online activity comes from bots.

YouTube and fake views

One final piece of Dead Internet Theory has to do with analytics. YouTube, with its algorithm-driven recommendation system and the promise of democratized content creation, has become a central arena for digital expression. However, the pursuit of visibility and success on this platform has given rise to a shadowy market where creators and entities seek to manipulate the perception of popularity through fake views. This practice not only distorts the metrics that shape a wider understanding of online content but also raises questions about the authenticity of the connections forged on social media.

Fake views, often facilitated by automated bots or click farms, create a façade of popularity that can mislead both audiences and content creators. This illusion contributes to an echo chamber, as content with inflated views may receive disproportionate visibility, overshadowing diverse perspectives and authentic voices. The motivations behind the pursuit of fake views are varied. For content creators, the allure of increased visibility, potential monetization opportunities, and the perceived validation of success drive the desire to artificially boost view counts.

On the flip side, the sellers' motivation is far simpler: they just want money. One man reportedly made over $200,000 in a year by selling fake views on YouTube, and another site selling YouTube views reportedly made over $1 million in three years.

Of the bots, by the bots, for the bots

Dead Internet Theory posits that most of the internet is made up of fake bot accounts. There are indeed plenty of bots out there: they generate a majority of web traffic, they generate content on social media sites, and they even consume that content. However, an internet of the bots, by the bots, and for the bots isn't exactly a dystopian conspiracy. A more pragmatic assessment reveals that the internet as it currently stands, far from being a sterile wasteland, is a dynamic space where real connections, meaningful discourse, and genuine engagement do occur.

The integration of artificial intelligence and large language models into the digital landscape adds complexity to the narrative. While it's essential to scrutinize the ethical implications of these technological advancements, it's equally important to recognize the positive contributions they bring, from enhancing accessibility to fostering innovation.

The internet, with its vastness, diversity, and potential for genuine human connection, remains a testament to the resilience of digital spaces. While challenges exist, they are not insurmountable, and the evolution of technology continues to present opportunities for creating a more authentic and meaningful digital experience. While the internet could eventually be 99% AI-generated, that doesn't necessarily mean fewer people. It just means more bots.
