Let’s imagine that one night, you decide to stay up late to scroll through Instagram Reels or argue with someone on X (formerly known as Twitter). It’s what you always love to do online, especially if you’re someone who looks to interact with people. The internet is a vast place where everyone can come together, or polarize themselves further inside the echo chambers they form. But as you scroll, you feel something in the back of your mind. It’s a little tingle, and you can sense that it’s a question. A thought that lingers:
What is the probability that the people you interact with online aren’t as real as they seem? What if the internet is just an illusion that people believe is real, when everything is in fact fake?
Over the years, many users have pondered this question and posted their answers across social media platforms. It’s not an original question, but it is one that shapes how we use the internet.
The Dead Internet Theory
The Dead Internet Theory is an online conspiracy theory that paints most posts, users, and comments on the internet as AI- or bot-generated content, with no actual humans present. The theory started on 4chan (an anonymous online forum) around 2016–2017, but exploded in popularity in 2021 when a thread appeared on the Agora Road’s Macintosh Cafe forum by a user named “IlluminatiPirate”, who described the internet as “empty”, “devoid of people”, and devoid of “content”. IlluminatiPirate, a man from California, had harbored deep suspicions about the internet ever since memes like Raptor Jesus and Pepe the Frog became popular.
The thread attracted the attention of Kaitlyn Tiffany of The Atlantic, who put her own twist on the definition in an article titled “Maybe You Missed It, but the Internet ‘Died’ Five Years Ago”:
“Dead-internet theory suggests that the internet has been almost entirely taken over by artificial intelligence. Like lots of other online conspiracy theories, the audience for this one is growing because of discussion led by a mix of true believers, sarcastic trolls and idly curious lovers of chitchat … But unlike lots of other online conspiracy theories, this one has a morsel of truth to it. Person or bot: Does it really matter?”
Before Elon Musk’s acquisition of X, the algorithms of many social media sites pushed people to publish the same content over and over, which led users to act like ‘bots’ recycling the same posts. And while these people were not bots, they sure acted like them. This blurred the line between whether a comment or post was made by a human or by a bot all along. The rising popularity of the “Dead Internet Theory” suggests that many people can no longer tell humans and bots apart online, especially after the rise of AI.
Musk’s Acquisition of X (formerly Twitter)
On April 14, 2022, Elon Musk began his acquisition of X (then Twitter) for 44 billion dollars, and the deal closed on October 27 of the same year. As the owner of X, Musk made significant changes to the platform, including a subscription called “Twitter Blue”, which let users pay $8 a month for a blue checkmark next to their usernames. Under this subscription, users could earn a portion of the ad revenue from posting and commenting on threads.
You can imagine how this turned out. Many people deployed bots designed to post constantly so that their operators could collect a large share of the ad revenue. An op-ed piece for The Guardian described it as a “low-stakes all-bot battle royale”. Research by the cybersecurity company Imperva found that “bots account for around half of all internet traffic”, a portion of which comes from bots used to generate fake ad revenue. Another study, by researchers at AWS, found that 57.1% of the sentences on the web are “machine-generated translations”. But perhaps the most disconcerting thing about these bots is that they now act more human than ever before.
The large presence of bots on X has drawn much attention from casual users, who feel the site has become infested with non-human accounts since Musk’s takeover; many have taken steps to leave X and other social media sites entirely. One reason is the engagement farming done by countless bots, which has affected the media, journalism, and political polarization.
The Expansion of ChatGPT and AI
In late November 2022, OpenAI released ChatGPT, an AI chatbot that follows instructions from prompts that humans type. Since the release of ChatGPT, as well as Google’s own chatbot Gemini, generative bots have pushed the share of bot traffic online from 33% in 2022 to 40% in 2023. According to the 2024 Imperva Bad Bot Report, bots account for 71% of online traffic in Ireland and 68% in Germany.
In recent news, Meta announced plans to add AI-generated users to Facebook and Instagram. The plan received widespread backlash from people who argued that the “Dead Internet Theory” might well be true. Some even said that it’s no longer a “Dead Internet Theory”, but rather a “Dead Internet Reality”.
While this may be the case, generative AI is still largely detectable (for now). Some AI-generated information is completely false, or riddled with poor grammar and spelling, and much of it can alert humans that the user may be a bot. But there is a real concern at play as well. AI has the potential to evolve to the point where it can act independently of human instructions and interact with other AI bots to favor AI-made content on the internet. This could allow the algorithms of social media sites to cater to AI-generated content rather than human interests.
Since the rise of AI, bot presence on the internet has grown. It’s important that we figure out what marks a user as a human and what marks a user as an AI bot, before AI agents learn to act even more human.
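To make the idea concrete, here is a minimal, purely illustrative sketch in Python of the kind of naive text-level heuristic one might start from: flagging posts whose vocabulary is suspiciously repetitive. The example posts and the threshold are invented for illustration; real AI-text detection relies on trained classifiers, not a single ratio like this.

```python
# Toy heuristic: flag text whose vocabulary is suspiciously repetitive.
# Illustrative sketch only -- real bot/AI-text detection uses trained
# classifiers and many signals, not a single ratio.

def type_token_ratio(text: str) -> float:
    """Share of unique words among all words (lower = more repetitive)."""
    words = text.lower().split()
    if not words:
        return 1.0
    return len(set(words)) / len(words)

def looks_bot_like(text: str, threshold: float = 0.4) -> bool:
    """Naive flag: very repetitive text *might* be machine-generated spam."""
    return type_token_ratio(text) < threshold

# Hypothetical example posts (invented for illustration):
human_post = "Just saw the strangest sunset over the lake, colors I can't even describe."
spam_post = "great deal great deal great deal click now click now click now"

print(looks_bot_like(human_post))  # False -- varied vocabulary
print(looks_bot_like(spam_post))   # True  -- heavy repetition
```

A screen this crude would misfire constantly on real data, which is exactly the point: telling humans from bots reliably is hard, and it gets harder as generative models improve.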
The Effect on Media and Journalism
In 2024, Clemson University released a research report identifying 686 X accounts that had posted more than 130,000 times since January. These accounts contribute to a large army of bots on X and other social media sites. Accounts such as “MediaOpinion19” posted an average of 662 times a day. This was especially true in November, as the presidential election drew near. Musk’s increasingly conservative views have transformed the platform and allowed more pro-Trump rhetoric to spread in an effort to win votes. Some of the content spreading conservative values is misinformation generated with AI tools, especially ChatGPT. Other platforms such as Reddit have a huge bot problem as well, with many accounts pushing liberal opinions on current culture wars. Many Reddit users have reported seeing identical posts across different subreddits, most of them flooded with liberal comments.
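For a sense of scale, 662 posts a day works out to roughly one post every two minutes, around the clock, which is why researchers often treat raw posting rate as a first-pass bot signal. Below is a small, hypothetical Python sketch of that kind of frequency screen. The account data and the threshold are invented for illustration; studies like Clemson’s combine many additional, stronger signals.

```python
# Toy frequency screen: flag accounts whose posting rate is implausible
# for a human. Data and threshold are invented for illustration.

from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    posts: int
    days_active: int

    @property
    def posts_per_day(self) -> float:
        return self.posts / max(self.days_active, 1)

# Even a human posting nonstop 16 hours a day, once every 5 minutes,
# tops out near 200 posts/day -- so 300+ is a strong red flag.
HUMAN_CEILING = 300

accounts = [
    Account("casual_user", posts=450, days_active=300),         # 1.5/day
    Account("MediaOpinion19", posts=198_600, days_active=300),  # 662/day
]

for acct in accounts:
    rate = acct.posts_per_day
    verdict = "bot-like" if rate > HUMAN_CEILING else "plausibly human"
    print(f"{acct.handle}: {rate:.1f} posts/day -> {verdict}")
```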