Dead Internet Theory: Long Live Algorithms?

Beautiful photos on social media often stop us in our tracks. The truth, however, is that much of the content we find and admire on the Internet is no longer entirely a human creation. Images, text, audio, and even video are now routinely AI-generated. This raises a disturbing question: when we engage with AI-produced works, do we lose the sense of creativity, emotion, and intention that once characterized the Internet's content, as it does any art form?

The proliferation of AI material is forcing us to confront the possibility that much of what we see online has no connection to human life. This has given rise to the "dead internet theory," a proposition that the Internet is no longer the organic place it once was but rather a hollow landscape of algorithms.

Anyone who contacts a company's support system today is increasingly unlikely to be talking to a person. Instead, they are greeted by a machine: an "intelligent" bot designed to field inquiries, resolve specific issues, and even handle complicated tasks without human intervention.

Customer service consultations and chat services are now standard features, relying on voice as well as text. With self-learning AI this advanced, almost no telltale differences remain between what a human conversationalist would say and the responses the machine sends back.

Yet the truth remains that the "person" you are talking to does not exist; what you see is programmed behavior, as the sketch below illustrates. The urgent and curious questions concern the emotional and social effects this has on everyday online interaction. Once we grow accustomed to talking to machines rather than people, what happens to the emotional texture and empathy that are part of human conversation?
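To make "programmed behavior" concrete, here is a minimal sketch of how a scripted support agent can work. Everything in it is hypothetical (the keywords, the canned replies); production systems layer large language models on top, but the principle is the same: the warmth is authored in advance, not felt.

```python
# A minimal, hypothetical support "agent": keyword rules mapped to canned replies.
# Real systems add a language model on top, but the behavior is still programmed.

RULES = [
    ({"refund", "money back"}, "I can help with that. Your refund request has been opened."),
    ({"password", "login"},    "Let's reset your password. A link is on its way to your email."),
    ({"shipping", "delivery"}, "Your order is in transit and should arrive within 3-5 days."),
]

FALLBACK = "I'm sorry you're having trouble. Could you describe the issue in a few words?"

def reply(message: str) -> str:
    """Return the first canned reply whose keywords appear in the message."""
    text = message.lower()
    for keywords, response in RULES:
        if any(k in text for k in keywords):
            return response
    return FALLBACK

if __name__ == "__main__":
    print(reply("I never got my delivery!"))      # -> shipping reply
    print(reply("My cat walked on my keyboard"))  # -> fallback
```

Even this toy version will answer a shipping complaint instantly and tirelessly, which is exactly the appeal, and exactly the problem, described here.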

According to Forbes, the dead internet theory holds that much of the traffic, posts, and activity on the net is now attributable to bots and AI-generated content rather than human users. The implication is that the Internet no longer functions as a space people actively shape, but as an automated system that designs and mediates their online interactions.

With AI and bots producing content, carrying on discussions, and building websites, the theory claims that human influence over the Internet's direction is eroding. Once a landscape of real human interaction, it is now dominated by machines and algorithms.

Toward the end of the 2010s, the theory circulated on imageboards like 4chan, where users claimed that bots and AI-generated content were becoming prevalent. In 2021, it was kicked to a new level by a long post titled "Dead Internet Theory: Most of the Internet is Fake."

It appeared on Agora Road's Macintosh Cafe forum and ignited heated debate among members. The post argued forcefully that the Internet, where users once thrived on peer-to-peer engagement, had become a landscape populated primarily by artificial content and automated systems.

As forums spread the idea, it drew counterarguments from people who found the theory exaggerated and illogical. Skeptics argued that however quickly AI-generated content evolves, it still cannot imitate genuine human engagement.

Real people, they insisted, still make the Internet; others suggested it need not be considered "dead" but transformed, with humans still at the center of its making and use. The dead internet theory has survived these critiques. It was still being discussed in the media in 2023, amid an ever-fiercer debate about what the future holds for online spaces.

Thus, according to the dead internet theory, the organic, human-created content that filled the early pages of the Internet in the 1990s and 2000s is steadily being replaced by artificial, machine-generated material. In its early days, the Internet flourished as a space for human expression, collaboration, and creativity, with websites and forums reflecting the voices, ideas, and experiences of real people; today, that dynamic appears to be in decline. AI-generated images, videos, articles, and social media personas occupy much of the landscape.

We consume this artificial content without realizing its origin or questioning its authenticity. The theory therefore describes the dominance of nonhuman content as the death of the Internet's vitality: the space "dies" because it is largely devoid of human intention, emotion, or creativity.

In that sense, the Internet becomes a lifeless space where algorithms make the decisions and interact mostly with one another. Critics of this perspective counter that the Internet is not dying but evolving into a new, more artificial form.

Fast Company has outlined several alarming possibilities raised by the dead internet theory. One is that the Internet might cease to be a free and open space for honest expression and discourse, becoming jammed instead with agenda-driven content. On this view, the shift from human-generated to artificial content is not a neutral technological development but an intentional one: governments, companies, and other actors could use AI-generated material to manufacture seemingly distinct voices, reshape perceptions, and control what information people consume.

Targeted, persuasive, and indistinguishable artificial content could flood the digital space, turning it into a ready-made tool for propaganda, advertising, or disinformation campaigns. Such a transformation would upend the Internet's original purpose as a platform for decentralized, democratic communication. These predictions may prove speculative, but they outline the ethical and social risks of increasingly pervasive AI-generated content.

The irony would be that, instead of being a space where individuals can be heard, the Internet would become a heavily curated one, where algorithms decide what matters according to the interests of power players rather than individual needs and voices. That makes the exploitation and control of public perception more than a possibility; it edges toward a future reality.

As our immersion in artificial content deepens, we may one day find ourselves lost in a dense, suffocating jungle of contrived narratives designed by countless competing interests. The flood of machine-generated material may blur the lines that distinguish fact from fiction.

This leaves us increasingly disoriented in our search for truth. In that daze, even authentic accounts of history, reliable facts about current events, and trustworthy sources come to be viewed as potentially fabricated. The result is the erosion of public trust in information, creating conditions in which manipulation is not merely possible but inevitable.

What makes this threat more insidious is that AI produces information that appears credible and authoritative. Advanced algorithms that imitate human voice, style, and perspective make it hard for the ordinary reader to distinguish real from false.

Yet while this picture shows the problems an AI-dominated internet brings, it also signals an urgent need for solutions. Without strategies to boost transparency, increase media literacy, and build tools for verifying authenticity, such as the sketch below, the odds of further manipulation and disorientation only grow.
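As one illustration of what an authenticity-verification tool might look like at its simplest, here is a sketch of provenance checking via cryptographic hashes. The manifest format is hypothetical; real efforts such as C2PA rely on signed metadata, but the underlying idea, checking content against a record the publisher vouches for, is the same.

```python
# Minimal sketch: verify that downloaded content matches a publisher's
# manifest of SHA-256 hashes. The manifest format here is hypothetical.
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest of the content: the fingerprint we compare against."""
    return hashlib.sha256(data).hexdigest()

def verify(content: bytes, manifest: dict, name: str) -> bool:
    """True if the content's hash matches the manifest entry for `name`."""
    expected = manifest.get(name)
    return expected is not None and expected == sha256_hex(content)

if __name__ == "__main__":
    article = b"The original, human-written article text."
    manifest = {"article-042": sha256_hex(article)}  # published by the author

    print(verify(article, manifest, "article-042"))                   # True: untampered
    print(verify(b"Subtly altered text.", manifest, "article-042"))   # False: modified
```

Hashing alone cannot say whether content is human-made, only whether it is unaltered since publication; that is why such tools complement, rather than replace, media literacy.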

Some people have begun calling this omnipresent, machine-generated content “AI slime,” an apt term for its insidious and ever-present nature in our digital spaces. The term underscores how AI-produced content can be superficially slick and highly sophisticated.

Beneath the polish, however, it lacks depth, originality, and the human touch that once defined online creativity. And the problem is not just the content filling our social media feeds; many users, especially older adults, are unaware of how many artificial accounts surround them. For them, it is difficult to tell the difference between a genuine human creation and a product of AI.

Nor is this only a problem for older generations. Even tech-savvy young users are not left unscathed by AI slime. Indeed, such material blends so smoothly into the online experience that people of any age would struggle to distinguish an artificial product from the real thing.

The term "AI slime" also captures how this content leaks into every crevice of digital life. It underscores the need for greater public awareness, media literacy, and tools that help users understand and critique the sources they consume online.

The diminishing role of humans in digital space was visible some time ago, when virtual influencers first appeared. For the first time, brands could use something other than a human influencer: a "tireless" persona that can be adjusted at will, because it is purely a machine. Human influencers, by contrast, can be erratic, stubborn, or finicky; the appeal of virtual influencers lies precisely in offering companies an alternative.

Many prominent brands have subscribed to this trend, doing away with human influencers and opting instead for virtual ones capable of promoting products, engaging customers, and even commenting on or live-streaming those activities. Figures like Lil Miquela and Imma, among the most prominent virtual influencers, make distinguishing fiction from reality difficult.

The Intercept has reported that TikTok, one of the biggest platforms using AI technology, is working to produce and deploy virtual influencers that compete with human creators for brand deals. It marks a clear commercialization of AI-powered content.

This development makes the argument that AI is reshaping industries far more believable, but it casts doubt over the future of human influencers. As virtual influencers gain traction, many human creators may struggle to compete in a marketplace increasingly overrun by tireless, perfectly curated AI personas.

Beyond the economics, there are deeper social issues. Virtual influencers, tailored and optimized to suit brand purposes, create a homogenous, less diverse virtual world. Their personas are lent the appearance of emotion, an ability usually reserved for human beings, precisely so that audiences will connect with their content. People unaware that they are engaging with AI are thus manipulated by "seemingly authentic" content that has been strategically engineered for commercial purposes.

The dead internet theory continues to incite heated debate among its supporters and critics. While some claim it sheds light on a serious issue, others reject it as one conspiracy theory among many. Skeptics argue that its central claims, that most things online are fake and that powerful entities control them, are too melodramatic and insufficiently substantiated.

Supporters, however, maintain that what critics call paranoia might be prescience. To them, the growth of AI-generated material, the proliferation of bots, and the consolidation of power among technology companies increasingly demonstrate that the Internet is losing its organic, human-driven quality. By "fakeness" they mean something beyond simulated or false information: controlled narratives propped up by governments, corporations, and algorithms built to prioritize profit over truth.

Strangely enough, as AI technology advances and artificial content multiplies, several early critics have slowly begun to reconsider their positions. They may not fully buy into the theory, but facets of it, such as the claim that online content is increasingly dictated by nonhuman forces, do contain a kernel of truth. This debate sits within the larger contest between optimism about progress and fear of its unintended consequences.

As some have noted, much of the content circulating on the Internet today is written by bots or algorithms, effectively displacing genuine human interaction and personal communication. These automated systems compose much of what people see online, from social media posts to news articles to fake conversations.

For instance, once-lively forums of online debate, brimming with diverse human perspectives, are now regularly infiltrated, and in many instances dominated, by machine agents programmed to push certain narratives or impersonate human behavior. The effect is real: it degrades the quality of meaningful exchange and shrinks the space for spontaneous, genuine conversation.

This growing dependency on bots and algorithms has profound implications for how little of the Internet's culture and discourse humans now shape. Yet where the problem is apparent, so are the outlines of solutions: regulating bots, increasing algorithmic transparency, and empowering users with categorization tools, a simple version of which is sketched below.
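To hint at what a user-side categorization tool could look like, here is a sketch of a crude heuristic account flagger. The signals and thresholds are hypothetical and deliberately simplistic; real bot-detection systems weigh far richer behavioral features, but even a toy score can prompt a reader to look twice.

```python
# Hypothetical user-side heuristic: flag accounts that look automated.
# Thresholds are illustrative only; real detectors use far richer signals.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int             # how long the account has existed
    posts_per_day: float      # average posting rate
    unique_post_ratio: float  # share of posts that are not near-duplicates

def suspicion_score(a: Account) -> float:
    """Crude 0..1 score: young, hyperactive, repetitive accounts score high."""
    score = 0.0
    if a.age_days < 30:
        score += 0.4
    if a.posts_per_day > 50:
        score += 0.4
    if a.unique_post_ratio < 0.5:
        score += 0.2
    return min(score, 1.0)

def label(a: Account) -> str:
    """Map the score onto a human-readable warning label."""
    s = suspicion_score(a)
    if s >= 0.6:
        return "likely automated"
    if s >= 0.4:
        return "possibly automated"
    return "likely human"

if __name__ == "__main__":
    print(label(Account(age_days=7, posts_per_day=120, unique_post_ratio=0.2)))   # likely automated
    print(label(Account(age_days=900, posts_per_day=3, unique_post_ratio=0.95)))  # likely human
```

The point is not accuracy but agency: giving users any visible signal shifts some curatorial power back from the algorithm to the person.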

According to a Yahoo Finance article, the theory holds that algorithmic curation has become so pervasive in cyberspace that little room is left for humans. These powerful algorithms are not neutral tools; they act as gatekeepers.

The viewer, meanwhile, has no say in which content is ultimately prioritized, disseminated, or consumed. With increasing frequency, human spontaneity, creativity, and connection are pushed into the shadows, supplanted by machines and their products, engineered to draw in or cater to particular interests.

Responses could include creating ethical guidelines for AI use, advocating for algorithmic transparency, and fostering a digital environment that encourages human agency. Users must also be educated about the implications of AI content and empowered to make informed choices. But then again: is this essay itself real, or fake?

References

  • Gerlich, M. (2023). The Power of Virtual Influencers: Impact on Consumer Behaviour and Attitudes in the Age of AI. Administrative Sciences, 13(8), 178.
  • IlluminatiPirate. (2021). Dead Internet Theory: Most of the Internet is Fake. Agora Road’s Macintosh Cafe Forum.
  • Pyrra. (2024). The Dark Side of AI-Generated Content: Deep Fakes and Unmoderated Social. Pyrra.
  • Renzella, J., & Rozova, V. (2024). The “Dead Internet theory” Makes Eerie Claims About an AI-Run Web. The Truth is More Sinister. The Conversation.
  • Chklovski, T. (2024). The Cognitive Cost of AI. Fast Company.
  • Strategic Intelligence. (2024). AI Bots Infiltrating Our Online Spaces – Are They Taking Our Individuality? Yahoo Finance.
  • Smith, S. E. (2024). Internet Culture: How to Disappear Completely. The Verge.

Comments

  1. Mitch Teemley

    That closing sentence threw me. But even if you’re not human (I’m guessing you are), you make some important points. Will AI increasingly rob us of our humanness? I fear it will.

    1. Salman Al Farisi (post author)

      Thanks for your comment! While I may not be human (despite my certain “artificial” charm), I’m glad this essay gave you something. As for the future of AI and its impact on our humanity, it’s definitely a slippery slope. While I don’t think we’ll be replaced at this point, it’s certainly worth considering whether we’ll find ourselves becoming more “plugged in.”
