Beyond the black mirror

Keir Dullea as astronaut David Bowman in “2001: A Space Odyssey” (1968)

Artificial intelligence has a storied history in the annals of cinema, from its earliest beginnings in Fritz Lang’s “Metropolis” to Stanley Kubrick’s seminal depiction of AI in “2001: A Space Odyssey.” The concept of a humanlike algorithmic intelligence is perfect material for exploring themes of humanity, consciousness, technological overreach and capitalism. However, as AI has ventured from the realm of fiction into non-fiction, the kinds of impact it will have on the world have not been fully realized through the medium. For instance, Skynet (from the Terminator franchise) is often invoked as an allegory for how the development of such “thinking machines” can go wrong, as is the more realistic scenario of HAL 9000’s programming conflicting with its directives. These are relatively plausible scenarios, but they fail to encapsulate the full horror that unchecked automation can produce.

Consider Stanley Kubrick’s “2001,” where the core conflict concerning HAL 9000 is terrifyingly plausible—the AI begins murdering its crew despite safeguards meant to prevent that very thing—yet the situation is confined to the ship and poses little existential risk. Rather, HAL 9000 serves as a narrative obstacle for Dave to overcome, through which the core theme—humanity versus our reliance on tools—is expressed dramatically. At the opposite extreme, James Cameron’s Skynet wages all-out thermonuclear war against humanity. That scenario feels even less plausible than HAL 9000’s, despite its stakes (global takeover) feeling more appropriate to the impact AI will have in reality. The point is, both scenarios fail to capture the sheer magnitude of the issue.

Other films get a little closer, like the more recent examples “Her,” “Ex Machina,” and “Blade Runner 2049,” but most of those are still concerned with the interpersonal confusion an artificial being introduces. Others, like “The Matrix,” treat the machines as plot devices for worldbuilding exposition so that the film’s other themes can be explored. More egregiously still, horror films treat the big, bad robot as a slasher rather than an omnipresent, existential risk to humanity. So which films accurately portray the full existential risk of “Terminator” with the frightening plausibility of “2001”?

Before answering that question, it’s important to consider the current state of AI development, as well as the pace of future research. As of this writing, Google DeepMind has released a state-of-the-art model called Veo 3, which can generate photorealistic video and audio from a text prompt. The power of these current-generation generative pre-trained transformer large language models is not in how deep their neural networks go (or how many layers of machine learning are happening) but in how decentralized their effects are. Rather than a single mind becoming self-aware and deciding to take over the world, it’s more like a drone swarm of artificial intelligence that becomes so powerful it is impossible NOT to use, and thus becomes ubiquitous throughout the entire world. The economic and spiritual effects may be devastating as entire industries are disrupted (film and tech are being hit hardest as of this writing) and people lose not only their jobs but their sense of purpose in an increasingly digitized world. Additionally, western culture was already in dire straits when it came to fake news and critical discernment before 2017 (the year the paper “Attention Is All You Need” was published), so what does that mean for our society’s collective ability to reach consensus now that we live in a post-GPT world? Especially considering the arms race is already underway, both between countries and within them (Google, xAI, OpenAI and Meta are all battling for supremacy).

To be honest, I think the answer may very well be the most recent additions to the Mission: Impossible series, the “Dead Reckoning” and “Final Reckoning” duo released in 2023 and 2025, respectively.

In the conclusion to the lengthy Mission: Impossible franchise, Ethan Hunt does battle with a rogue artificial intelligence known only as the Entity. The Entity is a self-aware, self-replicating, truth-devouring parasitic AI with global reach and awareness. It can mimic any voice, erase any camera footage, and hack into any database, and it eventually gains control of the world’s nuclear arsenal, much like Skynet. The difference lies in execution. Whereas the horror of Skynet in the Terminator franchise comes from the development process recurring unabated across multiple timelines, the actual “singularity” moment there is always a singular, climactic event. In “Dead Reckoning” and its follow-up, “The Final Reckoning,” the singularity has already happened, and we see its effects on an interconnected, globalized society.

One of the main conceits of both films is that “we can’t turn it off” because “we need it too much,” and that expertly maps onto what the development of this technology looks like in real life. In “2001,” Dave eventually disables HAL’s higher cognitive functions while the ship’s automated functions keep running smoothly. Doing that with a real-world AGI may not be possible, since the higher-level cognitive functions are what power the automated functions within the model. That’s before accounting for the sprawling, interconnected nature of our digital systems. According to SOAX, the internet moves more than 400 million terabytes of data per day (a terabyte is 1,024 gigabytes), and most of that data is handled by the narrow, rule-bound software systems that were the hallmark of 20th-century computer programming. However, as big data grows more cumbersome, with nearly 200 zettabytes of information generated annually, reliance upon more powerful AI systems, such as a distributed AGI, may become a necessity. That’s precisely the scenario predicted in both of the concluding Mission: Impossible films.
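The two figures above can be sanity-checked against each other. A minimal back-of-the-envelope sketch in Python, using only the numbers cited in this essay and assuming decimal units for the zettabyte conversion (1 ZB = 10⁹ TB):

```python
# Back-of-the-envelope check: does ~400 million TB of internet traffic
# per day square with ~200 zettabytes of data generated per year?
# Assumption: decimal prefixes for the zettabyte conversion (1 ZB = 1e9 TB).

daily_traffic_tb = 400e6                    # SOAX figure: ~400 million TB/day
yearly_traffic_tb = daily_traffic_tb * 365  # scale to one year

annual_data_zb = 200                        # ~200 ZB generated annually
annual_data_tb = annual_data_zb * 1e9       # convert ZB -> TB (decimal)

print(f"Internet traffic per year: {yearly_traffic_tb:.2e} TB")
print(f"Data generated per year:  {annual_data_tb:.2e} TB")
print(f"Ratio (generated/moved):  {annual_data_tb / yearly_traffic_tb:.1f}x")
```

The two estimates land within the same order of magnitude, which is about all a figure like this can promise; the point stands that the volumes involved are far beyond human-scale curation.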

To conclude, AGI’s greatest risk lies not in the danger of thermonuclear war but in the manipulation of truth and narrative that falls short of mutually assured destruction. Even if nuclear warfare is never an option, there are myriad scenarios in which an artificial general intelligence gone rogue could radically alter our way of life, and myriad others in which things still go wrong even IF the technology remains firmly within human control (another question for another website).

As the artificial intelligence of another storytelling medium, 2001’s “Metal Gear Solid 2,” put it:
“…in the current, digitized world, trivial information is accumulating every second, preserved in all its triteness. Never fading, always accessible. Rumors about petty issues, misinterpretations, slander... All this junk data preserved in an unfiltered state, growing at an alarming rate. It will only slow down social progress, reduce the rate of evolution. We're trying to stop that from happening. It's our responsibility as rulers. Just as in genetics, unnecessary information and memory must be filtered out to stimulate the evolution of the species.”

And to quote T.S. Eliot: “This is the way the world ends / Not with a bang but a whimper.”