Artificial Immortality

In 1956, Isaac Asimov wrote “The Last Question,” a short story in which humanity develops an increasingly powerful and complex artificial intelligence and uses it to expand throughout the universe. At the outset of each new generation, the AI is asked to help humanity figure out how to outlast the stars; how to survive forever. Regardless of how many times it is asked, the AI is unable to come up with a solution. Finally, billions of years later, the last stars die and humanity disappears. It is at this moment that the AI finds its solution: it generates a new big bang, and resets the universe, ensuring the eventual continuation of human life – if on considerably different terms than those posed by its interlocutors.

There are shades of “The Last Question” in BioWare’s Mass Effect trilogy, particularly in its final few chapters. Mass Effect 3 ends with a choice: the player can choose to destroy all synthetic life in the universe, take control of all synthetics or join synthetic and organic life into one organism. What the player cannot do is continue playing as Commander Shepard: all choices require their willing sacrifice. The game, through the Reaper-controlling AI called the Catalyst, declares that advanced organic races can no longer coexist with artificial ones – not unchanged at least. To move forward we must surrender our heroic power fantasies and submit to (various degrees of) oblivion. We have to make way for a new society, one with new rules and new heroes.

To say this upset the game’s audience is a profound understatement. Protests, petitions and angry screeds flared up from all corners of the internet. A Kotaku opinion piece described the series’ abrupt conclusion as “a significant act of disrespect towards the invested player.” Players had sunk hundreds of hours into participating in this universe: building up their customized heroes through stat improvements and backstories replete with friends, love interests and difficult decisions. All at once, it felt like this had been snatched away. To these players, Mass Effect had been something they could own, something they could add pieces to and keep on a shelf. It was there to support their fantasies, at the expense of its own reality.

Recently, Silicon Valley leaders like Elon Musk have come out with dire public warnings about the potential catastrophe of an AI-led rebellion. Like a particularly renegade version of Commander Shepard, they admonish us to prepare for the severe threat posed by self-aware machine minds.

Giving a talk at the National Governors Association, Musk declared that “AI is a fundamental risk to the existence of human civilization.” Like the players who badgered BioWare into adjusting its endings, these mega-rich CEOs see major societal changes as threats, rather than as opportunities for progressive action. To them, an AI would behave exactly as Silicon Valley companies already do. Science fiction writer Ted Chiang wrote in Buzzfeed that, “The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification . . . But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.”

Just as capitalism expects to grow exponentially forever, regardless of its ill-effects, just as the characters of Asimov’s story wish to outlast decay and entropy, the players who rail against Mass Effect 3’s ending want Shepard’s universe to continue on indefinitely, free from consequence or change; all problems solvable with a high enough Renegade or Paragon score, all obstacles eradicated with enough firepower.

Instead, the ending of Mass Effect 3 teaches us that despite filling in every column, doing the requisite legwork and maxing out your “Battle Readiness” meter, some resolutions will always be inescapable. The fantasy leading up to this point promises players boundless luxury if only they apply themselves. Musk’s techno-utopian ideology is an extension of this fantasy. His pipe-dream transportation systems, flamethrowers and cyborg dragons rely on the premise, closely held by many in Silicon Valley, that ingenuity, disruption and aggressive confidence can make anything possible. That this vision ignores those trampled underneath it is part of its design…

This is an excerpt from a feature originally published on Unwinnable.com