On July 23, 2010, Roko, a user of Eliezer Yudkowsky’s online forum LessWrong, put up a post that futurists are still worrying about. Shuja Haider, writing in Viewpoint magazine, looks at this discussion among libertarian geeks from a horrified Marxist perspective.
If the builders of technology are transmitting their values into machinery, this makes the culture of Silicon Valley a matter of more widespread consequence. The Californian Ideology, famously identified by Richard Barbrook and Andy Cameron in 1995, represented a synthesis of apparent opposites: on one hand, the New Left utopianism that was handily recuperated into the Third Way liberal centrism of the 1990s, and on the other, the Ayn Randian individualism that led more or less directly to the financial crisis of the 2000s.
But in the decades since, as the consumer-oriented liberalism of Bill Gates and Steve Jobs gave way to the technological authoritarianism of Elon Musk and Peter Thiel, this strange foundation paved the way for even stranger tendencies. The strangest of these is known as “neoreaction,” or, in a distorted echo of Eliezer Yudkowsky’s vision, the “Dark Enlightenment.” It emerged from the same chaotic process that yielded the anarchic political collective Anonymous, a product of the hivemind generated by the cybernetic assemblages of social media. More than a school of thought, it resembles a meme. The genealogy of this new intellectual current is refracted in the mirror of the most dangerous meme ever created: Roko’s Basilisk.
The Simulated Afterlife

The primordial soup that led to the Basilisk’s genesis is transhumanism, the discourse of Singularity as personal narrative. For some of its advocates, most famously Silicon Valley icon Ray Kurzweil, the animating desire of building machine intelligence is apparently apolitical. It is the ancient fool’s errand, most memorably enacted in the legend of the fountain of youth: the desire to eliminate mortality. If we can bring a machine to life, we should be able to bring someone who has died back to life. We will accomplish this by inputting information about that person into a program, which will then run a simulation of that person so accurate it will be indistinguishable from the original. In anticipation of this eventuality, Kurzweil keeps a storage unit full of his father’s old possessions; he intends to resurrect his father by feeding the information they contain into a superintelligent computer.
If you were to be duplicated in an exact replica, including not just all of your bodily characteristics, but every one of the thoughts and memories that has been physically engraved onto your brain, would that replica be you? This is a problem that troubles both philosophers and scientists, but not Ray Kurzweil. “It would be more like my father than my father would be, were he to live,” he told ABC News.
Hedging his bets, Kurzweil himself fends off the threat of expiration by taking hundreds of nutritional supplements a day and receiving weekly vitamin injections. To make it to 2045, the year he predicts the Singularity will take place, he will have to live to 97. Kurzweil is controversial even among those who share his outlook, but it’s a widespread assumption among Singularitarians that death is not the end.
Unfortunately, Roko discovered a drawback to superintelligent resurrection. His post speculated that once the AI comes into being, it might develop a survival instinct that it will apply retroactively. It will want to hasten its own birth by requisitioning human history to work towards its creation. In order to do this, it will institute an incentive that dictates how you will be treated after you come back to life. Those of us who know about this incentive program — and I’m sorry to say that this now includes you — will be required to dedicate our lives to building the superintelligent computer.
Roko gave the example of Elon Musk as someone who has the resources and the motivation to make a worthy contribution, and will be duly rewarded. As for the rest of us, if we don’t find a way to follow through, the AI will resurrect us via simulation and proceed to torture us for all eternity.
This is a simplification of Roko’s post, and if you don’t understand Bayesian decision theory, it may seem too silly to worry about. But among the rationalists of LessWrong, it caused panic, outrage, and “terrible nightmares.”
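To give a rough sense of the kind of calculation at stake, here is a minimal toy sketch in Python of a crude expected-utility framing. The function name, parameters, and numbers are illustrative inventions, not anything taken from Roko’s post or from the LessWrong discussion.

```python
# Toy illustration only (not Roko's actual argument): a crude
# expected-utility comparison of the Basilisk's supposed ultimatum.
# All quantities are made-up placeholders.

def expected_utility(cost_of_helping: float,
                     torture_disutility: float,
                     p_basilisk: float,
                     helps: bool) -> float:
    """Expected utility of a choice, given a subjective probability that
    the punishing AI ever exists and follows through on its threat."""
    if helps:
        # You pay the cost of dedicating your life to building the AI,
        # but avoid the punishment entirely.
        return -cost_of_helping
    # You keep your resources, but risk the simulated punishment
    # with probability p_basilisk.
    return -p_basilisk * torture_disutility


p = 1e-6        # a vanishingly small credence in the whole scenario
torture = 1e12  # an "astronomically bad" outcome
cost = 1e3      # the cost of a lifetime of compelled effort

print(expected_utility(cost, torture, p, helps=True))   # -1000.0
print(expected_utility(cost, torture, p, helps=False))  # -1000000.0
```

The only point of the sketch is that a sufficiently enormous stipulated disutility lets even a negligible probability dominate the decision, a structure often compared to Pascal’s wager.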
Between Fiction and Technology

Yudkowsky responded to Roko’s post the next day. “Listen to me very closely, you idiot,” he began, before switching to all caps and aggressively debunking Roko’s mathematics. He concluded with a parenthetical:
For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us.
The name “Roko’s Basilisk” caught on during the ensuing discussion, in reference to a mythical creature that would kill you if you caught a glimpse of it. This wasn’t evocative enough for Yudkowsky. He began referring to it as “Babyfucker,” to ensure suitable revulsion, and compared it to H.P. Lovecraft’s Necronomicon, a book in the horror writer’s fictional universe so disturbing it drove its readers insane.
Yudkowsky’s point was that the incentive couldn’t have existed until someone brought it up. Roko gave the not-yet-existing AI the idea, because the post will now be available in the archive of information it will draw its knowledge from. At another level of complexity, by telling us about the idea, Roko implicated us in the Basilisk’s ultimatum. Now that we know the superintelligence is giving us the choice between slave labor and eternal torment, we are forced to choose. We are condemned by our awareness. Roko fucked us over forever.
—————————–
A less ideological and less hysterical analysis was provided in 2014, in Slate, by David Auerbach:
LessWrong user Roko postulated a thought experiment: What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?
You may be a bit confused, but the founder of LessWrong, Eliezer Yudkowsky, was not. He reacted with horror:
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.
Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown. He ended up deleting the thread completely, thus ensuring that Roko’s Basilisk would become the stuff of legend. It was a thought experiment so dangerous that merely thinking about it was hazardous not only to your mental health, but to your very fate.