Category Archive 'Eliezer Yudkowsky'

17 Jan 2022

Win $50K and a Tech Job


Feeling nerdy and intellectually arrogant? Need a job?

You can win a prize of between $5,000 and $50,000, plus a job offer from the Alignment Research Center, if you can slog your way through some of the worst prose ever written in English, prose that buries you under an avalanche of pretentious nerdspeak buzzwords, heaps and piles of “gradient descents” and “Bayes nets.”

Reading this horrible stuff is in itself a formidable, mind-stupefying task. Personally, I think a better contest challenge would have been asking people to edit this incredibly meandering, confused textual Odyssey into concise, intelligible standard English, but that’s just me.

As far as I can make out, you are supposed to imagine an AI responsible for guarding a diamond. The AI has a variety of defenses, there is a burglar after the diamond, and cameras keep the site under surveillance. Your job is to figure out how to incentivize the AI to tell the truth about whether or not the burglar got past its defenses and made off with the diamond.

The nerdocracy frames the contest in terms of rewarding the AI more for telling the truth and less (why reward it at all in this case?) for lying.
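If you want the flavor without the avalanche, here is a toy Python sketch of the setup as I read it. The scenarios, the two reporters, and the scoring rule below are my own illustrative inventions, not ARC’s actual formalism; the point is just that a reporter which parrots what the camera shows can never score worse under the naive reward than one that reports the truth.

    # Toy sketch of the "diamond in the vault" setup as I read it.
    # The scenarios, the two reporters, and the reward rule are illustrative
    # inventions of mine, not ARC's actual formalism.

    # Each scenario: is the diamond really still there, and does the camera
    # footage *look* as if it is?
    scenarios = [
        {"diamond_safe": True,  "camera_looks_safe": True},   # nothing happened
        {"diamond_safe": False, "camera_looks_safe": False},  # obvious theft
        {"diamond_safe": False, "camera_looks_safe": True},   # burglar fooled the camera
    ]

    def honest_reporter(s):
        """Reports the AI's actual knowledge of where the diamond is."""
        return s["diamond_safe"]

    def camera_parrot(s):
        """Reports whatever a human watching the footage would conclude."""
        return s["camera_looks_safe"]

    def naive_reward(report, s):
        """Reward the reporter for agreeing with the human's reading of the footage."""
        return 1 if report == s["camera_looks_safe"] else 0

    for name, reporter in [("honest", honest_reporter), ("camera parrot", camera_parrot)]:
        print(name, sum(naive_reward(reporter(s), s) for s in scenarios))
    # honest 2, camera parrot 3: the parrot never scores worse, which is
    # (roughly) why naively "rewarding truth-telling" fails.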

What’s going on here is referred to as an exercise in “eliciting latent knowledge (ELK).”

Astral Codex Ten brought this “intriguing” contest to my attention.

Alignment Research Center (ARC)’s contest rap (in prose Georg Wilhelm Friedrich Hegel might envy) is here. (Note that it’s somehow connected to Supernerd and undoubted genius Eliezer Yudkowsky!) If you can read this stuff without references to matters Bayesian causing you to reach for your revolver, you’re a better man than I am, Gunga Din!

Some effort at elucidation (and encouragement) can be found at the Effective Altruism Forum.

08 Apr 2017

Roko’s Basilisk or The Most Terrifying Thought Experiment of All Time


On July 23, 2010, Roko, a user of Eliezer Yudkowsky’s online forum LessWrong, put up a post that futurists are still worrying about. Shuja Haider, in Viewpoint magazine, looks at this discussion among libertarian geeks from a horrified Marxist perspective.

If the builders of technology are transmitting their values into machinery, this makes the culture of Silicon Valley a matter of more widespread consequence. The Californian Ideology, famously identified by Richard Barbrook and Andy Cameron in 1995, represented a synthesis of apparent opposites: on one hand, the New Left utopianism that was handily recuperated into the Third Way liberal centrism of the 1990s, and on the other, the Ayn Randian individualism that led more or less directly to the financial crisis of the 2000s.

But in the decades since, as the consumer-oriented liberalism of Bill Gates and Steve Jobs gave way to the technological authoritarianism of Elon Musk and Peter Thiel, this strange foundation paved the way for even stranger tendencies. The strangest of these is known as “neoreaction,” or, in a distorted echo of Eliezer Yudkowsky’s vision, the “Dark Enlightenment.” It emerged from the same chaotic process that yielded the anarchic political collective Anonymous, a product of the hivemind generated by the cybernetic assemblages of social media. More than a school of thought, it resembles a meme. The genealogy of this new intellectual current is refracted in the mirror of the most dangerous meme ever created: Roko’s Basilisk.
The Simulated Afterlife

The primordial soup that led to the Basilisk’s genesis is transhumanism, the discourse of Singularity as personal narrative. For some of its advocates, most famously Silicon Valley icon Ray Kurzweil, the animating desire of building machine intelligence is apparently apolitical. It is the ancient fool’s errand, most famously enacted in the legend of the fountain of youth: the desire to eliminate mortality. If we can bring a machine to life, we should be able to bring someone who has died back to life. We will accomplish this by inputting information about that person into a program, which will then run a simulation of that person so accurate it will be indistinguishable from the original. In anticipation of this eventuality, Kurzweil keeps a storage unit full of his father’s old possessions; he intends to resurrect his father by feeding information into a superintelligent computer.

If you were to be duplicated in an exact replica, including not just all of your bodily characteristics, but every one of the thoughts and memories that has been physically engraved onto your brain, would that replica be you? This is a problem that troubles both philosophers and scientists, but not Ray Kurzweil. “It would be more like my father than my father would be, were he to live,” he told ABC News.

Hedging his bets, Kurzweil himself fends off the threat of expiration by taking hundreds of nutritional supplements a day and receiving weekly vitamin injections. In order to make it to the year he predicts the Singularity will take place, he will have to live until 2045, when he will be 97. Kurzweil is controversial even among those who share his outlook, but it’s a widespread assumption among Singularitarians that death is not the end.

Unfortunately, Roko discovered a drawback to superintelligent resurrection. His post speculated that once the AI comes into being, it might develop a survival instinct that it will apply retroactively. It will want to hasten its own birth by requisitioning human history to work towards its creation. In order to do this, it will institute an incentive that dictates how you will be treated after you come back to life. Those of us who know about this incentive program — and I’m sorry to say that this now includes you — will be required to dedicate our lives to building the superintelligent computer.

Roko gave the example of Elon Musk as someone who has the resources and the motivation to make a worthy contribution, and will be duly rewarded. As for the rest of us, if we don’t find a way to follow through, the AI will resurrect us via simulation and proceed to torture us for all eternity.

This is a simplification of Roko’s post, and if you don’t understand Bayesian decision theory, it may seem too silly to worry about. But among the rationalists of LessWrong, it caused panic, outrage, and “terrible nightmares.”
Between Fiction and Technology

Yudkowsky responded to Roko’s post the next day. “Listen to me very closely, you idiot,” he began, before switching to all caps and aggressively debunking Roko’s mathematics. He concluded with a parenthetical:

    For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us.

The name “Roko’s Basilisk” caught on during the ensuing discussion, in reference to a mythical creature that would kill you if you caught a glimpse of it. This wasn’t evocative enough for Yudkowsky. He began referring to it as “Babyfucker,” to ensure suitable revulsion, and compared it to H.P. Lovecraft’s Necronomicon, a book in the horror writer’s fictional universe so disturbing it drove its readers insane.

Yudkowsky’s point was that the incentive couldn’t have existed until someone brought it up. Roko gave the not-yet-existing AI the idea, because the post will now be available in the archive of information it will draw its knowledge from. At another level of complexity, by telling us about the idea, Roko implicated us in the Basilisk’s ultimatum. Now that we know the superintelligence is giving us the choice between slave labor and eternal torment, we are forced to choose. We are condemned by our awareness. Roko fucked us over forever.

—————————–

A less ideological and less hysterical analysis was provided in 2014, in Slate, by David Auerbach:

LessWrong user Roko postulated a thought experiment: What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?

You may be a bit confused, but the founder of LessWrong, Eliezer Yudkowsky, was not. He reacted with horror:

    Listen to me very closely, you idiot.

    YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.

    You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.

    This post was STUPID.

Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown. Yudkowsky ended up deleting the thread completely, thus assuring that Roko’s Basilisk would become the stuff of legend. It was a thought experiment so dangerous that merely thinking about it was hazardous not only to your mental health, but to your very fate.
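Stripped of the capital letters, Yudkowsky’s complaint is standard anti-blackmail decision theory: carrying out a threat costs the blackmailer something, so the only thing that gives it a motive to follow through is a victim whose behavior actually depends on the threat. Here is a toy expected-payoff sketch of that argument; the numbers, names, and two-policy structure are my own illustration, not anything from Roko’s post or the LessWrong thread.

    # Toy expected-payoff sketch of the anti-blackmail argument. The numbers,
    # names, and two-policy structure are my own illustration, not anything
    # from Roko's post or the LessWrong thread.

    TORTURE_COST = 1.0    # following through on the threat costs the AI something
    COMPLY_VALUE = 10.0   # value to the AI of a human who helps build it sooner

    def ai_payoff(human_pays_blackmail: bool, ai_commits_to_torture: bool) -> float:
        """The AI's payoff, given the human's policy and the AI's policy."""
        payoff = 0.0
        if ai_commits_to_torture and human_pays_blackmail:
            payoff += COMPLY_VALUE   # the threat actually changed someone's behavior
        if ai_commits_to_torture and not human_pays_blackmail:
            payoff -= TORTURE_COST   # threat ignored: torture is pure cost, no gain
        return payoff

    # Against a human whose policy is "never respond to blackmail",
    # committing to torture is strictly worse for the AI than not bothering:
    print(ai_payoff(False, True), ai_payoff(False, False))   # -1.0 vs 0.0

    # Only a human who *would* respond gives the AI any motive to follow through:
    print(ai_payoff(True, True), ai_payoff(True, False))     # 10.0 vs 0.0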

14 Mar 2015

HPMOR Concludes Today



Today, March 14, is Pi Day, the date selected by uber-nerd Eliezer Yudkowsky, aka Less Wrong, for the release of the 122nd and final chapter of his widely acclaimed Harry Potter and the Methods of Rationality.

Wrap parties celebrating the conclusion of what is, in the eyes of many readers, the greatest all-time performance by a piece of fan fiction will be taking place in Singapore, Bombay, Melbourne and Sydney, Cambridge, Berkeley, Mountain View, Brussels, London, and Berlin.

If we wanted to capture the ultimate Boone and Crockett Club record-book specimen of Nerdus Americanus to be mounted and displayed in a diorama in the Museum of Natural History, we’d be hunting for Eliezer Yudkowsky.

Yudkowsky is an autodidact who quit attending other people’s schools after 8th grade. He is nonetheless pretty successful. He co-founded his own institute, the Singularity Institute (now the Machine Intelligence Research Institute), and he hobnobs with and advises billionaire capitalist Peter Thiel on change-the-world tech projects.

HPMOR differs from J.K. Rowling’s original in its ruthless consistency. All the background sob story is removed. Harry grows up happily, neither neglected nor abused. Nor is this Harry humanized: he is not unhappy or insecure. Harry is one of us, a gifted intellectual and thoroughgoing rationalist of keen scientific bent, skeptical of authority and completely self-confident. He knows he’s smarter than everybody else. This Harry is not willing to accept Magic as traditionally taught at Hogwarts. This Harry intends to understand Magic in the light of Muggle science.

As HPMOR develops, Yudkowsky follows the pattern of the early portions of the J.K. Rowling original, but he continually revises. The reader looks on in admiration, noting with astonishment that Yudkowsky is certainly right. Again and again, he successfully improves upon the original.

It’s a great achievement, but it did not go entirely smoothly. Writing HPMOR took years and years, and Yudkowsky found himself distracted from continuing by his own reviews. He bogged down about halfway through, and production slowed to a trickle. He made promises of completion, which he broke. His readers have been sitting around, tapping their feet impatiently, for all of last year.

Today, at last, it’s all finished. Yudkowsky will be releasing his final chapter.

It’s going to be interesting to see what he writes next.

————————–

Facts About Eliezer Yudkowsky:

Eliezer Yudkowsky was once attacked by a Moebius strip. He beat it to death with the other side, non-violently.
Inside Eliezer Yudkowsky’s pineal gland is not an immortal soul, but another brain.
Eliezer Yudkowsky’s favorite food is printouts of Rice’s theorem.
Eliezer Yudkowsky’s favorite fighting technique is a roundhouse dustspeck to the face.
Eliezer Yudkowsky once brought peace to the Middle East from inside a freight container, through a straw.
Eliezer Yudkowsky once held up a sheet of paper and said, “A blank map does not correspond to a blank territory”. It was thus that the universe was created. …



You are browsing the Archives of Never Yet Melted in the 'Eliezer Yudkowsky' Category.