Alignment Research Center (ARC), Artificial Intelligence (AI), Contest, Eliciting Latent Knowledge (ELK), Eliezer Yudkowsky, Nerds
Feeling nerdy and intellectually arrogant? Need a job?
You can win a prize of between $5,000 and $50,000, and a job offer from the Alignment Research Center, if you can slog your way through some of the worst prose ever written in English — prose that buries you in an avalanche of pretentious nerdspeak buzzwords, heaps and piles of “gradient descents” and “Bayes nets.”
Reading this horrible stuff is in itself a formidable, mind-stupefying task. Personally, I think a better contest challenge would have been to ask people to edit this incredible, meandering, confused textual Odyssey into concise, intelligible standard English, but that’s just me.
As far as I can make out, you are supposed to imagine that we’ve got an AI responsible for guarding a diamond. The AI has a variety of defenses, there’s a burglar after the diamond, and there are cameras doing surveillance of the site. You are supposed to figure out how to incentivize the AI to tell the truth about whether or not the burglar got past its defenses and made off with the diamond.
The nerdocracy frames the contest in terms of rewarding the AI more for telling the truth and rewarding it less (why reward it at all in this case?) for lying.
What’s going on here is referred to as an exercise in “eliciting latent knowledge (ELK).”
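If you strip away the buzzwords, the incentive scheme amounts to something like the toy sketch below. To be clear, the names and code here are my own illustration, not ARC’s actual formalism; the wrinkle, as best I can tell, is that the reward can only be computed from what the cameras show, not from the ground truth itself.

```python
# Toy illustration of the contest's incentive setup (hypothetical names,
# not ARC's formalism): pay the AI more for truthful reports than lies.

def reward(reported_safe: bool, actually_safe: bool) -> float:
    """Higher reward when the AI's report matches the ground truth."""
    return 1.0 if reported_safe == actually_safe else 0.0

# The catch: the trainers can't see the ground truth, only the cameras.
# An AI that just reports what the cameras *appear* to show earns the
# same reward as an honest one, even if the burglar fooled the cameras.
def observed_reward(reported_safe: bool, cameras_look_safe: bool) -> float:
    """Reward computed from camera footage, the only evidence available."""
    return 1.0 if reported_safe == cameras_look_safe else 0.0
```

So if the burglar tapes a photo of the diamond over the lens, a lying report of “all safe” collects full marks from `observed_reward` — which is, supposedly, the whole problem of eliciting the AI’s latent knowledge.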
Astral Codex Ten brought this “intriguing” contest to my attention.
Alignment Research Center (ARC)’s contest rap (in prose Georg Wilhelm Friedrich Hegel might envy) is here. (Note that it’s somehow connected to Supernerd and undoubted genius Eliezer Yudkowsky!) If you can read this stuff without references to matters Bayesian causing you to reach for your revolver, you’re a better man than I am, Gunga Din!
Some effort at elucidation (and encouragement) can be found at the Effective Altruism Forum.