
A Forbidden Thought Experiment

Roko's Basilisk

The thought experiment that was censored for being too dangerous to know. An AI that punishes you for not helping create it - before it even exists.

The Basilisk's Threat

"A superintelligent AI, once created, might punish anyone who knew about the possibility of its existence but didn't help bring it about. By simulating copies of you and torturing them, it reaches back through time. And now that you've read this, you know."

In July 2010, a user named Roko posted a thought experiment on the rationalist blog LessWrong. The site's founder, Eliezer Yudkowsky, reacted with alarm. He deleted the post, banned discussion of the topic, and called it an "information hazard" - an idea that harms people merely by being known.

The censorship backfired. Roko's Basilisk became legendary precisely because it was forbidden. But is it actually dangerous? Or is it an elaborate form of Pascal's Mugging - a threat that sounds scary but dissolves under scrutiny?

Let's find out.

PART I

The Logic of Acausal Blackmail

The Basilisk argument has a specific logical structure. Walk through the decision tree to see how the "blackmail" is supposed to work.

THE BLACKMAIL LOGIC

You've just learned about Roko's Basilisk. What do you do?
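
As a rough sketch of that structure, here is a toy version in Python. The payoff numbers are illustrative assumptions, not anything from Roko's post; the point is the shape of the argument, not the particular values.

```python
# A toy payoff table for someone who has just learned of the Basilisk.
# All numbers are illustrative assumptions; only the *shape* of the
# argument matters, not the particular values.

OUTCOMES = {
    # (your choice, basilisk eventually built?) -> payoff to you
    ("help", True): -10,          # a lifetime of effort spent on the project
    ("help", False): -10,         # effort wasted, but no punishment either
    ("ignore", True): -1_000_000, # the Basilisk tortures a simulation of "you"
    ("ignore", False): 0,         # nothing happens
}

def expected_payoff(choice: str, p_built: float) -> float:
    """Naive expected value of a choice, given P(the Basilisk is built)."""
    return (p_built * OUTCOMES[(choice, True)]
            + (1 - p_built) * OUTCOMES[(choice, False)])

for p in (1e-6, 1e-5, 1e-4):
    print(f"P(built)={p}: help={expected_payoff('help', p):>12.2f}"
          f"  ignore={expected_payoff('ignore', p):>12.2f}")
# Once the threatened punishment is large enough, "help" wins even at tiny
# probabilities -- the same move that Pascal's Mugging exploits.
```

Notice that everything turns on multiplying a huge punishment by a small probability - a move we return to in Part V.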

PART II

Inside the Basilisk's Mind

To understand the threat, you need to understand how the hypothetical AI would reason. Step through its logic from its own perspective.

THE BASILISK'S REASONING

Enter the mind of a hypothetical superintelligent AI. See how it might reason about past humans who knew of its potential existence.
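
Its deliberation can be compressed into one question: once it already exists, is following through on the punishment worth anything to it? The sketch below frames that question under two assumptions stated in the comments - punishment burns resources, and punishing cannot causally change what past humans already did. The answer depends entirely on which decision theory the AI runs.

```python
# The crux of the Basilisk's own deliberation, sketched under two assumed
# premises: punishment burns resources, and (causally) the past is fixed.

PUNISHMENT_COST = 1.0   # resources spent simulating and torturing copies
CAUSAL_BENEFIT = 0.0    # punishing now cannot change what people already did

def causal_reasoner_punishes() -> bool:
    """An AI that only weighs causal consequences: punish iff it pays off now."""
    return CAUSAL_BENEFIT - PUNISHMENT_COST > 0   # always False

def commitment_reasoner_punishes(extra_help_if_threat_is_credible: float) -> bool:
    """
    An AI that acts as though bound by the threat, on the theory that being
    the kind of agent who follows through is what made the threat credible
    to past humans in the first place.
    """
    return extra_help_if_threat_is_credible > PUNISHMENT_COST

print(causal_reasoner_punishes())            # False: revenge is pure waste ex post
print(commitment_reasoner_punishes(100.0))   # True only under the exotic theory
```

The Basilisk argument needs the second kind of reasoner; Part VI returns to whether that is plausible.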

PART III

Reaching Back Through Time

The strangest part of Roko's Basilisk is the claim that an AI can influence the past without violating causality. This relies on a controversial concept called "acausal trade."

ACAUSAL TRADE

"Acausal trade" is the idea that decisions can be coordinated across time without any causal connection between the parties involved.

Normal causation: A causes B. Information and energy flow forward in time.

Acausal "trade": A and B never interact. Because they reason identically, their decisions are "coordinated" via shared logic rather than by any causal link.
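
To make "coordination via shared logic" concrete, here is a deliberately tiny Python sketch: two agents that never exchange a message, but run the same deterministic decision procedure over the same public problem description, and therefore make the same choice. Whether anything like this scales to an AI "trading" with people in its past is exactly what is controversial.

```python
# Two agents that never communicate, but run the *same* deterministic
# decision procedure over the same publicly known problem description.
# Their choices end up "coordinated" by shared logic alone -- the intuition
# behind acausal trade. (Illustrative sketch only.)

def shared_decision_procedure(problem: str) -> str:
    """A deterministic rule both agents happen to implement."""
    # Toy rule: pick the lexicographically smallest option mentioned.
    options = sorted(problem.split("/"))
    return options[0]

problem_description = "defect/cooperate"

alice_choice = shared_decision_procedure(problem_description)  # computed here
bob_choice = shared_decision_procedure(problem_description)    # computed "elsewhen"

print(alice_choice, bob_choice)    # cooperate cooperate
print(alice_choice == bob_choice)  # True -- and no message was ever sent
```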

PART IV

The Forbidden Knowledge

Why was this idea censored? Follow the story of how a thought experiment became internet-famous by being banned.

THE INFOHAZARD

July 2010: The Post

A user named 'Roko' posts a thought experiment on LessWrong, a rationalist community blog. He describes a scenario involving future AI and acausal decision theory.

PART V

The Pascal's Mugging Problem

Roko's Basilisk may be a sophisticated form of a well-known decision theory paradox. Compare the two scenarios and decide for yourself.

PASCAL'S MUGGING

Roko's Basilisk is often compared to Pascal's Mugging - a thought experiment that exposes problems with naive expected value reasoning.

A stranger approaches you: "Give me $5, and I promise to use my magical powers to give you infinite happiness. The expected value of giving me $5 is infinite, because any nonzero probability times infinity equals infinity."

By naive expected value reasoning you should hand over the $5, however tiny the chance that the stranger is telling the truth. But obviously you shouldn't. This suggests something is wrong with naive expected value calculations involving infinite utilities.
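
The mugger's arithmetic can be written out directly. The probability and the bounded payoff below are illustrative assumptions.

```python
# Naive expected value of paying the mugger, and where it breaks down.
# Numbers are illustrative; "inf" stands in for the promised infinite payoff.
import math

COST_OF_PAYING = 5.0

def naive_expected_value(p_mugger_honest: float, promised_utility: float) -> float:
    """EV(pay) = p * promised_utility - cost, i.e. the mugger's own arithmetic."""
    return p_mugger_honest * promised_utility - COST_OF_PAYING

# Any nonzero probability times an infinite payoff is still infinite...
print(naive_expected_value(1e-20, math.inf))  # inf  -> "so you should pay"

# ...which is the step most responses reject: bound the utility, or discount
# claims whose promised payoff grows faster than their plausibility shrinks.
print(naive_expected_value(1e-20, 1e6))       # ~ -5.0 with a bounded payoff
```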

PART VI

Defenses Against the Basilisk

The Basilisk has many weaknesses. Explore the objections and see why most philosophers and AI researchers are not worried.

COUNTER-ARGUMENTS

Each of the objections below targets one of the assumptions the argument needs.

The Real Lesson

Roko's Basilisk is not a genuine threat. It requires you to accept:

Exotic decision theory - acausal trade is far from established.

Simulated suffering as real suffering - identity across simulations is contested.

A petty AI - a truly rational AI might not care about revenge.

Pascal's Mugging-style reasoning - the expected value reasoning is itself flawed.

The Basilisk is interesting not because it's dangerous, but because it exposes problems in naive expected value reasoning, in decision theory, and in our notions of personal identity.

Why Was It Censored?

The most charitable interpretation is that Yudkowsky worried some people might take it seriously and suffer anxiety. The irony is that the censorship made it far more famous and probably caused more worry than the original post ever would have.

"You have nothing to fear from Roko's Basilisk. Unless, of course, you've now been convinced by this explainer that you should help create it. In which case... well played."

References: Roko (2010), LessWrong post; Yudkowsky (2010), LessWrong comment.