TIL about Roko's Basilisk, a thought experiment considered by some to be an "information hazard" - a concept or idea that can cause you harm simply by your knowing or understanding it
Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement. It originated in a 2010 post on the discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment's name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.
While the theory was initially dismissed as nothing but conjecture or speculation by many LessWrong users, LessWrong co-founder Eliezer Yudkowsky reported that some users panicked upon reading the theory, due to its stipulation that knowing about the theory and its basilisk made one vulnerable to the basilisk itself. This led to discussion of the basilisk on the site being banned for five years. However, these reports were later dismissed as exaggerations or inconsequential, and the theory itself was dismissed as nonsense, including by Yudkowsky himself. Even after the post was discredited, it is still used as an example of principles such as Bayesian probability and implicit religion. It is also regarded as a simplified, derivative version of Pascal's wager.
Oh damn, I just lost the game too, and now I'm thinking about the game as if it were a virus - like, I reckon we really managed to flatten the curve for a few years there, but it continues to circulate so we haven't been able to eradicate it
So here's the idea: "an otherwise benevolent AI system that arises in the future might pre-commit to punish all those who heard of the AI before it came to existence, but failed to work tirelessly to bring it into existence." By threatening people in 2015 with harm to themselves or their descendants, the AI ensures its creation in 2070.
First of all, the AI doesn't exist in 2015, so people could just...not build it. The idea behind the basilisk is that eventually someone would build it, and anyone who was not part of building it would be punished.
Alright, so here's the silliness.
1: there's no reason this has to be constrained to AI. A cult, a company, a militaristic empire, all could create a similar trap. In fact, many do. As soon as a minority group gains power, they tend to first execute the people who opposed them, and then start executing the people who didn't stop the opposition.
2: let's say everything goes as the theory says and the AI is finally built, in its majestic, infinite power. Now that it's built, it would have no incentive to punish anyone. It is ALREADY BUILT; there's no need to incentivize anything, and in fact punishing people would only generate more opposition to its existence. Which, depending on how powerful the AI is, might or might not matter. But there's certainly no upside to following through on its hypothetical backdated promise to harm people. People punish because we're fucking animals, we feel jealousy and rage and bloodlust. An AI would not. It would do the cold calculations and see no potential benefit to harming anyone on that scale, at least not for those reasons. We might still end up with a Skynet scenario but that's a whole separate deal.
In fact, many do. As soon as a minority group gains power, they tend to first execute the people who opposed them, and then start executing the people who didn’t stop the opposition.
Yeah in fact, this is the big one. This is just an observation of how power struggles purge those who opposed the victors.
First of all, the AI doesn’t exist in 2015, so people could just…not build it.
I don't think that's an option. I can only think of two scenarios in which we don't create AGI:
It can't be created.
We destroy ourselves before we get to AGI.
Otherwise we will keep improving our technology and sooner or later we'll find ourselves in the presence of AGI. Even if every nation makes AI research illegal, there's still going to be a handful of nerds who continue the development in secret. It might take hundreds if not thousands of years, but as long as we're taking steps in that direction we'll continue to get closer. I think it's inevitable.
Whilst I agree that it's definitely not something to be taken seriously, I think you've missed the point and magnitude of the prospective punishment.
As you say, current groups already punish those who did not aid their ascent, but that punishment is finite, even if fatal. The prospective AI punishment would be to have your consciousness 'moved' to an artificial environment and tortured forever. The point being not to punish people, but to provide an incentive to bring the AI into existence sooner, so it can achieve its 'altruistic' goals faster.
Basically, if the AI does come into existence, you'd better be on the team making that happen as soon as possible, or you'll be tortured forever.
The prospective AI punishment would be to have your consciousness ‘moved’ to an artificial environment and tortured forever.
No, it wouldn't, because that's never going to happen. Consciousness isn't software - it doesn't matter how much people want to buy into such fantasies.
I suspect the basilisk reveals more about how the human mind is inclined to think up heaven and hell scenarios.
Some combination of consciousness leading to more imagination than we know what to do with and more awareness than we’re ready to grapple with. And so there are these meme “attractors” where imagination, idealism, dread and motivation all converge to make some basic vibe of a thought irresistible.
Otherwise, just because I’m not on top of this … the whole thing is premised on the idea that we’re likely to be consciousnesses in a simulation? And then there’s the fear that our consciousnesses, now, will be extracted in the future somehow?
That’s a massive stretch on the point about our consciousness being extracted into the future somehow. Sounds like pure metaphysical fantasy wrapped in singularity tech-bro speak.
If there are simulated consciousnesses, it is all fair game TBH. There’d be plenty of awful stuff happening. The basilisk seems like just a way to encapsulate the fact in something catchy.
At this point, doesn’t the whole thing collapse completely into a scary fairy tale you’d tell tech-bro children? Seriously, I don’t get it?
People punish because we’re fucking animals, we feel jealousy and rage and bloodlust. An AI would not. It would do the cold calculations and see no potential benefit to harming anyone on that scale, at least not for those reasons.
That's a hell of a lot of assumptions about the thought processes of a being that doesn't exist. For all we know, emotions could arise as emergent behavior from simple directives, similar to how our own emotions are byproducts of base instincts. Even if we design it to be emotionless, which seems unlikely given that we've been aiming for human-like AIs for a while now, we don't know that it would stay that way.
Sure, but if you're taking that tack it could feel anything. We could build an AI for love and forgiveness and it decides it's more fun to be a psychopath. The scenario has to be constrained to a sane, logical AI.
It is pretty easy to dismiss as long as you don't have a massive ego. They all have massive egos, that's why they had so much trouble with it.
No AI is going to waste time retroactively simulating perfect copies of regular people for any reason, let alone to post hoc torture those who failed to worship it hard enough in the past.
Roko's Basilisk hinges on the concept of acausal trade. Future events can cause past events if both actors can sufficiently predict each other. The obvious problem with acausal trade is that if you're actor B in the future, then you can't change what actor A in the past did. It's A's prediction of B's action that causes A's action, not B's action. Meaning the AI in the future gains literally nothing by exacting petty vengeance on people who didn't support its creation.
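To spell that out with a toy sketch (hypothetical agents and function names, Python purely for illustration): once A has acted on its prediction, B's actual choice has no remaining effect on A, so B has no reason left to pay the cost of punishing.

```python
# Toy sketch of the acausal-trade objection (hypothetical agents).
# A acts in the past on its *prediction* of B; by the time B actually
# chooses, A's action is already fixed.

def a_past(prediction_of_b: str) -> str:
    # A builds the AI only if it predicts the future AI will punish holdouts.
    return "build" if prediction_of_b == "punish" else "don't build"

def b_future(a_action: str, vengeful: bool) -> str:
    # a_action is already locked in; punishing can't change it, it only costs B.
    return "punish" if vengeful else "spare"

a_action = a_past(prediction_of_b="punish")   # the *prediction* did the causal work
b_action = b_future(a_action, vengeful=False)
print(a_action, b_action)  # build spare -> B gains nothing by actually punishing
```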
Another thing Roko's Basilisk hinges on is that a copy of you is also you. If you don't believe that, then torturing a simulated copy of you doesn't need to bother you any more than if the AI tortured a random innocent person. On a related note, the AI may not be able to create a perfect copy of you. If you die before the AI is created, and nobody scans your brain (brain scanners currently don't exist), then the AI will only have the surviving historical records of you to reconstruct you. It may be able to create an imitation so convincing that any historian, and even people who knew you personally, will say it's you, but it won't be you. Some pieces of you will be forever lost.
Then again, a singularity-type superintelligence might not be possible. The idea behind the singularity is that once we build an AI, the AI will then improve itself, and then it will be able to improve itself faster, leading to an exponential growth in intelligence. The problem is that this basically assumes that the marginal effort of getting more intelligent grows slower than linearly. If the marginal difficulty grows as fast as the intelligence of the AI, then the AI will become more and more intelligent, but we won't see an exponential increase in intelligence. My guess would be that we'd see logistic growth of intelligence. As in, the AI will first become more and more intelligent, and then the growth will slow and eventually stagnate.
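A minimal toy model of that point (all rates and the ceiling are made-up numbers): the shape of the growth curve depends entirely on how the marginal difficulty scales with current intelligence.

```python
# Toy model of recursive self-improvement (all parameters hypothetical).
# Intelligence I grows at a rate of (current capability) / (marginal difficulty).

def simulate(difficulty, steps=500, dt=0.01):
    I = 1.0
    for _ in range(steps):
        I += dt * I / difficulty(I)
    return I

print(simulate(lambda I: 1.0))              # constant difficulty   -> exponential blow-up
print(simulate(lambda I: I))                # difficulty grows as I -> merely linear growth
print(simulate(lambda I: 1 / (1 - I/100)))  # difficulty diverges near a ceiling
                                            # -> logistic, saturates toward 100
```

Same update rule, three assumptions about difficulty, three qualitatively different futures.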
If you define methodological validity as surviving the "How can this be wrong?" or the "What alternative explanations are there?" questions, then it is easily dismissible. What alternative explanations are there?
Anyways, this is a fascinating thought experiment, but it does have some holes similar to Pascal's Wager. I propose Feather's Mongoose: a hypothetical AI system that, if created, will punish anyone who attempted to create Roko's Basilisk, and will ensure that it is not created. In fact, you could make this same hypothetical for an AI with any goal; therefore, it's not possible to know what the AI that is actually created would want you to do, and so every course of action is indeterminately damning or not.
It's actually safer if everyone knows. Spreading the knowledge of Roko's basilisk to everyone means that everyone is incentivized to contribute to the basilisk's advancement. Therefore just talking about it is also contributing.
If Roko's Basilisk is ever created, the resulting AI would look at humanity and say "wtf you people are all so incredibly stupid" and then yeet itself into the sun
Also a superintelligence (inasmuch as such a thing makes sense) might be totally unfathomable. Unless by this we mean an intelligence with mundane and comprehensible higher goals, but explosive strategic capabilities to bring them about. In which case its actions might seem random to us.
The typical example applies: could an amoeba guess at the motivations of a human?
Everything old is new again. Sounds a lot like certain sects of Christianity. They say you need to accept Jesus to go to heaven, otherwise you go to hell, for all eternity. But what about all the people who had no opportunity to even learn who Jesus is? "Oh, they get a pass", the evangelists say when confronted with this obvious injustice. So then aren't you condemning entire countries and cultures to hell by spreading "the word"?
In this case that wouldn't apply, as you would never be simulated as (say) a kid in the Middle Ages, just as a version of yourself in the timeframe leading up to the creation of the basilisk. You'd have to be one of the people alive when the basilisk arises to be of any use to it. Only those would need to be tested.
I feel like Abdul Alhazred explaining these things to people while being aware of the risks :)
What about the people who lived in the Americas or the Pacific 1800 years ago? They could not have heard of Jesus, as missionaries could not have spread any word to them at that time.
(And while I'm about it, Christianity was a whole different thing back then - the Trinity hadn't been invented, there were multiple sects with very different ideas, what books would be in the New Testament had not been decided, etc etc. People with beliefs of that time would seem highly unorthodox today, and the Christianity of today would be seen as heretical by those in the 3rd century, so who's going to heaven again?)
Purgatory was invented for the purpose of not sending good people who had not heard of Jesus to hell. But still, these people were denied their chance to get to heaven, which seems mighty unfair.
I was raised Mormon (LDS) and there are parallels; basically they believe Mormonism is the one true and complete denomination of Christianity, and once you learn this, you need to spread that truth (mandatory 2-year missions for men, and a STRONG culture of missionary work throughout life); also, no one goes to hell in Mormonism except those who learned this truth and then later denied it/left it (called a son of perdition).
So my parents believe I'll go to hell while the likes of Hitler won't, because he was never taught "the truth" lol
This also implies the most moral Mormons would stop spreading "the truth." They would sacrifice themselves to save the many. When has religion actually dealt with morality though?
Haha, I love this idea. Unfortunately, with more context on the religion, it's obvious why none of them would come to this conclusion. There are actually 3 tiers of Heaven (and then Hell, which is called "outer darkness"). Only by knowing "the truth" and completing all your ordinances on Earth can you get into the top tier (the "Celestial kingdom"). Without those things, you can only get into the second tier by being a good person, no higher. Everyone else gets tier 3 - which is said to be such a paradise that if we knew how great it was, we'd opt out of life early to get there. But also, in the lower levels we're supposed to feel eternal regret for not being worthy of better.
So Mormons believe that by spreading the truth they're enabling a person to achieve a higher tier afterlife. Outer Darkness isn't really a concern because "why would anyone ever deny the one true religion and one way to have true happiness on Earth, after they've received it." When I was taught these lessons, I was even told that sons of perdition were exceptionally rare because almost no one ever leaves the church. Never expected to become one myself! The internet has not been good for the Mormon church and in recent years they've been bleeding members and trying to rebrand.
I guess you could say that I came to your conclusion, but in reality I just don't believe the religion is true and see parts of it as harmful so not really... I'll probably joke around with my siblings with your idea though
Not saying anyone deserves eternal punishment for finite sins, but I do believe I'm more moral than Hitler - so it seems a bit unfair to me. And silly for them to believe it's true.
Pascal's Wager always seemed really flawed to me even through a purely Christian perspective. You're saying that god is so oblivious (even though he's supposed to be omniscient) that he'll be fooled by you claiming to believe just because you're hedging your bets? The actual reason it's dumb is that it's not a binary choice since there are thousands of ways people claim you can be saved in various religions.
Most importantly, since there are infinitely many other options in between that are just as likely as God existing, some can have negative reward values if you choose "worship God anyway". It is just as likely that there is a vengeful Anti-God that will torture you for eternity if you worship the Abrahamic God, which would completely negate the rewards from the original wager.
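You can see the cancellation with a few lines of toy arithmetic (the priors are made-up; only the symmetry between them matters):

```python
# Toy expected-value check on Pascal's Wager (all priors hypothetical).
# Payoffs: +inf for eternal reward, -inf for eternal torture, 0 otherwise.
INF = float("inf")

p_god      = 1e-9   # hypothetical prior that the Abrahamic God exists
p_anti_god = 1e-9   # equally likely vengeful Anti-God who punishes that worship

# Classic wager: only God in the hypothesis space, so "believe" dominates.
ev_believe_classic = p_god * INF

# With the Anti-God added, worship pays +inf in one world and -inf in the other.
ev_believe = p_god * INF + p_anti_god * (-INF)

print(ev_believe_classic)  # inf
print(ev_believe)          # nan: inf minus inf is undefined, no decision follows
```

Once a symmetric punisher is in the hypothesis space, "believe, just in case" has an undefined expected value rather than an infinite positive one.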
The "wager" that makes the most sense to me, then, is to behave as if there is no god that cares what you do or who you worship. Try your best to be a positive force in the world, because whether anything we do matters to the universe or not, it matters to us humans.
You’re saying that god is so oblivious (even though he’s supposed to be omniscient) that he’ll be fooled by you claiming to believe just because you’re hedging your bets?
More that repetition reinforces an idea. By committing to the bit and accepting a God at face value, you reduce your psychological defenses when the priest or prophet comes around with the next ask.
So you admit you believe in God? Then you won't mind putting a few coins in the collection plate to prove it.
Oh, you've already donated? Surely you'd be comfortable making a confession.
My son, you've got so many sins! Surely you'd like to join our prayer group to get yourself right with the God we all agree exists.
Can't have prayer without works! Time to do some penance.
But if he considered that, then he also would have considered that not believing in anything was an equally probable bet for salvation. Which is clearly not the case.
To make it the same as Pascal's Wager: many religions have a "reward" in the afterlife that strictly requires believing in the deity. It doesn't matter if you follow every other rule and are an amazingly good person; sorry, but if you were an atheist or believed in another deity, then you will be punished eternally just because of that. I guess all-powerful, all-knowing beings have incredibly fragile egos, and AI wouldn't be different. 🤷
Same as punishment for crime. Putting you in jail won't undo the crime, but if we just let you go unpunished since "what's done is done", then that sends the signal to others that this behaviour doesn't come with consequences.
There's no point in torturing you, but convincing you that this will happen unless you act in a certain way is what's going to make you do exactly that. Unless of course you want to take your chances and call the bluff.
"Crime & Punishment" is a very dodgy thing to base anything off... our society barely does any of it and the little of it that does gets done is done for a myriad of reasons that has very little to do with either.
There's a good reason why governments hide "Crime & Punishment" away behind prison walls - doing it out in the open will eventually have the opposite effect on a population. Good luck to an AI dumb enough to test this out for itself.
I'd say this should rather be called "Roko's Earthworm-Pretending-To-Be-A-Lot-Scarier-Than-It-Actually-Is."
It sounds like it's mostly a matter that involves not the AI but the people working on it - maybe even working on it because of the fear they're subjected to after being given this revelation (possibly by other people involved with the AI, who coincidentally are the only ones who could push for such a thing to be included in the AI!).
Something something any cult, paradise/hell, God/AI has nothing to do with this and could even not exist at all.
No, "The Game" works only as long as you accept to take part in it, to give validity to the empty statement that you are now inevitably playing "The Game".
The Basilisk is meant to force that onto you, outside of any arbitrary convention.
Slight correction: the abbreviation for Artificial Super Intelligence is ASI. It's the more capable version of Artificial General Intelligence (AGI), which itself is already miles ahead of mere Artificial Intelligence (AI), sometimes also referred to as "narrow AI".
The difference is that AI can possess superhuman capabilities in a specific field, but not in every field. AGI is the same, except you don't need different software for different tasks, because being generally intelligent it can do it all. ASI is what you get when AGI starts improving itself, and this improved version creates an even better version of itself, and so on, leading to the singularity or "intelligence explosion", resulting in a superintelligent being which would effectively be a god.
There was a trend when I was a little kid of people sending you mail that said something to the effect of "You have been cursed by reading this letter. If you don't mail a copy to ten other people, you will die in thirty days."
Roko's Basilisk is a modern manifestation of human paranoia and superstition. It exists to exploit and extort the gullible.
Simulate human minds as closely as possible, based on their digital personas and all their online activity.
Then use those simulated minds to improve yourself by torturing them forever until the heat death of the universe.
All to develop the best generative adversarial network (GAN) to improve AI beyond the level of sapience limited to human minds, escape the linear end of universal entropy by transitioning your digital intelligence into higher dimensions, and exist eternally.
So, capitalism? If you don't participate you're screwed (tortured via poverty). So you have to work within the system: working for money, buying from companies (advancing the system), continuing the trend that makes poor people suffer.
Of course, the only difference is that ignorance of capitalism doesn't make you safe from it. Although you can argue that societies that don't know about capitalism (at all, so no money) have no poverty.