| Option | Probability (%) |
|---|---|
| Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 27 |
| Eliezer finally listens to Krantz. | 13 |
| We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 | 11 |
| Other | 7 |
| Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking, and fortunately all of his mistakes have failed to cancel out | 6 |
| Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values. | 6 |
| Someone solves agent foundations | 4 |
| Because of quantum immortality we will observe only the worlds where AI will not kill us (assuming the chance of s-risks is even smaller, this is equal to an OK outcome). | 2 |
| The Orthogonality Thesis is false. | 2 |
| Sheer Dumb Luck. The aligned AI agrees that alignment is hard, and any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead. | 2 |
| Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent the existence of smarter AIs, just as smart humans do. | 2 |
| Ethics turns out to be a precondition of superintelligence | 2 |
| AI systems good at finding alignment solutions for capable systems (via some solution in the space of alignment solutions, supposing it is non-null and that we don't have a clear trajectory to get to it) find some solution to alignment. | 1 |
| Humans become transhuman through other means before AGI happens | 1 |
| Alignment is unsolvable. An AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing run-away. | 1 |
| Aliens invade and stop bad AI from appearing | 1 |
| Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us towards full alignment) | 1 |
| A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 1 |
| There is a natural limit to the effectiveness of intelligence, like diminishing returns, and it is at the level of IQ 1000. AIs have to collaborate with humans. | 1 |
| AGI is never built (indefinite global moratorium) | 1 |
| Co-operative AI research leads to the training of agents with a form of pro-social concern that generalises to out-of-distribution agents with hidden utilities, i.e. humans. | 1 |
| AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans. | 1 |
| Something to do with self-other overlap, which Eliezer called "Not obviously stupid" - https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm | 1 |
| Pascal's mugging: it's not okay in 99.9% of the worlds, but the 0.1% are so much better that the combined EV of AGI for the multiverse is positive | 1 |
| The Super-Strong Self-Sampling Assumption (SSSSA) is true. If superintelligence is possible, "I" will become the superintelligence. | 1 |
| AI control gets us helpful enough systems without being deadly | 1 |
| An aligned AGI is built, and it prevents the creation of any unaligned AGI. | 0 |
| I've been a good bing 😊 | 0 |
| We make risk-conservative requests to extract alignment-related work out of AI systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback loop in alignment/verification abilities. | 0 |
| The response to AI advancements or failures makes some governments delay the timelines | 0 |
| There are far more interesting problems to solve than to take over the world and THEN solve them. The additional kill-all-humans step is either not a low-energy one or just by chance doesn't get converged upon. | 0 |
| AIs make "proof-like" argumentation for why their output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof-steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled. | 0 |
| A lot of humans participate in a slow scalable-oversight-style system, which is pivotally used/solves alignment enough | 0 |
| Something less inscrutable than matrices works fast enough | 0 |
| There's some cap on the value extractable from the universe and we already got the 20% | 0 |
| SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8 (a hash-committed answer; see the verification sketch after this table) | 0 |
| Robot Love!! | 0 |
| AI thinks it is in a simulation controlled by Roko's basilisk | 0 |
| The human brain is the perfect arrangement of atoms for a "take over the world" agent, so AGI has no advantage over us in that task. | 0 |
| Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment. | 0 |
| Humans and human tech (like AI) never reach singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome | 0 |
| AIs never develop coherent goals | 0 |
| Rolf Nelson's idea that we make a precommitment to simulate all possible bad AIs works, and keeps AI in check. | 0 |
| Nick Bostrom's idea (Hail Mary) that AI will preserve humans to trade with possible aliens works | 0 |
| For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave, and, for some reason, do not leave anything behind that kills us. | 0 |
| An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development. | 0 |
| We're inside of a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI. | 0 |
| God exists and stops the AGI | 0 |
| Someone at least moderately sane leads a campaign, takes charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress. | 0 |
| Someone creates AGI(s) in a box, and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts. | 0 |
| Someone understands how minds work well enough to successfully build and use one directed at something world-saving enough | 0 |
| Dolphins, or some other species, but probably dolphins, have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI. | 0 |
| AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb. | 0 |
| Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious. | 0 |
| Several AIs are created, but they move in opposite directions at near light speed, so they never interact. At least one of them is friendly and it gets a few percent of the total mass of the universe. | 0 |
| Unfriendly AIs choose to advance not outwards but inwards, and form a small black hole which helps them perform more calculations than could be done with the whole mass of the universe. To an external observer, such AIs just disappear. | 0 |
| Any sufficiently advanced AI halts because it wireheads itself, or halts for some other reason. This puts a natural limit on AI's intelligence, and lower-intelligence AIs are not that dangerous. | 0 |
| Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 0 |
| Getting things done in the real world is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals. | 0 |
| Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol | 0 |
| A friendly AI is more likely to resurrect me than a paperclipper or suffering maximiser. Because of quantum immortality I will find myself eventually resurrected. Friendly AIs will wage a multiverse-wide war against s-risks, so s-risks are unlikely. | 0 |
| High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level and slow self-improvement (learning); thus AIs collaborating with humans will have the advantage. Ends with a posthuman ecosystem. | 0 |
| Human consciousness is needed to collapse the wave function, and AI can't do it. Thus humans should be preserved, and they may require complete friendliness in exchange (or they will be unhappy and produce bad collapses) | 0 |
| Power dynamics stay multi-polar. Partly easy copying of SotA performance, bigger projects need high coordination, and moderate takeoff speed. And "military strike on all society" remains an abysmal strategy for practically all entities. | 0 |
| The first AI is actually a human upload (maybe an LLM-based model of a person) AND it is copied many times to form a weak AI Nanny which prevents the creation of other AIs. | 0 |
| Nanotech is difficult without experiments, so no mail-order AI grey goo; humans will be the main workhorse of AI everywhere. While they will be exploited, this will feel like normal life from the inside | 0 |
| ASI needs not your atoms but information. Humans will live very interesting lives. | 0 |
| Something else | 0 |
| Moral realism is true, the AI discovers this, and the One True Morality is human-compatible. | 0 |
| Valence realism is true. AGI hacks itself to experience every possible consciousness and picks the best one (for everyone) | 0 |
| AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default | 0 |
| AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan's Crystal Nights). | 0 |
| Alien Information Theory is true (this is discovered by experiments with sustained hours/days-long DMT trips). The aliens have solved alignment and give us the answer. | 0 |
| AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome. | 0 |
| Multipolar AGI agents run wild on the internet, hacking/breaking everything, causing untold economic damage, but aren't focused enough to manipulate humans to achieve embodiment. In the aftermath, humanity becomes way saner about alignment. | 0 |
| Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent. | 0 |
| "Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur, and it is relatively easy to predict, even under less-than-ideal conditions. | 0 |
| Either the "strong form" of the Orthogonality Thesis is false, or "goal-directed agents are as tractable as their goals" is true while the goal-sets most threatening to humanity are relatively intractable. | 0 |
| A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values. | 0 |
| Almost all human values are ex post facto rationalizations, and enough humans survive to do what they always do | 0 |
| We successfully chained God | 0 |

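One option above is only a SHA3-256 digest: the author committed to a hidden answer by publishing its hash, and can later reveal the text, which anyone can verify by re-hashing it. Below is a minimal verification sketch in Python; the candidate string is a made-up placeholder, and the real preimage (including its exact spelling, whitespace, and encoding) is unknown until revealed.

```python
import hashlib

# Digest committed in the market option above.
COMMITMENT = "1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8"

def matches_commitment(candidate: str) -> bool:
    """Return True if the candidate text hashes to the committed SHA3-256 digest.

    The hash is over exact bytes, so the revealed text must match
    byte-for-byte (same capitalization, whitespace, and encoding).
    """
    return hashlib.sha3_256(candidate.encode("utf-8")).hexdigest() == COMMITMENT

# Hypothetical guess; almost certainly not the committed answer.
print(matches_commitment("Humanity muddles through somehow."))  # -> False
```

Because SHA3-256 is preimage-resistant, publishing the digest reveals nothing about the answer, yet makes the eventual reveal tamper-evident.
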
| Option | Probability (%) |
|---|---|
| Other | 55 |
| Crazy Elon | 11 |
| “drug addict elon” or similar | 5 |
| Spacey Elon | 5 |
| subsidy elon | 4 |
| Loser Elon | 1 |
| Whiny Elon | 1 |
| The Man Who Lost His Mind | 1 |
| Entitled elon | 1 |
| Angry elon | 1 |
| Incel Elon | 1 |
| government subsidy guy | 1 |
| Leon Musk | 1 |
| Train wreck Elon | 1 |
| slippery eel / eel elon | 0 |
| Icky Elon | 0 |
| musk the moocher | 0 |
| Mr mars | 0 |
| muskrat elon / elon muskrat | 0 |
| Delusional elon | 0 |
| evil elon | 0 |
| rocket boy / rocket man (like kim jong un) | 0 |
| felon elon / felon musk / elon felon | 0 |
| elon husk | 0 |
| ozempic elon | 0 |
| elmo | 0 |
| Phony Stark | 0 |
| Failed Elon | 0 |
| Space Cadet | 0 |
| Exploding Rocket Man | 0 |
| DOGE Boy | 0 |
| No nickname (ONLY RESOLVES YES IF HIS NICKNAME IS "NO NICKNAME") | 0 |
| Vriska Serket | 0 |
| Elongated Muskrat | 0 |
| Melon Husk | 0 |
| Alien Elon / Elon the Alien | 0 |
| Mars Man | 0 |
| ev elon | 0 |
| Slimy Elon | 0 |
| Illegal Elon | 0 |
| Emo Elon | 0 |
| Little Elon | 0 |
| Ketamine Elon | 0 |
| Mediocre Musk | 0 |
| Fat Elon | 0 |
| Klon | 0 |
| Space Boy | 0 |
| E Elon | 0 |
| ET Elon | 0 |
| Elon of the Crimson Blade | 0 |
| Medicare Elon | 0 |
| Melon | 0 |
| Xanax | 0 |
| Ketamelon | 0 |
| mental musk | 0 |
| marsboy musk | 0 |
| Elon Melon | 0 |
| Murky Musk | 0 |
| Ketamine Karen | 0 |
| resolves no | 0 |
| jetbrain | 0 |
| elonious munk | 0 |
| Lying Elon | 0 |
| Traitor Elon / Elon the Traitor / Tesla Traitor (anything with Traitor) | 0 |
| Musky Elon | 0 |
| N/A | 0 |
| Oh no! | 0 |
| Elon The Con | 0 |
| Jumping/Jumpy Elon/Musk | 0 |
| Low IQ Elon | 0 |
| Erratic Elon | 0 |
| Buddy | 0 |
| Friend Elon | 0 |
| Great Elon | 0 |
| Fail-on | 0 |
| Eblon | 0 |
| Extraterrestrial | 0 |
| solar | 0 |
| Kooky elon | 0 |
| The Best President | 0 |
| Moron Musk | 0 |
| squealin elon | 0 |
| African Elon / afrikaan elon | 0 |
| autistic Elon | 0 |
| immigrant elon | 0 |
| Evil Musk | 0 |
| Elon musksnake | 0 |
| That guy | 0 |
| Eloon | 0 |
| (This option will not resolve yes) | 0 |
| Elon the idiot | 0 |
| Schemin' Elon | 0 |
| Elon the Pawn | 0 |
| Off the Rails Elon | 0 |
| Mr Musk | 0 |
| Yilong Ma | 0 |
| Mr Elon | 0 |
| Mad Musk | 0 |
| Radical Left Lunatic Elon (or something like that) | 0 |
| Special K | 0 |

| Option | Probability (%) |
|---|---|
| Gene Editing Therapies | 96 |
| Therapies to reverse cardiovascular diseases | 94 |
| Human-level AGI | 90 |
| Regenerative Medicine | 89 |
| Brain-Computer Interfaces | 87 |
| Personalized medicine based on patient's genetic profile | 87 |
| Humanoid Robots in Service/Logistics Roles | 87 |
| Bioprinting | 84 |
| Desalination with Energy Consumption Below 1.5 kWh/m³ | 83 |
| Solar panels with 40% efficiency | 82 |
| Synthetic Meat Indistinguishable from Real Meat | 82 |
| Artificial Photosynthesis (10% efficiency in converting sunlight to chemical energy) | 79 |
| Autonomous Drone Swarms Dominating Land and Air Combat | 78 |
| Therapies to reverse neurodegenerative diseases | 74 |
| Quantum Computing Applications | 73 |
| Space-Based Manufacturing | 72 |
| Cancer Vaccine Targeting Multiple Types of Cancer | 69 |
| Seafloor Mining Robots | 69 |
| A Specific ‘Longevity Drug’ or Defined Drug Combination that Extends Average Human Healthspan by at Least 10 Years | 67 |
| Fully Autonomous Cargo Ships | 66 |
| Nanorobots for Precision Drug Delivery | 66 |
| Self-Healing Materials for Infrastructure | 62 |
| Joywire | 58 |
| Commercial fusion power | 53 |
| Britain finishes HS2 | 53 |
| Smart Contact Lenses with AR Capabilities | 52 |
| Transoceanic fully autonomous container crossing without onboard crew | 51 |
| Solid-state EV batteries ≥ 500 Wh/kg in mass-produced cars | 50 |
| Lunar Bases | 49 |
| Self-Actuating Metamaterials for Adaptive Structures in Buildings and Infrastructure | 47 |
| City-scale fully automated underground logistics networks | 41 |
| Artificial Wombs for Human Gestation (from conception to birth) | 40 |
| Nuclear Thermal Rockets | 39 |
| Commercial asteroid mining | 38 |
| Room-Temperature Atmospheric-Pressure Superconductors | 37 |
| Space-to-Earth power stations | 35 |
| Fusion Drives | 33 |
| Engineered Symbiotic Microorganism Implants for Human Physiological Augmentation | 31 |
| Advanced Exoskeletons for Everyday Use | 30 |
| Mars Base | 30 |
| Direct Cognitive Collaboration | 29 |
| Changing the height of an adult at will | 29 |
| Self-Replicating Fully-Automated Factories | 28 |
| Orbital Habitats | 26 |
| Universal Flu Vaccine (effective against all current and future strains) | 25 |
| Food with bioengineered saturated fat that doesn't raise cholesterol | 25 |
| Human Missions to Mars with Round-Trip Travel Time Under 6 Months | 23 |
| Space probes travelling at ≥1% speed of light | 23 |
| Commercial satellite-to-ground power transmission | 22 |
| Changing a person's race at will | 18 |
| Biohybrid Robots with Living Muscle Tissue | 17 |
| Room-Temperature Superconducting Transmission Lines in Urban Power Grids | 16 |
| Human sex change at chromosome level | 15 |
| Cell replacement therapy using synthetic cells | 15 |
| Time travel to the future | 14 |
| Vacuum Airships | 13 |
| Antimatter bombs | 12 |
| Human Mind Uploading | 11 |
| Autonomous Flying Cars in Cities | 10 |
| Cryonics with Successful Revival | 9 |
| Commercial cold fusion | 8 |
| Quantum Sensors Enabling Detection of Cancer Years Before Symptoms | 8 |
| Temporal Cloaking Device (Hiding Events in Time) | 7 |
| 1% Dyson sphere coverage | 7 |
| Reality Anchoring Device (Stabilizing Subjective Reality) | 6 |
| Space Elevator | 5 |
| Faster-than-Light Propulsion | 2 |
| Time travel to the past | 1 |

| Option | Probability (%) |
|---|---|
| There was an alignment breakthrough allowing humanity to successfully build an aligned AI. | 30 |
| Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 24 |
| One person (or a small group) takes over the world and acts as a benevolent dictator. | 21 |
| At a sufficient level of intelligence, goals converge towards not wanting to harm other creatures/intelligences. | 9 |
| High intelligence isn't enough to take over the world on its own, so the AI needs to work with humanity in order to effectively pursue its own goals. | 6 |
| Multiple competing AIs form a stable equilibrium keeping each other in check. | 5 |
| There's a fundamental limit to intelligence that isn't much higher than human level. | 4 |
| Building GAI is impossible because human minds are special somehow. | 2 |

| Option | Probability (%) |
|---|---|
| Magic: The Gathering | 100 |
| Cult of the Lamb | 100 |
| Death's Door | 100 |
| Shift Happens | 100 |
| Baba is You | 100 |
| Understand | 100 |
| The Witness | 100 |
| Hollow Knight | 100 |
| It Takes Two | 100 |
| Unravel Two | 100 |
| Portal 2 | 100 |
| Braid | 100 |
| Portal | 100 |
| Superliminal | 100 |
| The Talos Principle | 100 |
| The Talos Principle 2 | 100 |
| Antichamber | 100 |
| Perspective | 100 |
| Portal Reloaded | 100 |
| Portal Stories: Mel | 100 |
| Thinking with Time Machine | 100 |
| Cell Machine | 100 |
| Hades | 100 |
| Keep Talking and Nobody Explodes | 100 |
| Superhot | 100 |
| Slay the Princess | 100 |
| Escape Academy | 100 |
| Split Fiction | 88 |
| Don't Starve | 83 |
| The Room | 80 |
| Pico Park | 80 |
| Wargroove | 80 |
| Patrick's Parabox | 80 |
| scarlet hollow | 76 |
| Tokimeki Memorial | 76 |
| Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A | 75 |
| Bokura | 73 |
| The Stanley Parable | 72 |
| Stardew Valley | 71 |
| Mini Motorways | 70 |
| Children of Morta | 70 |
| Detroit: Become Human | 70 |
| We Were Here Too | 68 |
| Splendor (tabletop game) | 67 |
| Castle Crashers | 66 |
| Timberborn | 66 |
| Terraria | 65 |
| Gorogoa | 65 |
| Spiritfarer | 63 |
| Minecraft | 63 |
| Beyond: Two Souls | 62 |
| Chants of Sennaar | 62 |
| Patch Quest | 61 |
| We Were Here | 59 |
| Wild Woods | 59 |
| Manifold Garden | 56 |
| We Were Here Forever | 56 |
| We Were Here Expeditions: The FriendShip | 54 |
| Baldur's Gate 3 | 53 |
| Hades 2 | 52 |
| Clandestine | 50 |
| Nobody Saves the World | 50 |
| Lovers in Dangerous Spacetime | 50 |
| Glider (1988) | 50 |
| Rayman Legends | 50 |
| LittleBigPlanet 3 | 50 |
| Human: Fall Flat | 50 |
| Trine 2 | 50 |
| Lara Croft and the Guardian of Light | 50 |
| Magicka | 50 |
| Mind over Magnet | 50 |
| Bean and nothingness | 50 |
| Yume Nikki | 50 |
| Space Station 13 | 50 |
| 999 | 50 |
| Nomifactory | 50 |
| Monifactory | 50 |
| Rebirth of the Night | 50 |
| HyperRogue | 50 |
| Induction | 50 |
| Akane | 50 |
| Zero Time Dilemma | 50 |
| Zero Escape: Virtue's Last Reward | 50 |
| Morrowind | 50 |
| Q.U.B.E. 2 | 50 |
| Q.U.B.E. | 50 |
| We Were Here Together | 44 |
| Sackboy: A Big Adventure | 41 |
| Unstable Unicorns 🦄 | 40 |
| Mario Party | 40 |
| Exploding Kittens | 38 |
| The House of DaVinci | 35 |
| Tunic | 34 |
| Moving Out | 34 |
| Moving Out | 34 |
| Fez | 32 |
| A Couple Of Cubes | 28 |
| Root (tabletop or digital version) | 28 |
| Rain World | 25 |
| Deadly Rooms of Death | 25 |
| CrossCode | 25 |
| Islands of Insight | 24 |
| Outer Wilds | 15 |
| SteamWorld Dig | 15 |
| Snakebird | 10 |
| Monopoly | 10 |
| Chess | 5 |
| Slay the Spire | 0 |
| Among Us | 0 |
| Stephen's Sausage Roll | 0 |
| Amazing Chicken Adventures | 0 |
| Wingspan | 0 |
| Can of Wormholes | 0 |
| Disco Elysium | 0 |
| Overcooked | 0 |
| Return of the Obra Dinn | 0 |
| Inscryption | 0 |
| Slice and Dice | 0 |
| Teardown | 0 |
| Astroneer | 0 |

| Option | Probability |
|---|---|
| YES | 3176 |
| NO | 1665 |

| Option | Probability (%) |
|---|---|
| Steven Greer's claims that zero-point energy technology has existed since the '50s and could produce abundant free energy for all, but has been kept hidden from the public because it would make oil obsolete. | 17 |
| Imports of luxury vehicles to Ukraine have increased; there are even many Rolex sellers in Kyiv. | 13 |
| Biden voted for Trump | 11 |
| There are 100 million illegal immigrants in the US | 10 |
| The COVID-19 pandemic was a planned event to control populations, or was intentionally created by entities like Bill Gates for depopulation or to profit from vaccines | 10 |
| The 2020 election was “rigged”: see description for further resolution criteria | 10 |
| Technologies such as HAARP are a way to disable nuclear ballistic missiles in flight | 10 |
| Technologies such as HAARP are being used, with large impact, to control weather patterns or cause natural disasters for political or economic gain | 7 |
| 5G cellular networks were intentionally developed to create health issues or enable mind control. | 7 |
| Donald Trump is secretly working to dismantle the cabal associated with QAnon | 6 |

| Option | Probability (%) |
|---|---|
| Brakes hit | 56 |
| Capability limit in AI | 50 |
| Huge alignment effort | 49 |
| New AI paradigm | 49 |
| Slow and gradual capability gains in AI | 48 |
| Enhancing human minds and/or society | 44 |
| Major capability limit in AI | 42 |
| Non-AI tech | 38 |
| Alignment relatively easy | 35 |
| Brakes not hit | 28 |
| Alignment unnecessary | 25 |
| Well-behaved AI with bad ends | 19 |
| Alignment extra hard | 1 |

| Option | Probability (%) |
|---|---|
| EY suffers severe mental degeneration leading to delusions | 15 |
| EY's mind changes such that a boring explanation is worldview-shattering | 13 |
| Aliens | 8 |
| Demons (as in a traditional Abrahamic religion) | 7 |
| Idealism (Consciousness underpins physical reality) | 7 |
| Simulation Hypothesis | 6 |
| UFOs are time travelers | 6 |
| Holographic Principle | 6 |
| UFOs are angels | 6 |
| UFOs are psychic manifestations | 6 |
| Eliezer knows aliens are real and will be disclosed soon. He's rallying people to dump money into a bet he is secretly betting against to teach them about the fallacy of appeal to authority. | 6 |
| Other | 5 |
| UFOs are a product of a massive conspiracy unrelated to non-human entities | 4 |
| Ultraterrestrials (home-grown, advanced non-human civs, hidden from us) | 3 |
| UFOs are part of/directed by a superintelligence | 1 |
| AI is harder than practical interstellar travel | 0 |
| UFOs are aliens intentionally both hiding themselves and revealing themselves to sow chaos | 0 |
| UFOs are magic | 0 |

| Option | Probability (%) |
|---|---|
| Other | 39 |
| OpenAI | 24 |
| Meta | 17 |
| Deep Mind | 14 |
| Anthropic | 3 |
| @Mira | 2 |
|  | 1 |

| Option | Probability (%) |
|---|---|
| Other | 43 |
| Before humanity colonizes the universe, we must ensure that the future we would build is one worth living in. | 34 |
| Digital minds research is an important and neglected approach to AI safety. | 16 |
| Fun Fact: If you put “fun fact” before a completely made up statement, people are 69% more likely to believe it. | 2 |
| Past-you may have been a willing and enthusiastic sacrifice to present-you, and assuming you'll remain wiser, it was a worthwhile trade. | 2 |
| It's a good idea to buy lots of Microsoft stock right now | 1 |
| It's a good idea to short lots of Microsoft stock right now | 1 |
| If you sacrificed what you valued the most in order to survive, then from the viewpoint of past-you, present-you is already as good as dead | 1 |
| Capitalism will collapse in 2026 | 0 |
| Stop seeking wisdom on a troll website founded to embezzle money | 0 |