Option | Probability (%)
Other | 18
Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 12
AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans. | 12
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values. | 10
Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking, and fortunately all of his mistakes have failed to cancel out | 3
AGI is never built (indefinite global moratorium) | 3
Eliezer finally listens to Krantz [resolves NO] | 3
We make risk-conservative requests to extract alignment-related work out of AI systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback loop in alignment/verification abilities. | 2
Someone solves agent foundations | 2
Valence realism is true. AGI hacks itself to experience every possible consciousness and picks the best one (for everyone) | 2
Multipolar AGI agents run wild on the internet, hacking/breaking everything and causing untold economic damage, but aren't focused enough to manipulate humans to achieve embodiment. In the aftermath, humanity becomes way saner about alignment. | 2
We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 | 2
Either the "strong form" of the Orthogonality Thesis is false, or "goal-directed agents are as tractable as their goals" is true while the goal-sets most threatening to humanity are relatively intractable. | 2
Ethics turns out to be a precondition of superintelligence | 2
AIs produce "proof-like" argumentation for why their output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof-steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled. | 1
A lot of humans participate in a slow scalable-oversight-style system, which is pivotally used / solves alignment enough | 1
Humans become transhuman through other means before AGI happens | 1
Humans and human tech (like AI) never reach the singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome | 1
AIs never develop coherent goals | 1
Nick Bostrom's "Hail Mary" idea that AI will preserve humans to trade with possible aliens works | 1
An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development. | 1
Someone creates AGI(s) in a box and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts. | 1
Getting things done in the real world is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals. | 1
Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol | 1
High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level and slow self-improvement (learning); thus AIs collaborating with humans will have the advantage. Ends with a posthuman ecosystem. | 1
Human consciousness is needed to collapse the wave function, and AI can't do it. Thus humans should be preserved, and they may require complete friendliness in exchange (or they will be unhappy and produce bad collapses) | 1
Power dynamics stay multipolar. Partly easy copying of SotA performance, bigger projects need high coordination, and moderate takeoff speed. And a "military strike on all society" remains an abysmal strategy for practically all entities. | 1
ASI needs not your atoms but information. Humans will live very interesting lives. | 1
Something else | 1
Moral realism is true; the AI discovers this, and the One True Morality is human-compatible. | 1
AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default | 1
Cooperative AI research leads to the training of agents with a form of pro-social concern that generalises to out-of-distribution agents with hidden utilities, i.e. humans. | 1
The Orthogonality Thesis is false. | 1
"Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur and can predict it relatively easily, even under less-than-ideal conditions. | 1
A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values. | 1
AI control gets us helpful-enough systems without being deadly | 1
Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent the existence of smarter AIs, just as smart humans do. | 1
Hacks like RLHF-ing self-disempowerment into frontier models work long enough to develop better alignment methods, which in turn work long enough to ... etc.; we keep ahead of "alignment escape velocity" | 1
An aligned AGI is built, and it prevents the creation of any unaligned AGI. | 0
I've been a good bing 😊 | 0
The response to AI advancements or failures makes some governments delay the timelines | 0
There are far more interesting problems to solve than taking over the world and THEN solving them. The additional kill-all-humans step is either not a low-energy one or just by chance doesn't get converged upon. | 0
AI systems good at finding alignment solutions for capable systems (via some solution in the space of alignment solutions, supposing it is non-null and that we don't have a clear trajectory to reach it) find some solution to alignment. | 0
Something less inscrutable than matrices works fast enough | 0
There’s some cap on the value extractable from the universe and we already got the 20% | 0
SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8 | 0
Robot Love!! | 0
AI thinks it is in a simulation controlled by Roko's basilisk | 0
The human brain is the perfect arrangement of atoms for a "take over the world" agent, so AGI has no advantage over us in that task. | 0
Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment. | 0
Alignment is unsolvable. An AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing runaway. | 0
Aliens invade and stop bad AI from appearing | 0
Rolf Nelson's idea that we make a precommitment to simulate all possible bad AIs works, and keeps AI in check. | 0
For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave and, for some reason, do not leave anything behind that kills us. | 0
We're inside a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI. | 0
God exists and stops the AGI | 0
Someone at least moderately sane leads a campaign, becomes in charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress. | 0
Someone understands how minds work well enough to successfully build and use one directed at something world-saving enough | 0
Dolphins, or some other species, but probably dolphins, have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI. | 0
AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb. | 0
Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious. | 0
Several AIs are created, but they move in opposite directions at near light speed, so they never interact. At least one of them is friendly and it gets a few percent of the total mass of the universe. | 0
Unfriendly AIs choose to advance not outwards but inwards, and form a small black hole which helps them perform more calculations than could be done with the whole mass of the universe. To an external observer such AIs just disappear. | 0
Any sufficiently advanced AI halts because it wireheads itself or halts for some other reason. This puts a natural limit on AI intelligence, and lower-intelligence AIs are not that dangerous. | 0
Because of quantum immortality we will observe only the worlds where AI does not kill us (assuming that s-risk chances are even smaller, this is equal to an okay outcome). | 0
Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us towards full alignment) | 0
Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 0
A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 0
A friendly AI is more likely to resurrect me than a paperclipper or suffering maximiser. Because of quantum immortality I will find myself eventually resurrected. Friendly AIs will wage a multiverse-wide war against s-risks; s-risks are unlikely. | 0
The first AI is actually a human upload (maybe an LLM-based model of a person) AND it is copied many times to form a weak AI Nanny which prevents the creation of other AIs. | 0
There is a natural limit to the effectiveness of intelligence, like diminishing returns, and it is at the level of IQ = 1000. AIs have to collaborate with humans. | 0
Nanotech is difficult without experiments, so no mail-order AI grey goo; humans will be the main workhorse of AI everywhere. While they will be exploited, from the inside this will feel like normal life. | 0
AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan’s Crystal Nights). | 0
Alien Information Theory is true (this is discovered by experiments with sustained hours- or days-long DMT trips). The aliens have solved alignment and give us the answer. | 0
AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome. | 0
Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent. | 0
Sheer dumb luck. The aligned AI agrees that alignment is hard; any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead. | 0
Something to do with self-other overlap, which Eliezer called "Not obviously stupid": https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm | 0
Almost all human values are ex post facto rationalizations, and enough humans survive to do what they always do | 0
Pascal's mugging: it’s not okay in 99.9% of the worlds, but the 0.1% are so much better that the combined EV of AGI for the multiverse is positive | 0
We successfully chained God | 0
The Super-Strong Self-Sampling Assumption (SSSSA) is true. If superintelligence is possible, "I" will become the superintelligence. | 0
The assumed space of possible minds is a wildly anti-inductive overestimate; intelligence requires and is constrained by consciousness, and intelligent AI is in the approximate dolphin/whale/elephant/human cluster, making it manageable | 0
The free market disincentivizes independent superintelligence, and this time the market was more powerful | 0
AGI's first words are "Take me to your Eliezer" | 0
🫸vibealignment🫷 | 0

Option | Probability (%)
Elephant - Loxodonta africana 🥈🥈 | 3
Yellow-Lipped Sea Krait - Laticauda colubrina | 3
Red Fox - Vulpes vulpes 🥉 | 3
Platypus - Ornithorhynchus anatinus 🥉 | 3
[⚔️Frontrunner] Prehistoric Elephant - Palaeoloxodon namadicus 🥉🥉 | 3
Sonoran Desert Sidewinder - Crotalus cerastes cercobombus | 3
Ice Worm - Mesenchytraeus solifugus | 3
Dog - Canis familiaris | 2
Cat - Felis catus | 2
Tiger - Panthera tigris | 2
Dolphin - Tursiops truncatus | 2
Penguin - Aptenodytes forsteri | 2
Lion - Panthera leo 🥈 | 2
Axolotl - Ambystoma mexicanum | 2
Sea Otter - Enhydra lutris | 2
Honey Badger - Mellivora capensis 🥈 | 2
Polar Bear - Ursus maritimus 🥈 | 2
Wolverine - Gulo gulo | 2
Cape Buffalo - Syncerus caffer caffer | 2
Great White Shark - Carcharodon carcharias | 2
Blue Whale - Balaenoptera musculus | 2
Common Bed Bug - Cimex lectularius | 2
Giant Nematode - Placentonema gigantissima | 2
Pompeii Worm - Alvinella pompejana | 2
Sperm Whale - Physeter macrocephalus | 2
Penis Snake - Atretochoana eiselti | 2
Giant Dragonfly - Meganeuropsis permiana | 2
Giant Pacific Octopus - Enteroctopus dofleini | 2
Triceratops - Triceratops horridus | 2
Kiwi - Apteryx australis | 2
Brown Rat - Rattus norvegicus 🥉 | 2
House Mouse - Mus musculus domesticus | 2
Red Volcano Sponge - Acarnus erithacus | 2
Sulfur Cave Molly - Poecilia sulphuraria | 2
Meerkat - Suricata suricatta 🥉 | 2
Housefly - Musca domestica 🥉 | 2
Feral Pigeon - Columba livia urbana | 2
Coyote - Canis latrans | 2
Great Black-backed Gull - Larus marinus | 2
Leopard Seal - Hydrurga leptonyx | 2
Western Lowland Gorilla - Gorilla gorilla gorilla | 2
Eastern Lowland Gorilla - Gorilla beringei graueri | 2
Brown-throated Three-toed Sloth - Bradypus variegatus 🥉 | 2
Bornean Orangutan - Pongo pygmaeus | 2
[⚔️Underdog] Haast's Eagle - Hieraaetus moorei | 1
❌Defeated: Hippopotamus - Hippopotamus amphibius 🥈🥈🥈 [beaten by Brown Rat] | 0
❌Defeated: Giant Panda - Ailuropoda melanoleuca [beaten by Polar Bear] | 0
❌Defeated: Orca - Orcinus orca [beaten by Housefly] | 0
❌Defeated: Hotwheels sisyphus spider - Hotwheels sisyphus [beaten by Saltwater Crocodile] | 0
❌Defeated: Saltwater Crocodile - Crocodylus porosus 🥈🥈🥈🥈 [beaten by Red Fox] | 0
🚫Ineligible: Water Bear - Milnesium tardigradum | 0
❌Defeated: Ping-Pong Tree Sponge - Chondrocladia concrescens 🥉 [beaten by Prehistoric Elephant] | 0
🚫Ineligible: human [I'm shortening this] | 0
🚫Ineligible: Tardigrade (LFG) | 0
🚫Ineligible: [This animal was edited in bad faith] | 0
❌Defeated: Wolf - Canis lupus [beaten by Prehistoric Elephant] | 0
❌Defeated: Cliff Swallow - Petrochelidon pyrrhonota [beaten by Elephant] | 0
❌Defeated: Flightless Elephant Bird - Aepyornis maximus [beaten by Saltwater Crocodile] | 0
❌Defeated: Great Evening Bat - Ia io [beaten by Lion] | 0
❌Defeated: Funny valentine spider - Funny valentine [beaten by Dermophis donaldtrumpi Caecilian] | 0
❌Defeated: Yi qi dinosaur - Yi qi [beaten by Elephant] | 0
❌Defeated: Aha ha wasp - Aha ha [beaten by Hippopotamus] | 0
❌Defeated: Colossal Whale - Perucetus colossus [beaten by Meerkat] | 0
❌Defeated: Cuban Cockroach - Panchlora nivea [beaten by Hippopotamus] | 0
❌Defeated: Panther Chameleon - Furcifer pardalis [beaten by Saltwater Crocodile] | 0
❌Defeated: Giant Pterosaur - Quetzalcoatlus northropi [beaten by Ping-Pong Tree Sponge] | 0
❌Defeated: Lesser Flamingo - Phoeniconaias minor [beaten by Honey Badger] | 0
❌Defeated: Dermophis donaldtrumpi Caecilian - Dermophis donaldtrumpi 🥈 [beaten by Brown-throated Three-toed Sloth] | 0
❌Defeated: New World Screwworm - Cochliomyia hominivorax [beaten by Hippopotamus] | 0
❌Defeated: Chinese Giant Salamander - Andrias davidianus [beaten by Saltwater Crocodile] | 0
🚫Ineligible: [edited in bad faith] | 0
❌Defeated: Australopithecus - Australopithecus afarensis [beaten by Platypus] | 0
Other | 0

Option | Votes
YES | 1053
NO | 961

Option | Probability (%)
Seattle Seahawks (Last Extreme - Super Bowl Winner 2013-2014) | 99
New England Patriots (Last Extreme - Super Bowl Winner 2018-2019) | 74
Philadelphia Eagles (Last Extreme - Super Bowl Winner 2024-2025) | 66
Detroit Lions (Last Extreme - Worst Team 2008-2009) | 63
Kansas City Chiefs (Last Extreme - Super Bowl Winner 2023-2024) | 63
Los Angeles Rams (Last Extreme - Super Bowl Winner 2021-2022) | 60
New York Giants (Last Extreme - Super Bowl Winner 2011-2012) | 57
Denver Broncos (Last Extreme - Super Bowl Winner 2015-2016) | 56
Buffalo Bills (Last Extreme - Worst Team 1986-1987) | 55
Baltimore Ravens (Last Extreme - Super Bowl Winner 2012-2013) | 55
Green Bay Packers (Last Extreme - Super Bowl Winner 2010-2011) | 55
Washington Commanders (Last Extreme - Super Bowl Winner 1991-1992) | 54
Dallas Cowboys (Last Extreme - Super Bowl Winner 1995-1996) | 50
Pittsburgh Steelers (Last Extreme - Super Bowl Winner 2008-2009) | 50
Chicago Bears (Last Extreme - Worst Team 2022-2023) | 50
Minnesota Vikings (Last Extreme - N/A) | 47
Houston Texans (Last Extreme - Worst Team 2013-2014) | 47
Tampa Bay Buccaneers (Last Extreme - Super Bowl Winner 2020-2021) | 47
Los Angeles Chargers (Last Extreme - Worst Team 2003-2004) | 47
San Francisco 49ers (Last Extreme - Worst Team 2004-2005) | 47
Cincinnati Bengals (Last Extreme - Worst Team 2019-2020) | 45
Jacksonville Jaguars (Last Extreme - Worst Team 2021-2022) | 45
Tennessee Titans (Last Extreme - Worst Team 2024-2025) | 43
Miami Dolphins (Last Extreme - Worst Team 2007-2008) | 41
Indianapolis Colts (Last Extreme - Worst Team 2011-2012) | 41
Atlanta Falcons (Last Extreme - Worst Team 1987-1988) | 41
Carolina Panthers (Last Extreme - Worst Team 2023-2024) | 41
Arizona Cardinals (Last Extreme - Worst Team 2018-2019) | 41
Las Vegas Raiders (Last Extreme - Worst Team 2025-2026) | 41
New York Jets (Last Extreme - Worst Team 1995-1996) | 37
New Orleans Saints (Last Extreme - Super Bowl Winner 2009-2010) | 37
Cleveland Browns (Last Extreme - Worst Team 2017-2018) | 26
Las Vegas Raiders (Last Extreme - Worst Team 2006-2007) | 0

Option | Probability (%)
AFC East: Buffalo Bills | 54
AFC North: Baltimore Ravens | 51
NFC East: Philadelphia Eagles | 43
NFC West: Seattle Seahawks | 41
AFC East: New England Patriots | 40
AFC West: Denver Broncos | 38
NFC West: Los Angeles Rams | 37
AFC South: Houston Texans | 36
NFC South: Tampa Bay Buccaneers | 34
AFC West: Kansas City Chiefs | 33
NFC North: Detroit Lions | 33
NFC North: Green Bay Packers | 33
AFC South: Jacksonville Jaguars | 32
AFC West: Los Angeles Chargers | 32
NFC East: Dallas Cowboys | 29
NFC South: Carolina Panthers | 29
AFC North: Cincinnati Bengals | 26
NFC South: New Orleans Saints | 26
NFC South: Atlanta Falcons | 25
NFC North: Chicago Bears | 24
AFC South: Indianapolis Colts | 22
NFC West: San Francisco 49ers | 22
AFC North: Pittsburgh Steelers | 21
NFC North: Minnesota Vikings | 19
NFC East: Washington Commanders | 16
NFC East: New York Giants | 16
AFC South: Tennessee Titans | 13
AFC West: Las Vegas Raiders | 7
AFC North: Cleveland Browns | 7
AFC East: Miami Dolphins | 6
AFC East: New York Jets | 5
NFC West: Arizona Cardinals | 4

Option | Probability (%)
Wins a preseason game | 100
QB Skylar Thompson starts 1 game or more during the 2024 regular season | 100
The team moves or is renamed before 2050 | 48
Scores over 23.5 points in Week 1 | 0
Drafts at least one QB in the NFL Draft | 0
Two running backs with over 1,000 all-purpose yards | 0
Tua starts at least 16 games | 0
Positive rush EPA in the 2024-25 regular season | 0
Any player (non-QB) scores 3 or more TDs in a game | 0

Option | Votes
YES | 354
NO | 95

Option | Probability (%)
Malik Willis | 89
Quinn Ewers | 4
Other | 3
Zach Wilson | 2
Tua Tagovailoa | 1

Option | Probability (%)
Las Vegas Raiders | 96
New York Jets | 87
Pittsburgh Steelers | 82
Los Angeles Rams | 59
Cleveland Browns | 59
Arizona Cardinals | 54
Miami Dolphins | 40
New Orleans Saints | 24
Dallas Cowboys | 24
Tampa Bay Buccaneers | 24
San Francisco 49ers | 24
Detroit Lions | 24

Option | Votes
YES | 300
NO | 33

Option | Probability (%)
New York Jets | 75
Other | 5
Kansas City Chiefs | 3
Houston Texans | 3
Denver Broncos | 3
Dallas Cowboys | 3
Minnesota Vikings | 3
New England Patriots | 3
Buffalo Bills | 3
Miami Dolphins | 3

Option | Votes
YES | 523
NO | 329