Option | Probability
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 17
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 11
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 10
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 10
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 8
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 7
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 6
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 5
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 4
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 3
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 2
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 2
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 2
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 2
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 1
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 1
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 1
Option | Probability
Trump will be the POTUS | 100
FED rates will be below 5% on 1st January 2025 | 100
MrBeast Channel Subscribers will be between 300M and 350M on 1st Jan 2025 | 100
At least 1 earthquake with magnitude 8.0 or higher | 100
1 or more Jan 6 rioters get pardoned | 100
Israel attacks Iranian nuclear facilities | 100
Hollow Knight: Silksong released | 100
LEMMiNO releases a new video | 100
Elon removed as DOGE head | 100
Pope dies | 100
Dodgers win World Series | 100
Yoon Suk Yeol impeachment upheld by supreme court | 100
It's revealed that Elon Musk had more children than the 13 already known | 100
Famous global brand goes through a rebranding | 100
GPT-5 releases | 100
T1 three-peat as League of Legends world champions | 99
Los Ratones win against T1, or lose 2-3 (either case should resolve YES; if they don't play against each other this should be NO) | 99
This whole market gets more than 300 traders | 99
Russia retains de facto control over Crimea | 98
Avatar: Fire and Ash released | 98
King Charles will be alive for the whole year of 2025 | 96
FED rates will be below 4% on 31st December 2025 | 93
Any incumbent world leader diagnosed with cancer after 28th January 2025 | 68
Alan's Conservative Countdown to AGI >= 97% | 43
There will be fewer MAU at Manifold in December 2025 than in January 2025 | 20
In one of the monthly polls, >= 80.0% of Manifold users who provide an opinion agree that weak AGI has been achieved | 19
Yoon Suk Yeol convicted of insurrection | 16
Saudi Arabia - Israel normalize relations | 10
MKBHD marries his girlfriend Nikki Hair | 9
At least one world leader (prime minister, president, or monarch) gets assassinated while in office | 9
Department of Education ended by executive order (even if temporarily) | 9
Ali Khamenei assassinated | 8
Ali Khamenei dies a natural death | 7
Ukraine internal coup attempt | 7
Elections in Bangladesh come to pass (currently planned for late 2025 to early 2026) | 6
At least one Premier of a Canadian province proposes joining the USA | 6
Any newspaper reports a US default on any debt | 6
In one of the monthly polls, a plurality of Manifold users declares the achievement of weak superintelligence | 6
Chuck Schumer resigns | 6
Californian independence gains enough signatures | 6
Attempted assassination of Zelensky in which he gets hurt but survives | 6
Attempt to assassinate Trump in 2025 | 6
Russia-Ukraine war will end in 2025 | 5
Harry and Meghan divorce | 5
Early general elections in Pakistan | 5
Humanity's Last Exam passed with a grade of 80% | 5
US-China war | 5
Biden gets hospitalised | 5
EF3+ Tornado Impacts Florida | 5
United States government acknowledges the existence of life anywhere other than on or close to Earth | 4
Kate and William divorce | 4
Sam Altman gets fired again | 4
At least one Premier of a Canadian province takes a concrete step, such as holding a referendum, to join the USA | 4
Humanity's Last Exam passed with a grade of 90% | 4
One of the Millennium Prize problems falls to a model | 4
Zelensky assassinated | 4
Article 5 of NATO invoked | 4
Marco Rubio fired or "resigns" upon rumors of his impending termination | 4
Elon Musk diagnosed with COVID-19 | 4
Congress repeals any tariff | 4
In one of the monthly polls, >= 33.3% of Manifold users who provide an opinion agree that non-human intelligence has influenced Earth | 3
A country leaves NATO | 3
NATO at war with Russia | 3
A nuclear bomb explodes | 3
FrontierMath solved (>= 80%) and model is available to any person in the United States willing to pay for it | 3
Bitcoin reaches 150k sometime during 2025 | 2
United States government acknowledges the existence of non-human intelligence on or close to Earth | 2
Trump assassinated | 2
2025 hotter than 2024 | 2
USA claims intelligences of extra-terrestrial origin have credibly visited or sent technology to a location within our solar system in an official government statement | 2
Putin assassinated | 2
OpenAI o6 or any o6 variant released | 2
China annexes Taiwan | 2
Filibuster abolished | 2
Putin dies | 2
2025 is the 15th "Year of Three Popes" | 2
Los Ratones go to LoL Worlds | 2
Mike Johnson is not Speaker | 2
Biden dies | 2
Bethesda releases a new game in the Elder Scrolls series | 2
Taylor Swift will be "single" anytime during 2025 | 1
US debt ceiling will be scrapped during 2025 | 1
Apple will release first foldable smartphone during 2025 | 1
The Winds of Winter by GRR Martin will be released | 1
LK99 replicates at last | 1
Greenland joins USA | 1
End of the world by celery | 1
Ukraine joins NATO | 1
Humanity's Last Exam passed with a grade of 80% on or before June 30 | 1
A US state secedes from the union | 1
51st US State | 1
A country leaves the EU | 1
Phillies win the World Series | 1
Peter Turkson elected Pope | 1
GTA 6 releases | 1
After the release of GPT-5, every monthly Manifold poll results in a plurality of respondents agreeing that weak AGI has been achieved | 1
Federal Reserve cuts rates to zero | 1
2 or more hurricanes make landfall in the US | 1
Harris will be the POTUS | 0
Bitcoin will be > 100k on 1st January 2025 | 0
MrBeast Channel Subscribers will be below 300M on 1st Jan 2025 | 0
MrBeast Channel Subscribers will be above 350M on 1st Jan 2025 | 0
Jimmy Carter will be alive on 1st January 2025 | 0
Jimmy Carter will be alive on 31st December 2025 | 0
Humanity's Last Exam passed with a grade of 80% on or before March 31 | 0
Francis is always Pope | 0
New pope takes name "Francis" | 0
First papal conclave >= 10 ballots | 0
First conclave elects Pope on first ballot | 0
Option | Votes
YES | 54636
NO | 1830
Option | Probability
Tumbles is late, Canada does not enter a recession. | 99
Tumbles is never late, Canada does not enter a recession. | 1
Tumbles is never late, Canada enters a recession. | 0
Tumbles is late, Canada enters a recession. | 0
Option | Probability
Tumbles will lose it all before 2026 (all-time profit less than -Ṁ3,269,694) | 100
Tumbles will not make a payment on it before receiving a new loan of at least 20k mana from anyone | 100
Tumbles will not pay it off in full before 2027 | 98
Tumbles will quit Manifold before repaying the loan | 94
Tumbles will not pay any of it off before 2026 | 20
Tumbles will not pay any of it off before 2027 | 8
Tumbles will have to pay it off in some form, and will be late on a payment before 2026 | 4
Tumbles will have to pay it off in some form, with payments due before 2026, and will not be late on any payment before 2026 | 3
Tumbles will "pay" it back but not with Mana... | 3
Manifold will "loan" at least Ṁ1,000,000 more to Tumbles before 2026 | 2
It will be wiped/solved/paid by @Mira | 2
Option | Votes
YES | 15703
NO | 6520
Option | Probability
Peter Njeim is permanently banned | 100
Tumbles' loan repayments are cancelled | 100
Tumbles is banned temporarily (i.e. not permanently) | 100
The main "Will Tumbles be late to pay back a loan" market will be resolved by a mod | 100
N/A, option added by mistake | 50
Mods approval needed to resolve high worth markets | 4
Tumbles is permanently banned | 3
Manifold changes mechanics/permissions required for managrams | 3
Adding a market setting or flag/filter where the market owner can't trade in that market | 3
Tumbles is bailed out by Manifold | 2
Tumbles is bailed (>2mil loan/transfer) by non-staff/non-admin user | 2
The main "Will Tumbles be late to pay back a loan?" market will resolve N/A | 1
Option | Probability
Jeopardy | 98
Sesame Street | 97
White Lotus | 92
Simpsons | 91
Battlebots | 90
Cyberchase | 90
Wheel of Fortune | 90
Spongebob Squarepants | 89
Homestar Runner | 89
Peppa Pig | 88
Bob's Burgers | 88
Survivor | 86
Antiques Roadshow | 85
Rick and Morty | 81
Shark Tank | 76
Ancient Aliens | 74
Paw Patrol | 73
NCIS | 70
South Park | 67
Bluey | 66
Ghost Adventures | 61
Storage Wars | 60
Deadliest Catch | 60
Family Feud | 60
Black Mirror | 58
Always Sunny in Philadelphia | 57
Pawn Stars | 56
Phineas and Ferb | 56
Severance | 55
American Pickers | 54
Family Guy | 50
Abbott Elementary | 50
Teen Titans Go! | 50
Robot Chicken | 50
Beavis and Butt-Head | 50
Law & Order (revival) | 50
The Incredible Dr. Pol | 50
Ghost Hunters | 50
Impractical Jokers | 50
American Horror Story | 50
Law & Order: SVU | 50
Grey's Anatomy | 50
Forged in Fire | 46
The Bear | 45
Hazbin Hotel | 40
Is It Cake | 38
American Dad | 38
True Detective | 29
Futurama | 28
Curb Your Enthusiasm | 21
Red vs Blue | 17
Stranger Things | 6
Late Show with Stephen Colbert | 0
Option | Probability
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 20
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 10
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 8
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 7
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 6
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 6
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 6
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 6
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 6
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 4
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 4
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 4
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 3
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 2
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 2
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 1
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 1
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 1
Option | Probability
Eliezer Yudkowsky remains unwavering that the probability of AI annihilation is greater than or equal to 95% | 99
Eliezer Yudkowsky is alive during the entire month | 99
Sam Altman still CEO through end of month | 99
noam chomsky alive until end of month | 98
at least 3 xkcd comics with no stick figures in it | 96
Federal Reserve cuts interest rates | 86
Volcano dormant for half a year erupts | 83
trump posts on twitter/x ten or more times | 81
NASDAQ hits an all time high | 80
nVidia becomes the largest company in the history of the world (by market capitalization) at least once | 74
nyc receives 3 or more inches of rain | 72
web3isgoinggreat has at least 20 posts this month | 69
large tech company announces layoffs by my judgement | 69
a tesla catches fire as reported by mainstream news | 69
trump leaves usa at least once | 58
Anthropic or Meta release a new model | 50
$500 in mana sold in a single day during the month | 50
An original, Nintendo approved game with "Mario" in its title is announced | 47
earthquake 7.0+ magnitude | 37
New Half-Life game announced | 32
major tech company merger/acquisition announced | 31
spacex launches 15 or more rockets | 31
spider man beyond the spider verse release date announced | 31
another indictment revealing a right wing media personality is taking money from russia | 31
a supreme court justice is replaced, or announces retirement | 31
luffy finds the one piece | 20
earthquake 7.5 magnitude or higher | 20
another indictment or charges announced against trump | 20
Tumbles is late to pay back a loan https://manifold.markets/Tumbles/will-tumbles-ever-be-late-to-pay-ba | 14
destiny goes on joe rogan | 10
israel hamas ceasefire | 8
usa bombs or missile strikes lebanon | 5
twitter/x banned in another country (announced, even if it can be circumvented) | 5
ukraine russia ceasefire | 4
usa troops enter lebanon | 3
china taiwan military conflict resulting in at least 1 death | 2
bitcoin reaches 150k or more at least one day | 2
winds of winter (next GoT book) release date announced | 2
usa troops enter syria | 2
usa bombs or missile strikes syria | 2
Hurricane landfalls in Florida | 2
american airstrike or drone strike or missile strike on iran | 2
100F temperature in nyc at least one day | 2
starship launch | 1
dick cheney still alive at end of month | 1
Eliezer Yudkowsky is not alive for part of the month, but is alive on December 31. | 1
silksong game releases | 0
silksong game release date announced | 0
Option | Probability
Other | 48
Roman Republic / Empire | 31
Late renaissance Germany | 8
Postclassic / Colonial period Mesoamerica (e.g. Mayan city states, Aztec Empire, Spanish Empire) | 3
Qin Dynasty China | 2
Late Horizon / Colonial period Andean America (e.g. Inca Empire, Spanish Empire) | 2
Colonial period West Africa (e.g. Ashanti, Dahomey, Benin) | 2
Golden Age India (e.g. Gupta Empire) | 2
Option | Votes
NO | 54
YES | 46