Option | Probability (%)
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 19
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 18
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 16
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 9
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 9
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 6
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 6
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 3
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 2
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 1
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 1
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 1
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 0
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 0
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 0
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 0
Option | Probability (%)
Any model announced before 2034 | 90
Any model announced before 2033 | 88
Any model announced before 2032 | 86
Any model announced before 2031 | 84
Any model announced by Google before 2030 | 82
Any model announced before 2029 | 80
Any model announced before 2030 | 80
Any open-weights model announced before 2030 | 80
Any model announced by xAI before 2030 | 74
Any model announced by Anthropic before 2030 | 73
Any model announced by OpenAI before 2030 | 72
Any model announced before 2028 | 67
Any model announced by a Chinese lab before 2030 | 65
Any model announced by Meta before 2030 | 56
Any model announced by SSI before 2030 | 54
Any model announced before 2027 | 29
GPT-6 | 25
Gemini 4 | 25
Any model announced before July 1, 2026 | 21
Any Claude 5 model | 15
Grok 5 | 15
Claude 5 Opus | 15
Gemini 3.5 | 14
Claude 3.5 Opus | 13
OpenAI o4 | 10
DeepSeek-V4 | 9
GPT-5 | 0
Grok 3 | 0
OpenAI o3 | 0
Any model announced before 2026 | 0
Llama 4 | 0
GPT-4.5 | 0
Gemini 2.5 | 0
Gemini 3 | 0
Any Claude 4 model | 0
grok-4 | 0
Sonnet 4.5 | 0
Kimi K2 | 0
GPT-5.1 | 0
Claude 4.5 Opus | 0
GPT-5.2 | 0
Opus 4.6 | 0
Option | Probability (%)
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 20
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 10
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 8
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 7
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 6
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 6
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 6
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 6
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 6
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 4
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 4
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 4
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 3
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 2
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 2
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 1
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 1
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 1
Option | Probability (%)
Isaac likes experimenting | 100
You'd say you're more of a dog person than a cat person | 100
You've been in a desert once | 100
You regret ever making WvM (feel free to n/a if you'd rather not say) | 100
You have ever looked through the list of a Manifold user's bets to determine their position on some question | 100
You've been vegetarian for >1 continuous year | 100
(if you're not married) you've thought about marrying your current partner (if you're in a relationship) | 100
You like listening to violin more than you like listening to piano | 100
You're older than me (I'm 23) | 100
You consider your partner to be your best friend as well | 100
You're younger than 30 | 100
You think it's fine for minors to date adults | 90
You consider yourself part of the rationality sphere | 77
You think of yourself as a dog person more than a cat person | 70
You seem to enjoy indie stuff more than the average person does | 53
You've defected on some sort of deal you made with another person | 49
The first book you remember reading in your life has an animal as its main character | 49
You tend to wear the same outfit every day | 41
You think the effective altruism philosophy is good but the community overall is ineffective in implementing that philosophy | 32
You've spent a day in which you read a (single) book (or similar) for 10+ hrs | 30
You're an effective altruist | 20
You are qorrenqial | 8
You've read the CFAR Handbook | 5
You've held a gun in your hands before | 0
You've fired a gun at a target before | 0
You've jumped out of a plane | 0
You have attended some program or camp organised by MIRI | 0
You've knowingly misresolved a market once even though you didn't get called out for it | 0
You've been to a nightclub | 0
You've passed out from drinking alcohol | 0
You like to eat out more than you like to cook and eat | 0
You've worked with LED strip lights | 0
You've wondered what it'd be like to have a different name | 0
*You* have more than 6 stuffed animals | 0
You know >4 programming languages | 0
You know >=3 natural languages (in at least one of reading, speaking, or listening comprehension) | 0
You've felt desires to purchase a car with mana | 0
You have a political belief that'd cause at least one close friend to cut ties if they knew about it | 0
You wish you spent significantly less time on Manifold | 0
You sleep more during 7am->7pm hours than during 7pm->7am hours | 0
You prefer that these answers start with "You" rather than with "isaac" | 0
You feel you're the smartest member of your close family (mother, father, siblings) | 0
You have thought about shooting lasers from your eyes at traffic lights | 0
You have once made a joke about your name's similarity to Martin Luther King Jr. | 0
You think the effective altruism community is ineffective at PR and image management | 0
You have done some coding in Python | 0
Option | Probability (%)
Something that helps maintain an elaborate deception involving a pit fiend. | 77
Making Carissa seem like a god to her (ADDED LATE, MAY BE DISQUALIFIED EVEN IF CORRECT) | 19
Something to assist in extracting information from her mind. | 2
Other | 1
A bluff. | 0
Something to prevent her from committing suicide | 0
Something that will do something unpleasant to her if removed, in order to avoid situations where she enters an Antimagic Field or gets hit by a powerful Dispel. | 0
Something to prevent self-deception, in hopes she will realize what awaits her in Hell and voluntarily help them. | 0
Something that helps locate her or bring her back if she escapes/leaves | 0
Something to impair her in a way potentially useful for multiple purposes, like the ability stat penalties. | 0
Something else not predicted by any other answer at the time the market closes. | 0
Makes Abrogail's body into a weapon (like a bomb, or plague vector, or something) | 0
Something to further reduce her ability stats and be removed bit by bit in order to fake Wish-enhancement. | 0
The answer will never be officially revealed. | 0
Something to help her learn rationality | 0
Option | Votes
NO | 1013
YES | 926
Option | Probability (%)
Something that's physical and tangible | 83
Joyboy's treasure | 72
Some sort of drink or cup | 56
A way to connect the four seas, putting the ocean in one piece | 54
Ancient Weapons are part of unlocking the One Piece | 50
Something that would help with dethroning Imu | 50
The One Piece is at the same location as the Devil Tree | 50
Binks' Sake | 41
Something that fulfills the wishes of all of the Straw Hat Pirates | 28
Something directly related to Devil Fruits | 28
Option | Probability (%)
He moves to another university, away from Harvard, by mid 2026 | 64
He has a child by mid 2026 (best wishes to him and his family) | 57
He writes a book which appears on the New York Times bestseller list by mid 2029 | 44
He joins George Mason University by mid 2026 | 33
He spends at least a year overseas by mid 2029 | 31
He wins a Nobel prize by mid 2036 | 24
He joins Manifold and is verified by mid 2026 | 24
He moves to another university, away from Harvard, by mid 2024 | 0
He moves to another university, away from Harvard, by mid 2025 | 0
He retires from academic life by mid 2024 | 0
Harvard withdraws or softens the results of the investigations which found against him by mid 2025 | 0
Harvard apologizes for the claims against him and admits they were without merit by mid 2025 | 0
Option | Votes
YES | 110
NO | 91
