Option | Probability (%)
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 16
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 14
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 12
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 10
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 8
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 8
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 7
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 6
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 5
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 5
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 2
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 1
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 0
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 0
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 0
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 0

Option | Probability (%)
Collingwood | 29
Adelaide Crows | 17
Geelong Cats | 14
Brisbane Lions | 10
Western Bulldogs | 9
Hawthorn | 6
Gold Coast Suns | 4
Sydney Swans | 2
Port Adelaide | 2
GWS Giants | 2
Carlton | 2
Fremantle | 2
Other | 2
Melbourne | 1

Option | Probability (%)
Packers | 11
Bills | 10
Lions | 8
Eagles | 8
Commanders | 8
Patriots | 6
Ravens | 5
Chargers | 5
Chiefs | 5
Buccaneers | 5
Bengals | 5
49ers | 2
Giants | 2
Jets | 2
Seahawks | 2
Broncos | 1
Colts | 1
Rams | 1
Cowboys | 1
Texans | 1
Dolphins | 1
Falcons | 1
Bears | 1
Jaguars | 1
Steelers | 1
Raiders | 1
Vikings | 1
Cardinals | 1
Titans | 1
Panthers | 1
Other | 1
Browns | 0
Saints | 0

Option | Probability (%)
Buffalo Bills | 84
Philadelphia Eagles | 84
Baltimore Ravens | 83
Kansas City Chiefs | 82
Denver Broncos | 79
Detroit Lions | 78
Green Bay Packers | 77
Cincinnati Bengals | 67
Los Angeles Rams | 67
Los Angeles Chargers | 62
San Francisco 49ers | 62
Tampa Bay Buccaneers | 57
Washington Commanders | 54
Houston Texans | 54
Dallas Cowboys | 51
Arizona Cardinals | 51
Pittsburgh Steelers | 48
Minnesota Vikings | 48
Miami Dolphins | 45
New England Patriots | 41
Atlanta Falcons | 41
Chicago Bears | 39
Indianapolis Colts | 36
Seattle Seahawks | 34
Jacksonville Jaguars | 33
New York Giants | 31
Cleveland Browns | 31
Carolina Panthers | 30
Las Vegas Raiders | 30
Tennessee Titans | 24
New York Jets | 20
New Orleans Saints | 12

Option | Probability (%)
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 20
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 12
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 10
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 8
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 8
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 7
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 6
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 5
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 5
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 3
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 3
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 3
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 2
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 1
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 1
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 1

Option | Probability (%)
Cincinnati Bengals | 78
Atlanta Falcons | 57
Seattle Seahawks | 57
Chicago Bears | 53
New York Giants | 52
San Francisco 49ers | 52
Dallas Cowboys | 45
Miami Dolphins | 41
New England Patriots | 37
Jacksonville Jaguars | 33
Las Vegas Raiders | 33
New York Jets | 33
Arizona Cardinals | 31
Carolina Panthers | 28
Indianapolis Colts | 28
Cleveland Browns | 28
Tennessee Titans | 20
New Orleans Saints | 18

Option | Probability (%)
None of the options submitted before the One Piece is revealed | 60
Binks' Sake | 28
A Poneglyph | 4
Other | 4
A single piece of currency (e.g. one coin) | 2
An ancient tool which might not technically be a weapon. | 1
The friends they made along the way | 0
A giant pile of gold and jewels | 0
An ancient weapon | 0
Pineapples | 0
A text, not written on a poneglyph | 0

Option | Probability (%)
Week 2: KC vs. Philadelphia Eagles (09/14) | 65
Taylor Swift attends a 2025 NFL season playoff game | 59
Taylor Swift & Travis Kelce engaged during NFL Regular Season | 50
Taylor Swift attends a game at Highmark Stadium | 50
Taylor Swift attends 2+ 2025 NFL season playoff games | 50
Taylor Swift mentioned at 2025 NFL Honors | 50
Taylor Swift attends Super Bowl LX | 50
Week 3: KC @ New York Giants (09/21) | 45
Week 4: KC vs. Baltimore Ravens (09/28) | 45
Week 6: KC vs. Detroit Lions (10/12) | 45
Week 7: KC vs. Las Vegas Raiders (10/19) | 45
Week 8: KC vs. Washington Commanders (10/27) | 45
Week 9: KC @ Buffalo Bills (11/02) | 45
Week 11: KC @ Denver Broncos (11/16) | 45
Week 13: KC @ Dallas Cowboys (11/27) | 45
Week 14: KC vs. Houston Texans (12/07) | 45
Week 15: KC vs. Los Angeles Chargers (12/14) | 45
Week 16: KC @ Tennessee Titans (12/21) | 45
Week 17: KC vs. Denver Broncos (12/25) | 45
Week 18: KC @ Las Vegas Raiders (01/03 or 01/04) | 45
Week 1: KC @ Los Angeles Chargers (09/05) | 41
Week 12: KC vs. Indianapolis Colts (11/23) | 41
Taylor Swift & Travis Kelce engaged before NFL Regular Season | 40
Week 5: KC @ Jacksonville Jaguars (10/06) | 37
Taylor Swift attends a 2025 NFL Preseason game | 31
Taylor Swift & Travis Kelce split during NFL Regular Season | 29
Travis Kelce proposes to Taylor Swift at Super Bowl LX | 29
Taylor Swift & Travis Kelce split before NFL Regular Season | 21
Taylor Swift & Travis Kelce split during playoffs | 17
Taylor Swift does not attend any NFL Regular Season games | 16

Option | Probability (%)
Other | 91
Imu himself | 66
The giant strawhat in the basement of Mary Geoise | 62
The ancient weapon Uranus | 34
The final road poneglyph | 14

Option | Probability (%)
Ravens | 15
Chiefs | 9
Bills | 9
Eagles | 9
Lions | 7
Bengals | 4
49ers | 3
Packers | 3
Rams | 3
Giants | 3
Commanders | 3
Cowboys | 2
Cardinals | 2
Broncos | 2
Bears | 2
Colts | 2
Chargers | 2
Dolphins | 2
Falcons | 2
Vikings | 2
Buccaneers | 2
Steelers | 2
Texans | 2
Seahawks | 2
Jaguars | 2
Patriots | 1
Jets | 1
Raiders | 1
Titans | 1
Saints | 1
Browns | 1
Panthers | 1
Other | 1

Option | Probability (%)
I lose my engineer privilege | 70
I don't think working on AI will make alignment races worse | 53
AI will not be a giant pain to work with | 51
There will be lots of well-paid AI job opportunities for me | 37

Option | Probability (%)
Other | 44
Collingwood | 37
Brisbane Lions | 8
Geelong Cats | 4
Sydney Swans | 1
Western Bulldogs | 1
Hawthorn | 1
Port Adelaide | 1
Fremantle | 1
Carlton | 1
GWS Giants | 1