| Option | Probability (%) |
|---|---|
| J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 19 |
| K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 18 |
| C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 11 |
| M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 8 |
| Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 8 |
| A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 6 |
| B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 5 |
| G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 5 |
| I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 5 |
| O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5 |
| E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 4 |
| D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 3 |
| H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1 |
| L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 1 |
| F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 0 |
| N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 0 |
| If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 0 |
| You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 0 |

| Option | Probability (%) |
|---|---|
| [JAP] Red flag during Qualifying session | 100 |
| [BAH] Safety car during the race | 100 |
| [BAH] Yuki Tsunoda scores points | 100 |
| [BAH] Bearman scores points | 100 |
| [SAU] Safety Car during the race | 100 |
| [SAU] Tsunoda makes it into Q3 | 100 |
| [SAU] Leclerc will get top 4 in the race | 100 |
| [MIA] Norris qualifies ahead of Piastri (main race qualifying) | 100 |
| [MIA] Non-McLaren pole in Sprint Qualifying | 100 |
| [MIA] Piastri finishes ahead of Norris in GP | 100 |
| [MIA] Williams double points in the main GP | 100 |
| [MIA] Over 3.5 Drivers DNF in the main race | 100 |
| [MIA] Both McLaren on the podium | 100 |
| [IMO] Physical Safety Car during the race | 100 |
| [IMO] Max Verstappen Podium Finish | 100 |
| [IMO] Colapinto replaces Doohan in at least one session | 100 |
| [IMO] Piastri qualifies ahead of Norris | 100 |
| [IMO] Tsunoda scores point(s) | 100 |
| [MON] Red Flag during Qualifying | 100 |
| [MON] Pole Sitter wins the race | 100 |
| [MON] At least one impeding penalty during qualifying | 100 |
| [MON] 4+ pit stops in the first 5 laps | 100 |
| [MON] Norris scores more points than Piastri | 100 |
| [MON] Yellow flag on the 1st lap of the race | 100 |
| [MON] Both Ferraris finish in top 5 | 100 |
| [SPA] Both Ferraris reach Q3 | 100 |
| [SPA] McLaren Pole Position | 100 |
| [SPA] McLaren 1-2 (race) | 100 |
| [CAN] Physical Safety Car during the race | 100 |
| [CAN] Red Flag during Qualifying | 100 |
| [CAN] Stroll starts the race | 100 |
| [CAN] Impeding penalty during Qualifying (at least 1) | 100 |
| [CAN] Mercedes Podium | 100 |
| [CAN] Hulkenberg finishes in top 10 | 100 |
| [CAN] Leclerc finishes ahead of Hamilton | 100 |
| [AUT] Physical Safety Car during the race | 100 |
| [AUT] Yellow flag first lap | 100 |
| [AUT] Bortoleto reaches Q3 | 100 |
| [AUT] McLaren 1-2 | 100 |
| [AUT] Leclerc finishes ahead of Hamilton | 100 |
| [AUT] Bortoleto finishes top 10 | 100 |
| [GBR] Yellow Flag first lap | 100 |
| [GBR] Physical Safety Car during the race | 100 |
| [GBR] McLaren 1-2 (race) | 100 |
| [GBR] Non-McLaren pole position | 100 |
| [GBR] VSC (virtual safety car) during the race | 100 |
| [GBR] Gasly finishes in top 10 | 100 |
| [BEL] Physical Safety Car during the main race | 100 |
| [BEL] McLaren double podium (main race) | 100 |
| [BEL] Intermediate or wet weather tyres needed for the main race | 100 |
| [BEL] A Ferrari driver doesn't reach Q3 (main race qualifying) | 100 |
| [BEL] Bortoleto reaches Q3 (main race qualifying) | 100 |
| [BEL] Leclerc podium (main race) | 100 |
| [BEL] Bortoleto in the top 10 (main race) | 100 |
| [BEL] Hamilton top 10 (main race) | 100 |
| [HUN] McLaren 1-2 in the race | 100 |
| [HUN] Leclerc finishes ahead of Hamilton in the race | 100 |
| [HUN] Antonelli finishes in the points | 100 |
| [HUN] Bortoleto finishes in the points | 100 |
| [HUN] Both Aston Martins score points | 100 |
| [HUN] Leclerc is in first position at the end of lap 1 | 100 |
| [NLD] Physical Safety Car during the race | 62 |
| Leave it open for upcoming GP | 53 |
| [NLD] Impeding penalty during qualifying (at least 1) | 49 |
| [MIA] Safety car in the main race | 41 |
| [BEL] Yellow flag first lap (main race) | 8 |
| [JAP] Safety car during the race | 0 |
| [JAP] A rookie crashes and cannot finish the race (technical problems at the gearbox, power unit etc. won't count, it needs to be a crash) | 0 |
| [JAP] Yuki Tsunoda scores points | 0 |
| [JAP] Lawson finishes ahead of Tsunoda | 0 |
| [BAH] Lawson finishes ahead of Tsunoda | 0 |
| [BAH] Ferrari on the podium (at least 1 driver) | 0 |
| [SAU] Mercedes podium (at least 1 driver) | 0 |
| [SAU] McLaren double podium | 0 |
| [SAU] Bearman scores points | 0 |
| [MIA] 🇮🇹 Kimi Antonelli wins the sprint race | 0 |
| [MIA] 🟥 Red flag during main race qualifying | 0 |
| [MIA] Antonelli on the Podium | 0 |
| [IMO] Piastri scores more points than Norris | 0 |
| [IMO] Leclerc finishes ahead of Hamilton (race) | 0 |
| [IMO] Aston Martin both drivers score points | 0 |
| [MON] Physical Safety Car during the race | 0 |
| [MON] Both Mercedes score points (i.e. top 10) | 0 |
| [SPA] Virtual Safety Car during the race | 0 |
| [SPA] Yellow flag first lap | 0 |
| [SPA] Tsunoda goes into Q3 | 0 |
| [SPA] Sauber reaches Q3 (at least one driver) | 0 |
| [SPA] Someone receives a penalty for obstructing during qualifying | 0 |
| [SPA] Tsunoda finishes in top 10 (race) | 0 |
| [SPA] Hamilton finishes ahead of Leclerc (race) | 0 |
| [SPA] Bortoleto finishes in top 10 (race) | 0 |
| [CAN] Yellow flag first lap | 0 |
| [CAN] Tsunoda reaches Q3 | 0 |
| [CAN] Both Williams reach Q3 | 0 |
| [CAN] Williams scores over 5.5 WCC points | 0 |
| [AUT] 3 different teams on the podium | 0 |
| [AUT] Tsunoda reaches Q3 | 0 |
| [AUT] Stroll reaches Q3 | 0 |
| [AUT] Impeding penalty during qualifying | 0 |
| [AUT] Williams scores points (at least one driver) | 0 |
| [AUT] Verstappen penalty for forcing another driver off the track or a collision | 0 |
| [GBR] Both Haas cars finish in the points (top 10) | 0 |
| [GBR] Tsunoda reaches Q3 | 0 |
| [GBR] Williams reaches Q3 (at least 1 driver) | 0 |
| [GBR] At least one Ferrari on the podium | 0 |
| [GBR] Bearman finishes in top 10 | 0 |
| [GBR] At least one Mercedes on the podium | 0 |
| [BEL] Two or more cars DNF (main race) | 0 |
| [BEL] Ferrari in top 3 in the Sprint Qualifying (at least 1 driver) | 0 |
| [BEL] McLaren 1-2 in the Sprint race | 0 |
| [BEL] Verstappen gets a penalty in either the sprint or main race | 0 |
| [BEL] Leclerc on the podium in the sprint race | 0 |
| [BEL] Red flag during qualifying (for the main race) | 0 |
| [BEL] Main race is cancelled/aborted | 0 |
| [HUN] Physical Safety Car during the race | 0 |
| [HUN] Impeding penalty during qualifying (at least 1) | 0 |
| [HUN] Tsunoda reaches Q3 | 0 |
| [HUN] Yellow flag first lap | 0 |
| [HUN] At least two drivers DNF | 0 |

| Option | Votes |
|---|---|
| YES | 29345 |
| NO | 3524 |

| Option | Votes |
|---|---|
| YES | 1936 |
| NO | 116 |

| Option | Probability (%) |
|---|---|
| Make the bread taste good | 92 |
| Don't eat anything for at least 48 hours before eating the bread | 90 |
| Stretch-and-fold after mixing, 3x every 30 min | 88 |
| Create indentation, fill with melted cheese and butter | 85 |
| Bake on upside-down sheet pan, covered with Dutch oven | 85 |
| Resolve this option YES while eating the bread | 80 |
| Donate the bread to a food pantry, homeless person, or someone else in need | 72 |
| Use sourdough instead of yeast | 70 |
| Sprinkle 3 grams of flaky sea salt on top of each loaf before the second bake | 70 |
| Watch the video | 67 |
| Autolyse 20 minutes | 66 |
| 3 iterations of stretch-and-fold, at any time during the 14h waiting period. Minimum wait time between iterations 1 hour | 65 |
| More steam! Either spritz with more water (preferably hot) or actually pour some boiling water in just before closing the lid. | 65 |
| Make a poolish 12 h ahead: 100 g flour + 100 g water + 0.8 g yeast (0.1 %). After it ferments, use this poolish in place of 100 g flour and 100 g water in the final dough. | 64 |
| Bake it with your best friend. | 63 |
| Add 50g honey | 62 |
| Swap 200ml water for milk | 62 |
| Incorporate a whole grain flour (buckwheat for example) | 59 |
| Bake for an amount of minutes equal to the percent this market answer is at when it comes time to begin baking. (Maintain the ±3 minute tolerances and the 2:1 ratio of time before:after the water spritz.) | 52 |
| Use King Arthur Bread Flour instead of All-Purpose | 52 |
| Decompose it into infinite spheres, then a few parts per sphere, rotate the spheres by arccos(1/3), unite them and you will find 2 chilis (Banach-Tarski) | 52 |
| Let dough rise on counter only until double volume or 2h max, any time longer in fridge | 51 |
| Add lots of butter (0.2 ml per gram) | 51 |
| Use 50% whole grain flour | 51 |
| Ditch current process, do everything the same as the video | 50 |
| Toast the bread | 50 |
| Eat the bread while punching @realDonaldTrump in the face | 50 |
| Eat the bread while watching your mana balance steadily tick to (M)0 | 50 |
| Throw the bread at a telescope | 50 |
| Add 50g sugar | 50 |
| Put a baking rack in the Dutch oven before putting the loaf in, raising the loaf off the floor and lofting it over a layer of air. | 50 |
| Replace all water spritz steps with a basting of extra virgin olive oil. | 50 |
| Use flour made from an unconventional grain e.g. barley, millet, oats, rye, sorghum, maize etc. | 50 |
| Assume the chili is not in the interval [0,1], square it for more chili, if it is in (0,1), take the square root, else (equals 0 or 1) add 1 to it. | 50 |
| Assume the chili is in the interval (0,1), square it for less chili, if it is in (1,infinity) take the square root, if it is in (-infinity,0) take the negative of the square of the chili, else (equals 0 or 1) subtract 1 from it. | 50 |
| Get your friends to help you make a batch ten times the size, but add a Pepper X (2.7M Scoville heat units) to the mixture | 50 |
| Add 1 tsp of diastatic malt powder per 3 cups of flour | 48 |
| Replace 10% of flour with farina bona | 47 |
| Bake the bread into a fun shape, like a fish, or an octagon | 47 |
| While the bread is baking, tip every user who voted "Yes" on this option 25 Mana | 46 |
| Add 50g vital wheat gluten | 42 |
| Give ChatGPT your current recipe as well as your take on what optimal bread tastes like, then take that advice for your next bake | 42 |
| Bread flour, 3x yeast, cut rise to ~3h | 41 |
| Use whole wheat to improve the nutrition of the bread | 41 |
| Add an amount of MSG equivalent to half the current salt content | 40 |
| Place small ice cubes between parchment and pot instead of water | 38 |
| Cook the bread with a rod/puck of aluminum foil (or similar) in the core in an attempt to conduct heat through the center of the bread, cooking it evenly like a doughnut. | 37 |
| Make all of the ingredients from scratch. | 35 |
| Add a pinch of sugar | 34 |
| Make the bread edible then throw it in | 34 |
| Buy bread from a Michelin star restaurant. | 34 |
| Increase water by 50 g | 34 |
| Drink vodka while eating the bread | 34 |
| Cover bread with damp paper towel instead of initial water spritz. Rehydrate paper towel during 2nd spritz. Remove paper towel before placing on cooling rack. | 34 |
| Do FOLDED | 34 |
| Quit Manifold into the bread. | 34 |
| Kill the bread into Manifold. | 34 |
| Improve the bread | 33 |
| Start at 500F, drop to 450F and uncover half way through | 32 |
| Grind/powderize all salt used into a fine powder (with pestle & mortar or similar device) | 31 |
| It needs more salt | 31 |
| Add 1/2 cup yogurt to the bread and name the bread “gurt” while addressing it with “yo, gurt”. | 28 |
| Half yeast | 27 |
| Ship a piece of the bread to a random person. | 26 |
| Encourage people to participate in the market in good faith while making the bread | 26 |
| Add 2g? of baking soda | 24 |
| Let dough sit 48 hrs | 24 |
| Resolve this option NO while eating the bread | 24 |
| Put butter into it | 23 |
| Mix half sodium/potassium chloride | 22 |
| Add a tablespoon of sugar | 20 |
| Bake for 5 fewer minutes | 20 |
| Bake one more minute | 20 |
| Make naan bread, an easy-to-make bread | 19 |
| Mail the bread to 1600 Pennsylvania Ave. Washington D.C. | 19 |
| Use tap water instead of fancy RO water | 18 |
| Frost it and put sprinkles on it to make it a birthday cake. | 18 |
| Add sawdust to increase the volume of the bread (but only like 10% sawdust by volume max. maybe 20% if it's good sawdust) | 17 |
| Add as many Jack Daniel's whiskey barrel smoking chips as feasible to the Dutch oven before baking, physically separating them from the bread as necessary while baking. | 17 |
| Eat the bread while sending all your mana to @realDonaldTrump | 17 |
| Bake the Manifold Crane into the Bread | 16 |
| Don't eat anything for at least 24 hours before eating the bread | 16 |
| Quadruple salt | 15 |
| Do all the changes in the top 5 open options by probability, excluding this option | 15 |
| Have someone sell the bread to you at an expensive price | 14 |
| Bake one fewer minute | 14 |
| Bake the cake while wearing a onesie. | 13 |
| Bake Vegemite into it. | 11 |
| Use lemonade instead of water. | 11 |
| Bake for 5 more minutes | 11 |
| Replace salt with sugar | 11 |
| Eat the bread in front of the White House. | 10 |
| Bake vodka into it | 10 |
| Implement all options that resolved NO | 10 |
| Make the bread inedible then throw it out. | 10 |
| Replace flour with flowers | 10 |
| Throw the bread at @realDonaldTrump | 10 |
| Force Feed it to @realDonaldTrump | 10 |
| Think positive thoughts before tasting | 9 |
| Make the bread great again | 9 |
| Cut the bread into the number of traders in the market slices. | 9 |
| Only buy ingredients from 7/11. | 8 |
| Implementing every element listed below. | 8 |
| Put a non-lethal dose of any rat poison. | 8 |
| Just make donuts instead | 8 |
| Bake it in an easy bake kids oven | 7 |
| Use a plastic baking sheet. | 6 |
| Eat the bread while betting yes on Cuomo on Manifold | 6 |
| Ditch all the steps. Just buy the bread from the supermarket | 6 |
| Double oven temperature | 6 |
| Halve oven temperature | 6 |
| Play classical music while baking | 5 |
| Light it on fire with birthday candles. | 5 |
| Bake it with a microwave | 5 |
| Eat the bread while betting yes on Mamdani on Manifold | 5 |
| Wear a suit while baking the cake. | 4 |
| Bake your social security number into it. | 4 |
| Bring it to Yemen and put a bomb in it | 3 |
| Bake America Great Again | 3 |
| Sacrifice a lamb | 2 |
| Add MAGA and a splash of Trump juice | 2 |
| Bake in a cat and a dog | 2 |
| Explode it: | 2 |
| Take a fat dump in the dough | 1 |
| Sit in dough 24 hrs | 1 |
| Let dough sit 24 hrs | 0 |
| Bake in rectangular tin | 0 |
| Double yeast | 0 |
| Halve salt | 0 |
| Double salt | 0 |
| Add 2 tsp olive oil | 0 |
| Refrigerate dough instead of room temp wait | 0 |
| Do not mix salt and yeast in water together | 0 |
| Put fork in microwave | 0 |
| Don't eat anything for at least 12 hours before eating the bread | 0 |
| Add 2 tbsp vanilla extract | 0 |
| Eat the bread with friends | 0 |
| Bake it in the country you were born in. | 0 |
| Eat the bread over the course of a week. | 0 |
| Bake the bread with love | 0 |

| Option | Probability (%) |
|---|---|
| Lando Norris (McLaren) | 32 |
| Oscar Piastri (McLaren) | 32 |
| Max Verstappen (Red Bull) | 28 |
| Charles Leclerc (Ferrari) | 2 |
| Lewis Hamilton (Ferrari) | 2 |
| Yuki Tsunoda (Previously Liam Lawson) (Red Bull) | 1 |
| George Russell (Mercedes) | 1 |
| Kimi Antonelli (Mercedes) | 1 |
| Fernando Alonso (Aston Martin) | 0 |
| Lance Stroll (Aston Martin) | 0 |
| Pierre Gasly (Alpine) | 0 |
| Jack Doohan (Alpine) | 0 |
| Alexander Albon (Williams) | 0 |
| Carlos Sainz Jr. (Williams) | 0 |
| Liam Lawson (Previously Yuki Tsunoda) (VCARB) | 0 |
| Isack Hadjar (VCARB) | 0 |
| Nico Hulkenberg (Sauber/Stake) | 0 |
| Gabriel Bortoleto (Sauber/Stake) | 0 |
| Esteban Ocon (Haas) | 0 |
| Ollie Bearman (Haas) | 0 |

| Option | Probability (%) |
|---|---|
| Mark Manson | 97 |
| Adam M. Curry | 96 |
| Jaan Tallinn | 96 |
| Eric Czuleger | 96 |
| Jack Dorsey | 96 |
| Bryan Johnson | 94 |
| Liron Shapira | 94 |
| Andrew Bustamante | 94 |
| Graham Hancock | 94 |
| Andrew Gallimore | 94 |
| Hamilton Morris | 94 |
| Zach De La Rocha | 94 |
| Audrey Tang | 94 |
| Scott Aaronson | 94 |
| Scott Alexander | 94 |
| Rick Rubin | 94 |
| Nathan Young | 94 |
| KRS-One | 94 |
| Max Tegmark | 93 |
| Rob Miles | 93 |
| Connor Leahy | 93 |
| Samo Burja | 93 |
| George Hotz | 93 |
| Jan Leike | 93 |
| Andrew Huberman | 93 |
| Karl Marx | 93 |
| David Shapiro | 93 |
| Tim Kennedy | 93 |
| Sean Carroll | 93 |
| Roger Penrose | 93 |
| Tim Ferriss | 92 |
| Mark Laita | 92 |
| Chris Ramsey | 92 |
| David Grusch | 92 |
| Nick Bostrom | 92 |
| Parmenides | 91 |
| Pythagoras | 91 |
| David Chalmers | 91 |
| Roman Yampolskiy | 91 |
| Stuart Russell | 91 |
| Brian Armstrong | 91 |
| Nate Soares | 91 |
| Doug Lenat | 91 |
| Geoffrey Hinton | 91 |
| Nathan Labenz | 90 |
| Julian Dorey | 90 |
| Yoshua Bengio | 90 |
| Zvi Mowshowitz | 90 |
| The user @krantz | 89 |
| Dave Smith | 89 |
| David Deutsch | 89 |
| Ilya Sutskever | 89 |
| Hal Puthoff | 88 |
| Eric Weinstein | 86 |
| Lex Fridman | 86 |
| Balaji Srinivasan | 85 |
| Daniel Sheehan | 85 |
| Daniel Hillis | 85 |
| Tom Campbell | 85 |
| Curt Jaimungal | 85 |
| Robin Hanson | 85 |
| Andrew Yang | 85 |
| Matthew Pines | 85 |
| Jesse Michels | 84 |
| Danny Jones | 84 |
| Jack Kruse | 84 |
| Ben Goertzel | 84 |
| Jake Barber | 84 |
| David Hume | 84 |
| Immanuel Kant | 84 |
| Ludwig Wittgenstein | 84 |
| Bertrand Russell | 84 |
| Dan Faggella | 84 |
| Robert Monroe | 83 |
| Mike Benz | 83 |
| Chris Bledsoe | 83 |
| Julian Assange | 83 |
| Michael Levin | 83 |
| Shawn Ryan | 83 |
| Jeffrey Mishlove | 83 |
| Eliezer Yudkowsky | 82 |
| Steven Greer | 82 |
| Harald Malmgren | 82 |
| Noam Chomsky | 82 |
| Matt Beall | 82 |
| Avi Loeb | 81 |
| Robert Epstein | 81 |
| Narendra Modi | 81 |
| Jeffrey Sachs | 81 |
| Garry Nolan | 81 |
| Darryl Cooper | 81 |
| John McAfee | 81 |
| Diogenes | 81 |
| Daryl Davis | 80 |
| David Goggins | 80 |
| Tucker Carlson | 80 |
| Suchir Balaji | 79 |
| Alex Jones | 77 |
| Steven Bonnell (Destiny) | 74 |
| Bernie Sanders | 73 |
| Joe Rogan | 72 |
| Dick Cheney | 66 |
| Ezra Klein | 66 |
| Sam Altman | 66 |
| Dario Amodei | 66 |
| Larry Ellison | 66 |
| Dwarkesh Patel | 66 |
| Donald Trump | 63 |
| George Washington | 59 |
| Robert Kennedy Jr | 57 |
| Douglas Murray | 57 |
| Elon Musk | 53 |
| Palmer Luckey | 50 |
| Peter Thiel | 50 |
| Joseph Farrell | 50 |
| Jeremy Corbell | 50 |
| Amjad Masad | 50 |
| Ammon Hillman | 50 |
| Jordan Peterson | 50 |
| Aaron Rodgers | 50 |
| Sergey Brin | 50 |
| Steve Kwast | 50 |
| Rob Reiner | 50 |
| Erik Prince | 50 |
| Andrew Tate | 36 |
| Alex Karp | 34 |
| Vladimir Putin | 20 |

| Option | Probability (%) |
|---|---|
| Horner will stop being Team Principal first | 99 |
| Verstappen will stop being a Red Bull Racing driver first | 1 |
| They will leave together (same final race) | 0 |

| Option | Probability (%) |
|---|---|
| K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 20 |
| I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 12 |
| C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 10 |
| B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 8 |
| Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 8 |
| M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 7 |
| A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 6 |
| E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 5 |
| J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 5 |
| O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5 |
| D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 3 |
| F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 3 |
| L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 3 |
| G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 2 |
| H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1 |
| N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 1 |
| You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 1 |
| If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 1 |

| Option | Probability (%) |
|---|---|
| David Morales | 51 |
| Felix Rodriguez (Max Gomez) | 51 |
| Raphael "Chi Chi" Quintero | 51 |
| Lee Harvey Oswald | 43 |

| Option | Votes |
|---|---|
| YES | 1034 |
| NO | 956 |

| Option | Votes |
|---|---|
| NO | 188 |
| YES | 79 |