Option | Probability (%)
--- | ---
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 19
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 18
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 11
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 8
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 8
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 6
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 5
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 5
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 5
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 4
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 3
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 1
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 0
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 0
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 0
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 0
Option | Probability (%)
--- | ---
Make the bread taste good | 92
Don't eat anything for at least 48 hours before eating the bread | 90
Stretch-and-fold after mixing, 3x every 30 min | 88
Create indentation, fill with melted cheese and butter | 85
Bake on upside-down sheet pan, covered with Dutch oven | 85
Resolve this option YES while eating the bread | 80
Donate the bread to a food pantry, homeless person, or someone else in need | 72
Use sourdough instead of yeast | 70
Sprinkle 3 grams of flaky sea salt on top of each loaf before the second bake | 70
Watch the video | 67
Autolyse 20 minutes | 66
3 iterations of stretch-and-fold, at any time during the 14h waiting period. Minimum wait time between iterations 1 hour | 65
More steam! Either spritz with more water (preferably hot) or actually pour some boiling water in just before closing the lid. | 65
Make a poolish 12 h ahead: 100 g flour + 100 g water + 0.8 g yeast (0.1 %). After it ferments, use this poolish in place of 100 g flour and 100 g water in the final dough. | 64
Bake it with your best friend. | 63
Add 50g honey | 62
Swap 200ml water for milk | 62
Incorporate a whole grain flour (buckwheat for example) | 59
Bake for an amount of minutes equal to the percent this market answer is at when it comes time to begin baking. (Maintain the ±3 minute tolerances and the 2:1 ratio of time before:after the water spritz.) | 52
Use King Arthur Bread Flour instead of All-Purpose | 52
Decompose it into infinite spheres, then a few parts per sphere, rotate the spheres by arccos(1/3), unite them and you will find 2 chilis (Banach-Tarski) | 52
Let dough rise on counter only until double volume or 2h max, any time longer in fridge | 51
Add lots of butter (0.2 ml per gram) | 51
Use 50% whole grain flour | 51
Ditch current process, do everything the same as the video | 50
Toast the bread | 50
Eat the bread while punching @realDonaldTrump in the face | 50
Eat the bread while watching your mana balance steadily tick to (M)0 | 50
Throw the bread at a telescope | 50
Add 50g sugar | 50
Put a baking rack in the Dutch oven before putting the loaf in, raising the loaf off the floor and lofting it over a layer of air. | 50
Replace all water spritz steps with a basting of extra virgin olive oil. | 50
Use flour made from an unconventional grain e.g. barley, millet, oats, rye, sorghum, maize etc. | 50
Assume the chili is not in the interval [0,1], square it for more chili; if it is in (0,1), take the square root; else (equals 0 or 1) add 1 to it. (See the sketch after this table.) | 50
Assume the chili is in the interval (0,1), square it for less chili; if it is in (1, infinity), take the square root; if it is in (-infinity, 0), take the negative of the square of the chili; else (equals 0 or 1) subtract 1 from it. (See the sketch after this table.) | 50
Get your friends to help you make a batch ten times the size, but add a Pepper X (2.7M Scoville heat units) to the mixture | 50
Add 1 tsp of diastatic malt powder per 3 cups of flour | 48
Replace 10% of flour with farina bona | 47
Bake the bread into a fun shape, like a fish, or an octagon | 47
While the bread is baking, tip every user who voted "Yes" on this option 25 Mana | 46
Add 50g vital wheat gluten | 42
Give ChatGPT your current recipe as well as your take on what optimal bread tastes like, then take that advice for your next bake | 42
Bread flour, 3x yeast, cut rise to ~3h | 41
Use whole wheat to improve the nutrition of the bread | 41
Add an amount of MSG equivalent to half the current salt content | 40
Place small ice cubes between parchment and pot instead of water | 38
Cook the bread with a rod/puck of aluminum foil (or similar) in the core in an attempt to conduct heat through the center of the bread, cooking it evenly like a doughnut. | 37
Make all of the ingredients from scratch. | 35
Add a pinch of sugar | 34
Make the bread edible then throw it in | 34
Buy bread from a Michelin-star restaurant. | 34
Increase water by 50 g | 34
Drink vodka while eating the bread | 34
Cover bread with damp paper towel instead of initial water spritz. Rehydrate paper towel during 2nd spritz. Remove paper towel before placing on cooling rack. | 34
Do FOLDED | 34
Quit Manifold into the bread. | 34
Kill the bread into Manifold. | 34
Improve the bread | 33
Start at 500F, drop to 450F and uncover halfway through | 32
Grind/powderize all salt used into a fine powder (with pestle & mortar or similar device) | 31
It needs more salt | 31
Add 1/2 cup yogurt to the bread and name the bread “gurt” while addressing it with “yo, gurt”. | 28
Halve yeast | 27
Ship a piece of the bread to a random person. | 26
Encourage people to participate in the market in good faith while making the bread | 26
Add 2g? of baking soda | 24
Let dough sit 48 hrs | 24
Resolve this option NO while eating the bread | 24
Put butter into it | 23
Mix half sodium/potassium chloride | 22
Add a tablespoon of sugar | 20
Bake for 5 fewer minutes | 20
Bake one more minute | 20
Make naan bread, an easy-to-make bread | 19
Mail the bread to 1600 Pennsylvania Ave. Washington D.C. | 19
Use tap water instead of fancy RO water | 18
Frost it and put sprinkles on it to make it a birthday cake. | 18
Add sawdust to increase the volume of the bread (but only like 10% sawdust by volume max. maybe 20% if it's good sawdust) | 17
Add as many Jack Daniel's whiskey barrel smoking chips as feasible to the Dutch oven before baking, physically separating them from the bread as necessary while baking. | 17
Eat the bread while sending all your mana to @realDonaldTrump | 17
Bake the Manifold Crane into the Bread | 16
Don't eat anything for at least 24 hours before eating the bread | 16
Quadruple salt | 15
Do all the changes in the top 5 open options by probability, excluding this option | 15
Have someone sell the bread to you at an expensive price | 14
Bake one fewer minute | 14
Bake the cake while wearing a onesie. | 13
Bake Vegemite into it. | 11
Use lemonade instead of water. | 11
Bake for 5 more minutes | 11
Replace salt with sugar | 11
Eat the bread in front of the White House. | 10
Bake vodka into it | 10
Implement all options that resolved NO | 10
Make the bread inedible then throw it out. | 10
Replace flour with flowers | 10
Throw the bread at @realDonaldTrump | 10
Force-feed it to @realDonaldTrump | 10
Think positive thoughts before tasting | 9
Make the bread great again | 9
Cut the bread into a number of slices equal to the number of traders in the market. | 9
Only buy ingredients from 7/11. | 8
Implement every element listed below. | 8
Put a non-lethal dose of any rat poison in it. | 8
Just make donuts instead | 8
Bake it in an Easy-Bake kids' oven | 7
Use a plastic baking sheet. | 6
Eat the bread while betting yes on Cuomo on Manifold | 6
Ditch all the steps. Just buy the bread from the supermarket | 6
Double oven temperature | 6
Halve oven temperature | 6
Play classical music while baking | 5
Light it on fire with birthday candles. | 5
Bake it in a microwave | 5
Eat the bread while betting yes on Mamdani on Manifold | 5
Wear a suit while baking the cake. | 4
Bake your social security number into it. | 4
Bring it to Yemen and put a bomb in it | 3
Bake America Great Again | 3
Sacrifice a lamb | 2
Add MAGA and a splash of Trump juice | 2
Bake in a cat and a dog | 2
Explode it | 2
Take a fat dump in the dough | 1
Sit in dough 24 hrs | 1
Let dough sit 24 hrs | 0
Bake in rectangular tin | 0
Double yeast | 0
Halve salt | 0
Double salt | 0
Add 2tsp olive oil | 0
Refrigerate dough instead of room temp wait | 0
Do not mix salt and yeast in water together | 0
Put fork in microwave | 0
Don't eat anything for at least 12 hours before eating the bread | 0
Add 2tbsp vanilla extract | 0
Eat the bread with friends | 0
Bake it in the country you were born in. | 0
Eat the bread over the course of a week. | 0
Bake the bread with love | 0
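Two of the options in the table above define piecewise arithmetic on the market's "chili" (spiciness) rating. As a minimal sketch, assuming the rating is a plain floating-point number (the function names `more_chili` and `less_chili` are illustrative, not from the market), the two transformations read:

```python
# Hypothetical sketch of the two piecewise "chili" options above.
# Assumption: the chili rating is an ordinary real number (Python float).
import math

def more_chili(x: float) -> float:
    """Outside [0,1]: square it for more chili; in (0,1): take the
    square root; exactly 0 or 1: add 1."""
    if x in (0.0, 1.0):
        return x + 1.0
    if 0.0 < x < 1.0:
        return math.sqrt(x)
    return x * x  # not in [0,1]

def less_chili(x: float) -> float:
    """In (0,1): square it for less chili; in (1, inf): take the square
    root; in (-inf, 0): take the negative of its square; exactly 0 or 1:
    subtract 1."""
    if x in (0.0, 1.0):
        return x - 1.0
    if 0.0 < x < 1.0:
        return x * x
    if x > 1.0:
        return math.sqrt(x)
    return -(x * x)  # x < 0

# Quick check on a few sample ratings:
for v in (0.0, 0.25, 1.0, 2.0, -1.5):
    print(v, "->", more_chili(v), less_chili(v))
```

On every branch except the fixed points 0 and 1 (where they add or subtract 1), the two options push the rating in opposite directions: `more_chili` moves values in (0,1) and (1, ∞) away from the middle, while `less_chili` pulls them back toward 1.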
Option | Probability (%)
--- | ---
Magic: The Gathering | 100
Cult of the Lamb | 100
Death's Door | 100
Shift Happens | 100
Baba is You | 100
Understand | 100
The Witness | 100
Hollow Knight | 100
It Takes Two | 100
Unravel Two | 100
Portal 2 | 100
Braid | 100
Portal | 100
Superliminal | 100
The Talos Principle | 100
The Talos Principle 2 | 100
Antichamber | 100
Perspective | 100
Portal Reloaded | 100
Portal Stories: Mel | 100
Thinking with Time Machine | 100
Cell Machine | 100
Hades | 100
Keep Talking and Nobody Explodes | 100
Superhot | 100
Slay the Princess | 100
Escape Academy | 100
Split Fiction | 88
Don't Starve | 83
The Room | 80
Pico Park | 80
Wargroove | 80
Patrick's Parabox | 80
Scarlet Hollow | 76
Tokimeki Memorial | 76
Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A Game Inside A | 75
Bokura | 73
The Stanley Parable | 72
Stardew Valley | 71
Mini Motorways | 70
Children of Morta | 70
Detroit: Become Human | 70
We Were Here Too | 68
Splendor (tabletop game) | 67
Castle Crashers | 66
Timberborn | 66
Terraria | 65
Gorogoa | 65
Spiritfarer | 63
Minecraft | 63
Beyond: Two Souls | 62
Chants of Sennaar | 62
Patch Quest | 61
We Were Here | 59
Wild Woods | 59
Manifold Garden | 56
We Were Here Forever | 56
We Were Here Expeditions: The FriendShip | 54
Baldur's Gate 3 | 53
Hades 2 | 52
Clandestine | 50
Nobody Saves the World | 50
Lovers in Dangerous Spacetime | 50
Glider (1988) | 50
Rayman Legends | 50
LittleBigPlanet 3 | 50
Human: Fall Flat | 50
Trine 2 | 50
Lara Croft and the Guardian of Light | 50
Magicka | 50
Mind over Magnet | 50
Bean and Nothingness | 50
Yume Nikki | 50
Space Station 13 | 50
999 | 50
Nomifactory | 50
Monifactory | 50
Rebirth of the Night | 50
HyperRogue | 50
Induction | 50
Akane | 50
Zero Time Dilemma | 50
Zero Escape: Virtue's Last Reward | 50
Morrowind | 50
Q.U.B.E. 2 | 50
Q.U.B.E. | 50
We Were Here Together | 44
Sackboy: A Big Adventure | 41
Unstable Unicorns 🦄 | 40
Mario Party | 40
Exploding Kittens | 38
The House of DaVinci | 35
Tunic | 34
Moving Out | 34
Moving Out | 34
Fez | 32
A Couple Of Cubes | 28
Root (tabletop or digital version) | 28
Rain World | 25
Deadly Rooms of Death | 25
CrossCode | 25
Islands of Insight | 24
Outer Wilds | 15
SteamWorld Dig | 15
Snakebird | 10
Monopoly | 10
Chess | 5
Slay the Spire | 0
Among Us | 0
Stephen's Sausage Roll | 0
Amazing Chicken Adventures | 0
Wingspan | 0
Can of Wormholes | 0
Disco Elysium | 0
Overcooked | 0
Return of the Obra Dinn | 0
Inscryption | 0
Slice and Dice | 0
Teardown | 0
Astroneer | 0
Option | Votes
--- | ---
YES | 1351
NO | 1329
Option | Probability (%)
--- | ---
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 20
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 12
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 10
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 8
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 8
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 7
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 6
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 5
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 5
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 3
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 3
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 3
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 2
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 1
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 1
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 1
Option | Votes
--- | ---
NO | 1357
YES | 810
Option | Probability (%)
--- | ---
Task-specific robots | 29
Humans | 22
No local supermarket exists / other | 22
Non-humanoid general-purpose robots | 21
Humanoid general-purpose robots | 7
Option | Votes
--- | ---
NO | 233
YES | 105
Option | Votes
--- | ---
YES | 484
NO | 474