Option | Probability (%)
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 16
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 14
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 12
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 10
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 8
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 8
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 7
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 6
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 5
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 5
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 2
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 1
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 0
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 0
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 0
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 0
Option | Votes
YES | 7748
NO | 3856
Option | Probability (%)
Multivariate statistics | 69
Game Theory | 68
Computer-aided Statistical Analysis | 62
Stochastic models | 60
Applied Microeconometrics | 47
Incentives and economic institutions | 41
Auctions and markets | 41
Time series analysis | 39
Behavioral Economics | 32
Introduction to Economics of Information | 32
Monetary theory and monetary policy | 29
Non-parametric statistics | 27
Economic history | 25
Behavioral Finance | 25
Experimental economic research | 24
Industrial economics | 24
Collective Choice | 23
Environmental economics | 23
Labor markets and population economics | 21
Bounded Rationality | 21
Empirical Corporate Finance | 21
Financial and social policy | 21
Business planning | 21
International banking services | 19
Contract theory | 18
Company valuation | 18
Bank management | 17
International Economics | 17
Development Economics | 17
Advanced Corporate Finance | 16
Political Economy | 14
Cost management and cost accounting | 12
Personnel economics | 11
International accounting according to IFRS | 11
Accidental duplicate (Will resolve to 0%) | 0
Option | Probability (%)
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 20
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 12
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 10
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 8
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 8
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 7
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 6
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 5
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 5
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 3
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 3
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 3
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 2
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 1
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 1
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 1
Option | Votes
YES | 1940
NO | 824
Option | Probability (%)
Newton | 93
Einstein | 91
Maxwell | 77
Richard P. Feynman | 67
Someone who solves quantum gravity | 59
Fermi | 41
Dirac | 41
Heisenberg | 41
Planck | 41
Schrödinger | 39
Bose | 36
Lorentz | 34
Boltzmann | 34
Pauli | 32
Someone with high achievement in experimental physics | 31
Ibn al-Haytham | 30
Someone to cause a paradigm shift in understanding the physics of experience | 30
Bohr | 25
Poincaré | 25
Tycho Brahe | 24
Johannes Kepler | 24
Someone who convincingly replaces quantum mechanics with a better framework | 21
Stephen Hawking | 21
Ernst Mach | 19
Archimedes | 15
Demokritos (sp) | 13
Shen Kuo | 13
Roger Penrose | 9
Option | Votes
YES | 1025
NO | 976
Option | Votes
YES | 1106
NO | 929
Option | Votes
YES | 1068
NO | 937
Option | Votes
YES | 205
NO | 122
Option | Votes
NO | 190
YES | 151
Option | Probability (%)
I get my car a very weird paint job which would make it unsellable | 66
Will go to Edge Esmeralda (in any capacity, at least one event) | 66
Will get cavity-preventing bacteria | 66
Japan JET Reunion 2025 | 66
I buy a video camera that can record for 24h+ and record various public places | 52
Pickleball tournament | 52
I use computer vision to run DIY studies of politeness behavior, grouped by car brand etc | 50
I start an AI beauty/art Twitter account and get 1k followers | 50
I document a method of self-modifying an SLR by blackening all surfaces | 50
Publish and finish personal/group predictions tracking software | 50
AI phrenology: publish a doc on a study of AI image generation models' differential portrayals of previously very hard to measure aspects of the human condition, such as facial shape linkages to personality | 50
Build a personal pickleball court and attract community members to play there | 50
Surreptitiously construct additional pickleball courts at Foster City | 50
Publish an edited dashcam video of all the things I've filmed with the Tesla SSD | 50
Tool to view Twitter as someone else. Given that follows are public, this should be easy | 50
Create a nice Tesla Model Y fold-over screen cover with cutouts for vital areas only. Makes night driving much easier. No need for a distracting map view if you're not turning for 200 miles | 50
I off-road hike far into Iron Point outside Golconda, NV | 50
I set a personal daily mileage record | 50
Make a Balatro mod which shows the currently selected hand's score if played (i.e. everyone needs this for this otherwise great game) | 50
Get Google Photos to fix any of their innumerable UI bugs by complaining | 50
Have a sci-fi story published online somewhere | 50
Make neighborhood intro video with stories etc | 50
Will go to a natal conference again | 50
Local free library and bulletin board at my place | 50
Overland trek to CITY in Nevada, an off-limits experimental art city which has been in progress for over 50 years | 50
Go to Mexico to see Trevor Bauer play | 50
Go see the Starship 4 launch in Texas in 2024 | 50
Release "Reading the mind in the AI generated images" | 50
Get over 1k users for my browser extension | 50
Ship something involving image => description => more images to users | 50
I go to West Africa and go to concerts, visit Nollywood etc | 48
I write a science fiction novel about aliens who experiment with forced evolution | 48
Browser extension to add metadata on all people's names, relating to their offspring number, success, etc | 48