Option | Probability (%)
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 17
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 12
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 11
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 9
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 8
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 7
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 6
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 6
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 5
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 4
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 4
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 3
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 3
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 2
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 2
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 1
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 1
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 1
Option | Probability (%)
Sexism and racism, among other forms of prejudice, are responsible for worse health outcomes, and it’s not overly dramatic for people to treat those issues as public health/safety concerns. | 93
Prediction markets are good | 91
[*] ...and things will improve in the future | 87
Tenet (Christopher Nolan film) is underrated | 86
The way quantum mechanics is explained to the lay public is very misleading. | 80
We should be doing much more to pursue human genetic engineering to prevent diseases and aging. | 79
The Fermi paradox isn't a paradox, and the solution is obviously just that intelligent life is rare. | 79
Scientific racism is bad, actually. (also it's not scientific) | 79
Authoritarian populism is bad actually | 79
Prolonged school closures because of COVID were socially devastating. | 77
Most organized religion are false | 77
Nuclear power is by far the best solution to climate change. [N] | 74
Pineapple pizza tastes good | 73
The Many Worlds Interpretation of quantum mechanics | 72
Humans have a responsibility to figure out what if anything we can do about wildlife suffering. | 72
Physician-assisted suicide should be legal in most countries | 71
First-past-the-post electoral systems are not merely flawed but outright less democratic than proportional or preferential alternatives | 71
Liberal-democracy is good actually | 71
Peeing in the shower is good and everyone should do it | 70
It would actually be a good thing if automation eliminated all jobs. | 67
We need a bigger welfare state than we have now. | 67
Many amphetamines and psychedelics have tremendous therapeutic value when guided by an established practitioner. | 66
The proliferation of microplastics will be viewed as more harmful to the environment than burning fossil fuels, in the long term | 65
Free will doesn't require the ability to do otherwise. | 60
American agents are in the highest positions in government for more than half the world. | 60
We should give every American food stamps, in a fixed dollar amount, with no means testing or work requirements or disqualification for criminal convictions. | 59
Metaculus will take over Manifold in more serious topics, and Manifold will be known as the "unserious" prediction market site | 58
Given what we know about the social and health effects of being fired, even if abolishing at will employment has efficiency costs it is likely worth it. | 55
The overall state of the world is pretty good... [*] | 51
Dialetheism (the claim that some propositions are both true and false) is itself both true and false. | 50
Dreams analysis is a legitimate means of gaining personal insight. | 50
Mobile UX will be a key explaining factor in explaining the stories of Manifold and Metaculus. | 50
If a developed nation moves from democratic to authoritarian government today, it should be expected to end up poorer, weaker, sicker, and stupider. | 50
Factory farming is horrific but it is not wrong to eat meat. | 48
California is wildly overrated. | 47
The United States doesn't need a strong third party. | 46
Political libertarianism | 46
Being a billionaire is morally wrong. | 45
Racial Colorblindness is the only way to defeat racism | 45
People will look back on using animal products as a moral disgrace on the level of chattel slavery. | 44
There's a reasonable chance of a militant green/communist movement that gains popular support in the coming decade. | 44
Eating meat is morally wrong in most cases. | 44
You should bet NO on this option | 42
The Windows kernel is better than Linux; it’s just all the bloat piled on top that makes it worse | 41
[N], and to the extent climate activists are promoting other kinds of solutions, they are actively making the situation worse by diverting attention and resources from nuclear power. | 40
White people are the least racist of any racial group | 38
Technology is not making our lives easier or more fulfilling. | 36
COVID lockdowns didn’t save many lives; in fact they may have caused net increases in global deaths and life years lost. | 35
Light mode is unironically better than Dark mode for most websites | 33
God is evil | 33
A sandwich is a type of hot dog | 32
Some people have genuine psychic capabilities | 31
Astrology is a legitimate means of gaining personal insight. | 30
Climate change is significantly more concerning than AI development. | 29
It's acceptable for our systems of punishment to be retributive in part | 27
Mereological nihilism (composite objects don't exist) | 26
AI will not be as capable as humans this century, and will certainly not give us genuine existential concerns | 23
China not having real democracy does more good than harm | 23
Reincarnation is a real phenomenon | 22
Dentistry is mostly wasted effort. | 22
Moral Hazard isn’t real, and all the purported instances of it can be chalked up to coincidence or confounding variables | 22
Governments should not support parents for having children that they cannot take care of | 21
Donald Trump would have been a better president than Joe Biden | 20
Mass surveillance (security cameras everywhere) has more positives than negatives | 19
Future generations will say that on balance the world reacted appropriately after learning that fossil fuels cause climate change. That the balance between addressing the problem and slowing economies was just about right. | 14
SBF didn't intentionally commit fraud | 13
The next American moon landing will be faked | 12
Humans don't have free will. | 11
AI art is better than human art | 9
Souls/spirits are real and can appear to the living sometimes | 8
Communism just wasn't implemented well, next time it will work | 8
The human race should voluntarily choose to go extinct via nonviolent means (antinatalism). | 7
The first American moon landing was faked | 6
LK-99 room temp, ambient pressure superconductivity pre-print will replicate before 2025 | 5
Astrology is actually true. | 5
Option | Votes
YES | 3323
NO | 1983
Option | Probability (%)
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 20
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 10
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 8
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 7
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 6
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 6
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 6
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 6
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 6
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 4
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 4
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 4
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 3
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 2
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 2
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 1
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 1
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 1
Option | Votes
YES | 279
NO | 58
Option | Votes
YES | 1259
NO | 922
Option | Probability (%)
No: no statistically significant reduction | 53
Yes: 0%-40% reduction | 22
Not tested by question close | 18
Yes: > 40% reduction | 7
Option | Probability (%)
Other | 29
Democratic | 19
Republican | 19
No election in 2100 (e.g. because the US ceases exist) | 19
Independent / no party preference | 13
Option | Votes
YES | 183
NO | 127
Option | Votes
YES | 155
NO | 86
Option | Votes
YES | 200
NO | 76
Option | Votes
YES | 105
NO | 95

