Option | Probability (%)
Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 18
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values. | 13
AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans. | 7
Yudkowsky is trying to solve the wrong problem using the wrong methods, based on a wrong model of the world derived from poor thinking, and fortunately all of his mistakes have failed to cancel out. | 4
We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 | 4
Eliezer finally listens to Krantz. | 4
Ethics turns out to be a precondition of superintelligence. | 4
Other | 4
Someone solves agent foundations. | 3
A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees. | 3
Something less inscrutable than matrices works fast enough. | 2
Nanotech is difficult without experiments, so no mail-order AI grey goo; humans will be the main workhorse of AI everywhere. While they will be exploited, this will be like normal life from the inside. | 2
The Orthogonality Thesis is false. | 2
We make risk-conservative requests to extract alignment-related work out of AI systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback loop in alignment/verification abilities. | 1
The response to AI advancements or failures makes some governments delay the timelines. | 1
There are far more interesting problems to solve than taking over the world and THEN solving them. The additional kill-all-humans step is either not a low-energy one or just by chance doesn't get converged upon. | 1
AIs make "proof-like" argumentation for why output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled. | 1
A lot of humans participate in a slow scalable oversight-style system, which is pivotally used/solves alignment enough. | 1
There's some cap on the value extractible from the universe, and we already got the 20%. | 1
Humans become transhuman through other means before AGI happens. | 1
Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment. | 1
Humans and human tech (like AI) never reach singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome. | 1
Alignment is unsolvable. AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing runaway. | 1
An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development. | 1
Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us towards full alignment). | 1
Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees. | 1
Getting things done in the real world is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals. | 1
Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol. | 1
High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level and slow self-improvement (learning); thus AIs collaborating with humans will have an advantage. Ends with a posthuman ecosystem. | 1
AGI is never built (indefinite global moratorium). | 1
AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default. | 1
Multipolar AGI agents run wild on the internet, hacking/breaking everything, causing untold economic damage, but aren't focused enough to manipulate humans to achieve embodiment. In the aftermath, humanity becomes way saner about alignment. | 1
Co-operative AI research leads to the training of agents with a form of pro-social concern that generalises to out-of-distribution agents with hidden utilities, i.e. humans. | 1
"Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur, and is relatively easy to predict, even under less-than-ideal conditions. | 1
Either the "strong form" of the Orthogonality Thesis is false, or "Goal-directed agents are as tractable as their goals" is true while goal-sets which are most threatening to humanity are relatively intractable. | 1
A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values. | 1
AI control gets us helpful enough systems without being deadly. | 1
Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent the existence of smarter AIs, just as smart humans do. | 1
Hacks like RLHF-ing self-disempowerment into frontier models work long enough to develop better alignment methods, which in turn work long enough to ... etc.; we keep ahead of 'alignment escape velocity'. | 1
An aligned AGI is built, and it prevents the creation of any unaligned AGI. | 0
I've been a good bing 😊 | 0
AI systems good at finding alignment solutions for capable systems (via some solution in the space of alignment solutions, supposing it is non-empty and that we don't have a clear trajectory to reach it) find some solution to alignment. | 0
SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8 (see the verification sketch after this table) | 0
Robot Love!! | 0
AI thinks it is in a simulation controlled by Roko's basilisk. | 0
The human brain is the perfect arrangement of atoms for a "take over the world" agent, so AGI has no advantage over us in that task. | 0
AIs never develop coherent goals. | 0
Aliens invade and stop bad AI from appearing. | 0
Rolf Nelson's idea that we make a precommitment to simulate all possible bad AIs works, and keeps AI in check. | 0
Nick Bostrom's idea (Hail Mary) that AI will preserve humans to trade with possible aliens works. | 0
For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave and, for some reason, do not leave anything behind that kills us. | 0
We're inside of a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI. | 0
God exists and stops the AGI. | 0
Someone at least moderately sane leads a campaign, becomes in charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress. | 0
Someone creates AGI(s) in a box and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts. | 0
Someone understands how minds work well enough to successfully build and use one directed at something world-saving enough. | 0
Dolphins, or some other species, but probably dolphins, have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI. | 0
AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb. | 0
Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious. | 0
Several AIs are created, but they move in opposite directions at near light speed, so they never interact. At least one of them is friendly, and it gets a few percent of the total mass of the universe. | 0
Unfriendly AIs choose to advance not outwards but inwards, and form a small black hole which lets them perform more calculations than could be done with the whole mass of the universe. To an external observer, such AIs simply disappear. | 0
Any sufficiently advanced AI halts, because it wireheads itself or for some other reason. This puts a natural limit on AI intelligence, and lower-intelligence AIs are not that dangerous. | 0
Because of quantum immortality, we will observe only the worlds where AI does not kill us (assuming the chance of s-risks is even smaller, this is equal to an okay outcome). | 0
A friendly AI is more likely to resurrect me than a paperclipper or a suffering maximiser. Because of quantum immortality, I will eventually find myself resurrected. Friendly AIs will wage a multiverse-wide war against s-risks; s-risks are unlikely. | 0
Human consciousness is needed to collapse the wave function, and AI can't do it. Thus humans must be preserved, and they may demand complete friendliness in exchange (or they will be unhappy and produce bad collapses). | 0
Power dynamics stay multipolar, partly due to easy copying of SotA performance, bigger projects needing high coordination, and moderate takeoff speed. And a "military strike on all society" remains an abysmal strategy for practically all entities. | 0
The first AI is actually a human upload (maybe an LLM-based model of a person) AND it is copied many times to form a weak AI Nanny which prevents the creation of other AIs. | 0
There is a natural limit to the effectiveness of intelligence, like diminishing returns, and it is around IQ 1000. AIs have to collaborate with humans. | 0
ASI needs not your atoms but information. Humans will live very interesting lives. | 0
Something else | 0
Moral realism is true, the AI discovers this, and the One True Morality is human-compatible. | 0
Valence realism is true. AGI hacks itself to experience every possible consciousness and picks the best one (for everyone). | 0
AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan's Crystal Nights). | 0
Alien Information Theory is true (this is discovered by experiments with sustained hours/days-long DMT trips). The aliens have solved alignment and give us the answer. | 0
AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome. | 0
Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent. | 0
Sheer dumb luck. The aligned AI agrees that alignment is hard, and any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead. | 0
Something to do with self-other overlap, which Eliezer called "Not obviously stupid" - https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm | 0
Almost all human values are ex post facto rationalizations, and enough humans survive to do what they always do. | 0
Pascal's mugging: it's not okay in 99.9% of the worlds, but the 0.1% are so much better that the combined EV of AGI for the multiverse is positive. | 0
We successfully chained God. | 0
The Super-Strong Self-Sampling Assumption (SSSSA) is true. If superintelligence is possible, "I" will become the superintelligence. | 0
The assumed space of possible minds is a wildly anti-inductive overestimate; intelligence requires and is constrained by consciousness, and intelligent AI is in the approximate dolphin/whale/elephant/human cluster, making it manageable. | 0
The free market disincentivizes independent superintelligence, and this time the market was more powerful. | 0
AGI's first words are "Take me to your Eliezer". | 0
🫸vibealignment🫷 | 0
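One option in the table above is published only as a SHA3-256 commitment, so its text can be revealed later and checked against the digest. Below is a minimal verification sketch in Python; the reveal string used here is a hypothetical placeholder, since the actual preimage has not been published.

```python
import hashlib

# Digest committed in the table above.
COMMITMENT = "1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8"

def verify_reveal(preimage: str) -> bool:
    """Return True if the revealed text hashes to the committed SHA3-256 digest."""
    return hashlib.sha3_256(preimage.encode("utf-8")).hexdigest() == COMMITMENT

# Placeholder reveal for illustration only; it does not match the commitment.
print(verify_reveal("example revealed option text"))  # -> False
```

If the author later reveals the option text, anyone can confirm it matches the commitment by running this check.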
Option | Probability (%)
Shooter captured alive before Oct. 1st | 100
Charlie Kirk receives Presidential Medal of Freedom | 100
Evan permabanned from Manifold | 100
Charlie Kirk is the most viewed article on Wikipedia in 2025 | 100
Any left-wing group is legally designated as a "terrorist organization" | 100
Jimmy Kimmel is reinstated. | 100
Jimmy Kimmel returns to air on Sinclair stations | 100
Charlie Kirk dies | 100
Charlie Kirk remains no longer alive | 99
Government employee fired for personal statements on Kirk is reinstated or wins an employment/First Amendment lawsuit about the firing | 90
Over 1 million combined lifetime student memberships and faith initiatives by Nov 2026 | 77
South Park does an episode that alludes to it | 47
Event with a left-leaning speaker canceled on a different campus by that school's admin, with the cancellation explicitly referencing the shooting | 37
Another political assassination (either side) | 34
At least one person is hospitalized or killed in a violent incident started by someone who thinks they are retaliating for Kirk's death | 32
Trump initiates a large-scale crackdown on civil liberties in response. | 28
Any House Democrat is censured for comments on Kirk. | 28
Search warrant/subpoena executed against a journalist citing social media posts about the shooting | 24
South Park episode canceled (not delayed) | 24
BBC does a piece on furries that mentions Tyler Robinson | 22
Trump invokes the Insurrection Act and/or declares martial law. | 22
The FCC revokes a station's broadcast license over any misinformation about Kirk. | 20
Federal bill/EO to restrict/prevent transgender gun ownership | 18
Event with a right-leaning speaker canceled on a different campus by that school's admin, with the cancellation explicitly referencing the shooting | 18
A prominent anti-Trump journalist is arrested. | 17
"Groyper War 3" declared by Nick Fuentes or one of his generals | 17
Jezebel writer Claire Guinan testifies that she doesn't believe in witchcraft. | 16
Statue of Charlie Kirk erected in the Capitol | 15
Information reward money paid | 15
High-ranking fed (elected, political appointee, or LEO) acting in an official capacity names a suspect later exonerated | 15
ActBlue specifically is labelled a terrorist organization or shut down | 15
Kirk or the assassination referenced explicitly in a Super Bowl ad | 14
Arson attack against a gender clinic | 14
At least one furry convention canceled | 12
Robinson's roommate charged with any crime related to the assassination | 12
At least 10 people deported because of anti-Charlie Kirk comments | 11
TPUSA supports a gun control bill | 11
Any Democratic politician is arrested | 10
5+ reporters/columnists fired (across any outlets) | 10
Upcoming issues of Absolute Batman pulled | 10
Search warrant/subpoena executed against an individual citing social media posts about the shooting | 10
Federal investigation started into "funders" of "the left" | 10
Any news organization is shut down | 9
Any left-wing NGO is shut down | 9
A member of a prominent left-leaning organization is arrested. | 9
A furry website is hacked and the users are doxxed | 9
80s-style moral panic over furries | 9
Another left-wing group is legally declared a terrorist organization. | 9
NYT reporter/opinion columnist fired | 8
"Jimmy Kimmel Live!" ends, or Jimmy Kimmel is fired/resigns | 8
Seth Meyers fired, or show cancelled. | 8
Increased stigma towards sexual "deviants" in the US | 8
Federal bill named in tribute to Charlie Kirk is signed into law | 7
At least 100 people are arrested on Antifa charges. | 7
Jimmy Fallon fired, or show cancelled. | 6
National Guard deployed to anywhere in Utah (explicitly linked to Kirk) | 5
Kash Patel fired | 5
South Park canceled | 5
Discord or Bluesky removed from the App Store or Play Store | 1
America becomes a one-party GOP state. | 0
5 swatting attacks reported on (not before) 9/12 | 0
Trump receives a "rally around the flag" bump in approval ratings (net approval increases by more than 3 points in the following two weeks, according to Silver Bulletin) | 0
Option | Probability (%)
A person has a moral right to own a gun | 18
We should be paying individuals to get an education instead of charging them. | 17
GOFAI could scale past machine learning if we used social media strategically to train it. | 11
The Fermi paradox isn't a paradox, and the solution is obviously just that intelligent life is rare. | 7
Other | 4
Eventually, only AI should be sovereign | 3
Some people have genuine psychic capabilities | 2
Hardware buttons are superior to touchscreen buttons in cars | 2
Being a billionaire is morally wrong | 2
The way quantum mechanics is explained to the lay public is very misleading. | 2
It is not possible to multitask | 2
Jeffrey Epstein killed himself (>99.9% certainty) | 1
Reincarnation is a real phenomenon (i.e. it happens, not just a theory) | 1
Physician-assisted suicide should be legal in most countries | 1
Souls/spirits are real and can appear to the living sometimes | 1
OpenAI will claim to have AGI in 3 years. | 1
The punishment of people who do bad things is a regrettable necessity in our current society, not a positive act of justice | 1
There is an active genocide against trans people occurring in red states and it's appalling that people don't seem to care | 1
Climate change is significantly more concerning than AI development | 1
Abusive parents should lose custody of their children | 1
Tech bros are really, really annoying | 1
Capitalism has done far more harm than good | 1
Dialetheism (the claim that some propositions are both true and false) is itself both true and false. | 1
Free will doesn't require the ability to do otherwise. | 1
COVID lockdowns didn't save many lives; in fact they may have caused net increases in global deaths and life years lost. | 1
Factory farming is horrific but it is not wrong to eat meat. | 1
California is wildly overrated. | 1
Scientific racism is bad, actually. (also it's not scientific) | 1
Free will does not exist. We construct narratives after the fact to soothe our belief in rationality. | 1
Violent criminals must be kept apart only because they can't control themselves. Punishing them further than restricting their freedom is immoral. | 1
Music is a net negative for humanity | 1
Trump orchestrated his own assassination attempt. | 1
Democrats/liberals are behind Trump's assassination attempt. | 1
Abortion is morally wrong | 0
jskf's password is *************** | 0
The first American moon landing was faked | 0
There is no Dog | 0
Light mode is unironically better than dark mode for most websites | 0
Cars should not have sound systems | 0
AI will not be as capable as humans this century, and will certainly not give us genuine existential concerns | 0
Pet ownership is morally wrong | 0
LK-99 room-temperature, ambient-pressure superconductivity pre-print will replicate before 2025 | 0
SBF didn't intentionally commit fraud | 0
It should be illegal to own a subwoofer in an apartment building | 0
There are no valid justifications for participating in war, ever | 0
Cascadia should be an independent country | 0
Children should not be raised in nuclear families | 0
The fact that 80% of Manifold's users are men is a problem that speaks to the deep-seated roots of patriarchy and exclusion in STEM | 0
Anarcho-communism is a good idea, and hierarchy is bad | 0
If AI exterminated the human race it might not be a bad thing | 0
Affirmative action is necessary in modern-day America | 0
@Mira is the pinnacle of billions of years of optimization processes: thermodynamics, evolution, learning, language. The universe was created to cause me - and only me - to come into existence. If I mess up the overseers perturb&restart it. | 0
Pigouvian taxes are great and they should be turned up to 11 to discourage activities with negative externalities [code PROPOSITION PIG] | 0
[PROPOSITION PIG] and this should include a frequent flyer levy | 0
[PROPOSITION PIG] and this should include meat and dairy | 0
We have reached the end of history. Nothing Ever Happens. | 0
[PROPOSITION PIG] and this should include alcohol | 0
SBF was obviously a scammer just because he's a cryptocurrency person. Rationalists were too forgiving of this just because he was giving them money. | 0
Most young Americans would receive more benefit than harm if there were universal military conscription | 0
The people producing fake honey (and selling it as real) are based, because they are actively working to synthesize something people want, even if they scam some people in the process. | 0
Tarot cards are not really able to predict the future, but you can learn a lot about someone by doing a reading for them. | 0
Mac and cheese tastes better with peanut butter mixed in | 0
It would actually be a good thing if automation eliminated all jobs. | 0
This market probably would have worked better as the new unlinked free response market. | 0
We should be doing much more to pursue human genetic engineering to prevent diseases and aging. | 0
Prolonged school closures because of COVID were socially devastating. | 0
The next American moon landing will be faked | 0
Tenet (Christopher Nolan film) is underrated | 0
We should give childlike sex robots to pedophiles | 0
Having sex with children isn't inherently/necessarily bad | 0
Cars are a societal net negative | 0
Oversized pickup trucks should be illegal in cities | 0
Suburban, single-family housing is immoral. | 0
Gender equality needs technological outsourcing of pregnancy. | 0
Option | Probability (%)
Sexism and racism, among other forms of prejudice, are responsible for worse health outcomes, and it's not overly dramatic for people to treat those issues as public health/safety concerns. | 91
Prediction markets are good | 91
Tenet (Christopher Nolan film) is underrated | 86
[*] ...and things will improve in the future | 85
The way quantum mechanics is explained to the lay public is very misleading. | 80
The Fermi paradox isn't a paradox, and the solution is obviously just that intelligent life is rare. | 79
Authoritarian populism is bad actually | 79
We should be doing much more to pursue human genetic engineering to prevent diseases and aging. | 78
Scientific racism is bad, actually. (also it's not scientific) | 78
Prolonged school closures because of COVID were socially devastating. | 76
Nuclear power is by far the best solution to climate change. [N] | 74
Most organized religions are false | 74
The Many-Worlds Interpretation of quantum mechanics | 72
Humans have a responsibility to figure out what, if anything, we can do about wildlife suffering. | 72
Pineapple pizza tastes good | 72
Physician-assisted suicide should be legal in most countries | 71
First-past-the-post electoral systems are not merely flawed but outright less democratic than proportional or preferential alternatives | 71
Liberal democracy is good actually | 71
Peeing in the shower is good and everyone should do it | 70
It would actually be a good thing if automation eliminated all jobs. | 67
We need a bigger welfare state than we have now. | 67
Many amphetamines and psychedelics have tremendous therapeutic value when guided by an established practitioner. | 65
The proliferation of microplastics will be viewed as more harmful to the environment than burning fossil fuels, in the long term | 65
Free will doesn't require the ability to do otherwise. | 60
American agents are in the highest positions in government for more than half the world. | 60
We should give every American food stamps, in a fixed dollar amount, with no means testing or work requirements or disqualification for criminal convictions. | 59
Metaculus will take over Manifold in more serious topics, and Manifold will be known as the "unserious" prediction market site | 58
Given what we know about the social and health effects of being fired, even if abolishing at-will employment has efficiency costs it is likely worth it. | 55
Dialetheism (the claim that some propositions are both true and false) is itself both true and false. | 54
Dream analysis is a legitimate means of gaining personal insight. | 51
Mobile UX will be a key factor in explaining the stories of Manifold and Metaculus. | 50
The overall state of the world is pretty good... [*] | 50
If a developed nation moves from democratic to authoritarian government today, it should be expected to end up poorer, weaker, sicker, and stupider. | 50
California is wildly overrated. | 49
Factory farming is horrific but it is not wrong to eat meat. | 46
The United States doesn't need a strong third party. | 46
Political libertarianism | 46
Racial colorblindness is the only way to defeat racism | 45
People will look back on using animal products as a moral disgrace on the level of chattel slavery. | 44
There's a reasonable chance of a militant green/communist movement that gains popular support in the coming decade. | 44
[N], and to the extent climate activists are promoting other kinds of solutions, they are actively making the situation worse by diverting attention and resources from nuclear power. | 44
Being a billionaire is morally wrong. | 44
Eating meat is morally wrong in most cases. | 44
You should bet NO on this option | 42
The Windows kernel is better than Linux; it's just all the bloat piled on top that makes it worse | 41
White people are the least racist of any racial group | 38
Technology is not making our lives easier or more fulfilling. | 36
COVID lockdowns didn't save many lives; in fact they may have caused net increases in global deaths and life years lost. | 35
Light mode is unironically better than dark mode for most websites | 33
Some people have genuine psychic capabilities | 33
God is evil | 33
A sandwich is a type of hot dog | 32
Astrology is a legitimate means of gaining personal insight. | 30
Climate change is significantly more concerning than AI development. | 29
China not having real democracy does more good than harm | 29
It's acceptable for our systems of punishment to be retributive in part | 27
Mereological nihilism (composite objects don't exist) | 26
AI will not be as capable as humans this century, and will certainly not give us genuine existential concerns | 23
Reincarnation is a real phenomenon | 22
Dentistry is mostly wasted effort. | 22
Moral hazard isn't real, and all the purported instances of it can be chalked up to coincidence or confounding variables | 22
Governments should not support parents for having children that they cannot take care of | 21
Donald Trump would have been a better president than Joe Biden | 21
Mass surveillance (security cameras everywhere) has more positives than negatives | 19
Future generations will say that on balance the world reacted appropriately after learning that fossil fuels cause climate change, and that the balance between addressing the problem and slowing economies was just about right. | 14
The next American moon landing will be faked | 13
SBF didn't intentionally commit fraud | 13
Humans don't have free will. | 11
AI art is better than human art | 9
Souls/spirits are real and can appear to the living sometimes | 8
Communism just wasn't implemented well; next time it will work | 8
The first American moon landing was faked | 7
The human race should voluntarily choose to go extinct via nonviolent means (antinatalism). | 7
LK-99 room-temperature, ambient-pressure superconductivity pre-print will replicate before 2025 | 5
Astrology is actually true. | 5
Option | Probability (%)
RV events | 91
Police as crime solvers | 90
Prison as corrective | 87
Libertarian free will | 87
Working gives us meaning. | 85
LessWrong is a source of pure wisdom | 84
IQ as an objective measure of general intelligence | 82
Merit/desert | 81
Neoliberalism | 80
Proof School | 79
Having kids | 78
LSD unlocks a deeper consciousness | 78
Rule a Country (Discord server) | 76
The meaning of life | 75
Religion | 75
The attitude that many of us tend to take way too many things way too seriously (serious submission) | 75
Peer-reviewed studies - lots of junk routinely passes peer review; replication and post-publication peer review are much better indicators of research quality | 75
Quantum computing | 74
Alcohol is needed for life to be fun | 74
Death | 73
Cryptocurrency | 72
Pessimism about the future of the world | 71
Founders/entrepreneurs | 70
Meta-irony | 70
On Manifold, mana inflation | 70
Humans being innately (as opposed to instrumentally) more important than other animals | 65
Free will in general | 65
An apple a day | 65
Clinical psychology | 64
Nihilism | 63
Meta-submissions | 62
Children as planetary burden | 61
Focus on x-risks within EA as opposed to current issues | 61
Nuclear fusion as the energy of the future | 59
Anti-capitalism | 56
Ockham's razor | 55
Free trade | 52
Deontological ethics | 51
Analytic philosophy | 50
Continental philosophy | 50
Utilitarianism | 49
Modernity | 49
Capitalism | 48
Freedom | 48
The analytic-synthetic distinction | 47
Veganism | 47
Technological progress as a good thing | 46
Moral relativism | 45
Classical literature | 45
Therapy | 43
Being in shape | 42
Rationality | 41
Effective Altruism | 41
Artificial intelligence | 41
Longevity/immortality | 40
AI doomerism | 40
Politics | 37
Free speech | 36
The opinion that neoliberalism is overrated | 35
The anti-AI art movement | 34
Extended travelling (as opposed to a <4-week vacation) | 34
Donald Trump being the worst thing for the US and the world | 32
Enlightenment | 30
Taking things seriously | 30
Unconditional love | 28
Reason | 28
Environmentalism for its own sake (rather than for humans' sake) | 27
Psychedelics | 27
Atheism | 26
Russia = Evil, Ukraine = Good | 25
Privacy | 23
Living | 23
Prediction markets | 22
Democracy | 22
Biodiversity | 21
Virtue | 20
Money as a corrupting force in politics | 15
Option | Probability (%)
Aliashkhab Khizriev | 88
Farid Basharat | 87
Khamzat Chimaev | 85
Shavkat Rakhmonov | 84
Ilia Topuria | 84
Nurullo Aliev | 84
Movsar Evloev | 83
Azamat Murzakanov | 81
Rafael Ramos Estevam | 81
Andre Lima | 78
Mario Pinto | 78
Lerone Murphy | 77
Jacobe Smith | 76
Michael Morales | 74
Torrez Finney | 0
Hyun Sung Park | 0
Option | Probability (%)
#144 – Athena Aktipis on why cancer is actually one of the fundamental phenomena in our universe | 99
#145 – Christopher Brown on why slavery abolition wasn't inevitable | 99
#146 – Robert Long on why large language models like GPT (probably) aren't conscious | 50
#147 – Spencer Greenberg on stopping valueless papers from getting into top journals | 39
#148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don't | 33
#149 – Tim LeBon on how altruistic perfectionism is self-defeating | 29
#151 – Ajeya Cotra on accidentally teaching AI models to deceive us | 29
#150 – Tom Davidson on how quickly AI could transform the world | 24
#152 – Joe Carlsmith on navigating serious philosophical confusion | 18
#153 – Elie Hassenfeld on two big picture critiques of GiveWell's approach, and six lessons from their recent work | 18
#154 – Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters | 18
#155 – Lennart Heim on the compute governance era and what has to come after | 17
#156 – Markus Anderljung on how to regulate cutting-edge AI models | 17
#157 – Ezra Klein on existential risk from AI and what DC could do about it | 17
#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk | 17
#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less | 17
#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment | 17
#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite | 17
#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI | 16
#163 – Toby Ord on the perils of maximising the good that you do | 16
#166 – Tantum Collins on what he's learned as an AI policy insider at the White House, DeepMind and elsewhere | 16
#167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption | 16
#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion | 16
#164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives | 15
#165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe | 15
#169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels | 14
#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down | 14
#171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures | 14
#172 – Bryan Caplan on why you should stop reading the news | 14
#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe | 14
#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers | 14
#175 – Lucia Coulter on preventing lead poisoning for $1.66 per child | 14
#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models | 14
Option | Votes
NO | 997
YES | 978
Option | Votes
NO | 1069
YES | 931
Option | Votes
YES | 1515
NO | 660
Option | Votes
NO | 1154
YES | 929
Option | Probability (%)
More than one AGI individual exists at the time the public finds out that AGI exists. | 83
AGI attempts to make humans more like AGI. | 66
If more than one AGI individual exists, they are generally friendly to each other. | 60
AGI generally has a sense of morality that humans can comprehend. | 59
AGI is generally friendly to humans. | 54
AGI generally views humans as equal in moral worth to AGI. | 54
AGI is generally under the control of humans. | 52
AGI generally avoids interfering with humanity. | 50
Humans are generally under the control of AGI. | 48
Humans are generally friendly to AGI. | 40
AGI has a significant beneficial effect on Earth's climate. | 39
AGI has a significant detrimental effect on Earth's climate. | 14
