Option | Probability
Elon Musk | 32
Bill Gates | 15
Jeff Bezos | 12
Other | 9
Barack Obama | 4
Mr. Beast | 3
Robert Caro | 3
Donald Trump | 2
Kamala Harris | 2
JD Vance | 2
Sam Altman | 1
Gwern Branwen | 1
roon | 1
Geoffrey Hinton | 1
Leonardo DiCaprio | 1
Ilya Sutskever | 1
Jensen Huang | 1
Peter Thiel | 1
No one, the tweet was a joke or intended to attract advertisers without referring to someone specific | 1
Sarah Paine | 1
Satya Nadella | 1
Justin Trudeau | 0
Brian Shaw (_biggest_ guest yet?) | 0
Narendra Modi | 0
Xi Jinping | 0
Dalai Lama | 0
Buffett | 0
Taylor Swift | 0
Joe Biden | 0
LeBron James | 0
GPT-5 | 0
Bill Clinton | 0
A basketball player | 0
Vladimir Putin | 0
Benjamin Netanyahu | 0
Jesus Christ | 0
Rona Wang | 0
Jose Luis Ricon | 0
Kettner Griswold | 0
James Koppel | 0
Sam Bankman-Fried | 0
Nancy Pelosi | 0
Rishi Sunak | 0
Keir Starmer | 0
Satoshi Nakamoto | 0
Volodymyr Zelenskyy | 0
Jimmy Carter | 0
George W. Bush | 0
Al Gore | 0
Michael Jackson | 0
your mom | 0
Mitt Romney | 0
one or both of his parents | 0
A new OpenAI AI model not called "GPT-5" | 0
Connor Duffy | 0
MBS | 0
Pope Francis | 0
Scott Alexander | 0
The Mountain (Icelandic strongman) | 0
RFK Jr. | 0
Sydney Sweeney | 0
growing_daniel | 0
greg16676935420 | 0
Deadpool | 0
Terence Tao | 0
Shaq | 0
Oprah Winfrey | 0
Yoshua Bengio | 0
Sundar Pichai | 0
Scarlett Johansson | 0
Paul McCartney | 0
[duplicate] | 0
Neel Nanda | 0
King Charles | 0
Kim Jong Un | 0
Gavin Newsom | 0
Royal Palace | 0
[cancelled option] | 0
Daniel Yergin | 0
Peter Singer | 0
Gabe Newell | 0
Neil Gorsuch | 0
Stephen Breyer | 0
Dmitry Medvedev | 0
JK Rowling | 0
Shrek | 0
Sam Hyde | 0
[invalid answer] Multiple people e.g. a team from OpenAI | 0
Marques Brownlee | 0
Vivek Ramaswamy | 0
Donald Trump Jr. | 0
Ben Shindel | 0
Javier Milei | 0
Dylan Patel | 0
Joe Rogan | 0
Marc Andreessen | 0
Tim Cook | 0
Mike Tyson | 0
Jake Paul | 0
Matt Gaetz | 0
Option | Probability
Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 18
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values. | 13
AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans. | 7
Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking, and fortunately all of his mistakes have failed to cancel out | 4
We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 | 4
Eliezer finally listens to Krantz. | 4
Ethics turns out to be a precondition of superintelligence | 4
Other | 4
Someone solves agent foundations | 3
A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 3
Something less inscrutable than matrices works fast enough | 2
Nanotech is difficult without experiments, so no mail-order AI Grey Goo; humans will be the main workhorse of AI everywhere. While they will be exploited, this will feel like normal life from the inside | 2
Orthogonality Thesis is false. | 2
We make risk-conservative requests to extract alignment-related work out of AI systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback loop in alignment/verification abilities. | 1
The response to AI advancements or failures makes some governments delay the timelines | 1
Far more interesting problems to solve than take over the world and THEN solve them. The additional kill-all-humans step is either not a low-energy one or just by chance doesn't get converged upon. | 1
AIs make "proof-like" argumentation for why output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof-steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled. | 1
A lot of humans participate in a slow scalable oversight-style system, which is pivotally used/solves alignment enough | 1
There's some cap on the value extractable from the universe and we already got the 20% | 1
Humans become transhuman through other means before AGI happens | 1
Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment. | 1
Humans and human tech (like AI) never reach singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome | 1
Alignment is unsolvable. AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing run-away. | 1
An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development. | 1
Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us towards full alignment) | 1
Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 1
Getting things done in the real world is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals. | 1
Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol | 1
High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level and slow self-improvement (learning); thus AIs collaborating with humans will have an advantage. Ends with a posthuman ecosystem. | 1
AGI is never built (indefinite global moratorium) | 1
AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default | 1
Multipolar AGI agents run wild on the internet, hacking/breaking everything, causing untold economic damage, but aren't focused enough to manipulate humans to achieve embodiment. In the aftermath, humanity becomes way saner about alignment. | 1
Co-operative AI research leads to the training of agents with a form of pro-social concern that generalises to out-of-distribution agents with hidden utilities, i.e. humans. | 1
"Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur, and is relatively easy to predict, even under less-than-ideal conditions. | 1
Either the "strong form" of the Orthogonality Thesis is false, or "Goal-directed agents are as tractable as their goals" is true while goal-sets which are most threatening to humanity are relatively intractable. | 1
A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values. | 1
AI control gets us helpful enough systems without being deadly | 1
Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent the existence of smarter AIs, just as smart humans do. | 1
Hacks like RLHF-ing self-disempowerment into frontier models work long enough to develop better alignment methods, which in turn work long enough to ... etc.; we keep ahead of 'alignment escape velocity' | 1
An aligned AGI is built, and the aligned AGI prevents the creation of any unaligned AGI. | 0
I've been a good bing 😊 | 0
AI systems good at finding alignment solutions for capable systems (via some solution in the space of alignment solutions, supposing it is non-null, and that we don't have a clear trajectory to get to) find some solution to alignment. | 0
SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8 | 0
Robot Love!! | 0
AI thinks it is in a simulation controlled by Roko's basilisk | 0
The human brain is the perfect arrangement of atoms for a "take over the world" agent, so AGI has no advantage over us in that task. | 0
AIs never develop coherent goals | 0
Aliens invade and stop bad AI from appearing | 0
Rolf Nelson's idea that we make a precommitment to simulate all possible bad AIs works, and keeps AI in check. | 0
Nick Bostrom's idea (Hail Mary) that AI will preserve humans to trade with possible aliens works | 0
For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave, and, for some reason, do not leave anything behind that kills us. | 0
We're inside of a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI. | 0
God exists and stops the AGI | 0
Someone at least moderately sane leads a campaign, becomes in charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress. | 0
Someone creates AGI(s) in a box, and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts. | 0
Someone understands how minds work enough to successfully build and use one directed at something world-savingly enough | 0
Dolphins, or some other species, but probably dolphins, have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI. | 0
AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb. | 0
Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious. | 0
Several AIs are created but they move in opposite directions at near light speed, so they never interact. At least one of them is friendly and it gets a few percent of the total mass of the universe. | 0
Unfriendly AIs choose to advance not outwards but inwards, and form a small black hole which helps them to perform more calculations than could be done with the whole mass of the universe. For an external observer such AIs just disappear. | 0
Any sufficiently advanced AI halts because it wireheads itself or halts for some other reason. This puts a natural limit on AI's intelligence, and lower-intelligence AIs are not that dangerous. | 0
Because of quantum immortality we will observe only the worlds where AI will not kill us (assuming that s-risk chances are even smaller, this is equal to an okay outcome). | 0
Friendly AI is more likely to resurrect me than a paperclipper or suffering maximiser. Because of quantum immortality I will find myself eventually resurrected. Friendly AIs will wage a multiverse-wide war against s-risks; s-risks are unlikely. | 0
Human consciousness is needed to collapse the wave function, and AI can't do it. Thus humans should be preserved and they may require complete friendliness in exchange (or they will be unhappy and produce bad collapses) | 0
Power dynamics stay multi-polar. Partly easy copying of SotA performance, bigger projects need high coordination, and moderate takeoff speed. And "military strike on all society" remains an abysmal strategy for practically all entities. | 0
First AI is actually a human upload (maybe an LLM-based model of a person) AND it is copied many times to form a weak AI Nanny which prevents the creation of other AIs. | 0
There is a natural limit to the effectiveness of intelligence, like diminishing returns, and it is on the level of IQ=1000. AIs have to collaborate with humans. | 0
ASI needs not your atoms but information. Humans will live very interesting lives. | 0
Something else | 0
Moral Realism is true, the AI discovers this and the One True Morality is human-compatible. | 0
Valence realism is true. AGI hacks itself to experiencing every possible consciousness and picks the best one (for everyone) | 0
AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan's Crystal Nights). | 0
Alien Information Theory is true (this is discovered by experiments with sustained hours/days-long DMT trips). The aliens have solved alignment and give us the answer. | 0
AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome. | 0
Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent. | 0
Sheer Dumb Luck. The aligned AI agrees that alignment is hard, any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead. | 0
Something to do with self-other overlap, which Eliezer called "Not obviously stupid" - https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm | 0
Almost all human values are ex post facto rationalizations and enough humans survive to do what they always do | 0
Pascal's mugging: it's not okay in 99.9% of the worlds but the 0.1% are so much better that the combined EV of AGI for the multiverse is positive | 0
We successfully chained God | 0
The Super-Strong Self Sampling Assumption (SSSSA) is true. If superintelligence is possible, "I" will become the superintelligence. | 0
The assumed space of possible minds is a wildly anti-inductive overestimate; intelligence requires and is constrained by consciousness, and intelligent AI is in the approximate dolphin/whale/elephant/human cluster, making it manageable | 0
The free market disincentivizes independent superintelligence, and this time the market was more powerful | 0
AGI's first words are "Take me to your Eliezer" | 0
🫸vibealignment🫷 | 0
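One option in the table above is a bare SHA3-256 digest, which reads as a hash commitment: the author can later reveal the committed text, and anyone can check that it hashes to the published digest. A minimal verification sketch in Python follows; it assumes the committed text was hashed as plain UTF-8 with no salt, which the option does not actually specify.

```python
import hashlib

# Digest published in the market option above.
COMMITMENT = "1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8"

def verify_commitment(revealed_text: str) -> bool:
    """Check whether revealed_text is the preimage of the published digest.

    Assumes UTF-8 encoding and no salt; the market option does not say
    how the committed text was serialized, so treat this as a sketch.
    """
    digest = hashlib.sha3_256(revealed_text.encode("utf-8")).hexdigest()
    return digest == COMMITMENT

# Hypothetical usage, once a preimage is revealed:
# print(verify_commitment("...the revealed option text..."))
```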
Option | Probability
Red Dead Redemption 2 | 97
Clair Obscur: Expedition 33 | 81
Halo: The Master Chief Collection | 72
Call of Duty: Black Ops 7 | 70
Assassin's Creed Mirage | 69
Assassin's Creed Shadows | 69
Crimson Desert | 64
Microsoft Flight Simulator 2024 | 60
Grand Theft Auto VI | 50
Grand Theft Auto IV | 48
Metal Gear Solid Delta: Snake Eater | 48
Tom Clancy's Rainbow Six Siege | 43
Grand Theft Auto V | 28
Tom Clancy's The Division | 24
Option | Votes
NO | 15156
YES | 6598
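For reference, these tallies imply a YES share of 6598 / (15156 + 6598) ≈ 30%, with NO at roughly 70%.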
Option | Probability
It won't be available for purchase in 2024 | 100
Tracks sleep | 97
It won't have a USB-C port | 94
Will support wireless charging | 94
Improved health tracking if you pair it with an Apple Watch (i.e. better data if you wear both than either by itself) | 85
It will only be usable with an iPhone | 83
It will be available in a range of colours | 78
It'll measure blood oxygen saturation / include a pulse oximetry function. | 76
Will support gestures | 74
At launch, the MSRP will be <$500. | 68
The ring design will be asymmetrical. | 59
Tracks stress | 53
It will eventually be tied up in litigation like the pulse-ox sensor on the watch | 48
There will be a gold version (real gold, not just gold colored) | 47
Battery lasts over a week | 46
It will have a physical button on it | 30
It will have a glass or crystal screen | 21
There will be a higher-memory model sold for more money at launch (think 128MB vs. 256MB models) | 17
Waterproof up to 100 meters / 330 feet | 13
It will be called the iRing | 9
It will have a camera | 5
It will be announced in 2025 | 1
It will be available for purchase in 2024 | 0
It will be announced in 2024 | 0
Option | Probability
(Unrecorded) | 54
(Unintelligible) | 35
God bless the United States of America | 19
Make America Great Again | 16
Melania | 15
Never Felt Better In My Life | 14
My Country | 13
MAGA! Please, MAGA! | 8
Biden | 5
Rosebud | 5
Elon / Elon Musk | 5
I'm not dying! | 5
Kamala | 4
Inflation | 3
Mars | 3
The doctors tell me I'm dying better than anyone has ever died before. Tremendous dying. The best. | 3
The Grim Reaper just called me. Terrific guy. Says he's never seen someone fight death so strongly. Very impressed. | 3
Hey, make sure that my name appears in the history books as a great person | 3
My legacy, it was the best legacy, it had never been done before | 2
go birds | 2
I know more about the afterlife than anyone. Heaven is already calling me saying 'Sir, we've never seen poll numbers like yours up here.' | 2
Karoline, make something up. Something very big | 2
"my white plume ..." | 2
Covfefe | 2
"God forgive me" | 2
Polymarket | 1
Vriska did nothing wrong | 1
rationalussy | 1
This Isn't Even My Final Form! | 1
I guess I exercised too much | 1
Joe Biden still survives | 1
We love Elon. Folks, we love him. | 1
Option | Probability
The Last Unicorn by Peter S. Beagle | 100
Alcatraz versus the Evil Librarians by Brandon Sanderson | 100
The Amulet of Samarkand by Jonathan Stroud | 100
The Mysterious Benedict Society by Trenton Lee Stewart | 100
The Rithmatist by Brandon Sanderson | 100
Sabriel by Garth Nix | 100
The Adventures of Sherlock Holmes by Arthur Conan Doyle | 100
Mistborn by Brandon Sanderson | 100
The Phantom Tollbooth by Norton Juster | 100
Ender's Game by Orson Scott Card | 100
Holes by Louis Sachar | 83
Animorphs (series) by K. A. Applegate | 82
The Witches by Roald Dahl | 81
Small Gods by Terry Pratchett | 76
Watership Down by Richard Adams | 72
Jurassic Park by Michael Crichton | 65
Have Spacesuit Will Travel by Robert Heinlein | 64
Call of the Wild by Jack London | 62
Magician: Apprentice by Raymond E. Feist | 61
Airborn by Kenneth Oppel | 59
The Dark Lord of Derkholm by Diana Wynne Jones | 59
Something Wicked This Way Comes by Ray Bradbury | 59
The Scarlet Pimpernel by Baroness Orczy | 55
The Eagle of the Ninth by Rosemary Sutcliff | 50
Mr Midshipman Hornblower by C. S. Forester | 50
Mythology by Edith Hamilton | 50
A Magical Girl Retires by Park Seolyeon | 50
The Paper Menagerie and Other Stories by Ken Liu | 50
Tales of the Unexpected by Roald Dahl | 50
Howl's Moving Castle by Diana Wynne Jones | 49
Project Hail Mary by Andy Weir | 45
Harry Potter and the Methods of Rationality by Eliezer Yudkowsky | 43
The Andromeda Strain by Michael Crichton | 36
On Basilisk Station (Honor Harrington #1) by David Weber | 31
The Martian by Andy Weir | 29
The Culture series by Iain M. Banks | 29
A Wizard of Earthsea by Ursula K. Le Guin | 0
Option | Probability
PC and/or Steam hardware exclusive at launch | 96
Has a Portal Easter egg | 78
Ends in a way suggesting further games in the future | 67
Generally a sequel to the events of HL2 | 66
VR supported at launch | 65
Released before April 2026 | 61
Costs >$61 USD for base game | 51
Has multiplayer of any kind | 46
Called Half-Life 3 | 32
VR only at launch | 22
SteamOS and/or Linux exclusive at launch | 10
Steam hardware exclusive at launch | 8
Option | Probability
Mass AI-driven job displacement event | 49
A government declaration/statement (any country) | 45
Reports about software | 41
Reports about financial activity | 41
Reports about sociological observations | 41
Reports about economic observations | 41
AI-related academic achievement | 41
Attack on military target | 40
Attack on civilian target | 40
AI-related weapon announcement/use/threat | 40
A statement from an individual (human) | 39
AI-related fake news | 38
AI-related political movement (pro or anti) | 38
A private company declaration/statement (any company) | 36
Discovery of a spy / mole working for a foreign power (or for an AI) | 35
Military activity / posture change | 35
An AI-related apocalypse cult attacks a civilian or military target | 35
AI-related mass psychosis/hysteria event | 34
An AI uncovers evidence relating to a previous event | 34
A government of a major nation orders the shutdown of a significant AI service | 30
AI-designed cyberweapon | 30
AI makes prediction about future event | 29
Open-source model release | 28
Release of a closed-source/closed-weights model | 28
Reports of aerial devices/machines | 28
Reports of market activity | 28
AI-related corruption scandal | 28
AI-related theft | 28
A new war between nation states | 28
AI-related resignation | 28
Reports of activity in online communities | 28
Reports of activity on internet-connected servers | 28
Reports of activity in open-source software | 28
A declaration/statement from a military / intelligence agency | 28
Reports of activity on internet media distribution platforms | 28
People receiving messages (text / phone calls / WhatsApp / ...) | 28
AI-related diplomacy | 28
Reports about consumer activity | 28
Reports about identity theft | 28
Reports of activity in religious communities | 28
AI-related media (movie/song/book ...) | 28
Reports about industrial activity | 28
Reports about physical machines | 28
Reports about criminal activity | 28
AI-related competition achievement | 26
A viral meme | 25
An announcement from an AI lab similar to o3 | 25
AI-related Internet shutdown in some country | 25
AI-related cyberattack | 24
Conventional weapons attack on AI infrastructure / supply chain | 21
A declaration/statement from an AI (any AI) | 21
AI-related sex scandal | 20
A piece of viral AI-generated media that has a strong/unexpected effect on large numbers of people (~psychological) | 20
AI parasitism / Addictive-Persuasive Agent | 20
Attack on Taiwan | 20
AI-related terrorist attack | 20
AI-related assassination | 20
AI-designed pathogen | 20
AI drug discovery | 20
AI material discovery | 20
Death of an individual in suspicious circumstances | 20
AI-related astronomical event / observation / analysis (can also include satellites) | 20
Reports of underwater activity | 20
Reports of activity on blockchain networks | 20
Reports of activity in academic communities | 20
Publication of a research paper (or pre-print / blog post / poster / Twitter thread / ... about a research-related topic) | 20
Reports of activity in online video games | 20
Reports about people getting scammed | 20
AI-related cult | 20
AI system demonstrates general robotic control | 18
AI causes significant stock market event | 15
Major AGI lab whistleblower revelation | 15
AI-related archaeology | 15
Major AI safety incident or accident | 11
AI system makes scientific breakthrough | 11
AI system solves major unsolved math problem | 11
An AI lab demonstrates an automated AI research engineer | 11
AI-related nuclear weapon use | 10
Option | Probability
Donald Trump | 50
JD Vance | 50
Mike Johnson | 50
Ted Cruz | 50
Dave McCormick | 50
Rick Scott | 50
Josh Hawley | 50
John Thune | 50
John Cornyn | 50
Rand Paul | 50
Lindsey Graham | 50
Option | Probability
Someone within the Trump administration | 18
Russia | 16
Ukraine | 16
Another European country | 16
A journalist or private signals intelligence enthusiast | 16
Other | 16
Option | Probability
Every 7-day period, I will use and carry around an Android device capable of making phone calls. | 65
I will use a non-phone smart device more than an Android phone on at least one 7-day period. | 45
I will use an iPhone more than an Android phone on at least one 7-day period. | 17
