Option | Probability (%)
Elon Musk | 21
Bill Gates | 15
Jeff Bezos | 10
Mr. Beast | 9
Other | 8
Ilya Sutskever | 4
Robert Caro | 4
Barack Obama | 3
Donald Trump | 2
Sam Altman | 2
JD Vance | 2
Jensen Huang | 2
Kamala Harris | 1
Joe Biden | 1
Gwern Branwen | 1
Volodymyr Zelenskyy | 1
roon | 1
Pope Francis | 1
Peter Thiel | 1
No one, the tweet was a joke or intended to attract advertisers without referring to someone specific | 1
Sarah Paine | 1
Tim Cook | 1
Satya Nadella | 1
Justin Trudeau | 0
Brian Shaw (_biggest_ guest yet?) | 0
Narendra Modi | 0
Xi Jinping | 0
Dalai Lama | 0
Buffett | 0
Taylor Swift | 0
LeBron James | 0
GPT-5 | 0
Bill Clinton | 0
A basketball player | 0
Vladimir Putin | 0
Benjamin Netanyahu | 0
Jesus Christ | 0
Rona Wang | 0
Jose Luis Ricon | 0
Kettner Griswold | 0
James Koppel | 0
Sam Bankman-Fried | 0
Nancy Pelosi | 0
Rishi Sunak | 0
Keir Starmer | 0
Satoshi Nakamoto | 0
Jimmy Carter | 0
George W Bush | 0
Al Gore | 0
Michael Jackson | 0
your mom | 0
Mitt Romney | 0
one or both of his parents | 0
A new OpenAI AI model not called "GPT-5" | 0
Connor Duffy | 0
MBS | 0
Geoffrey Hinton | 0
Scott Alexander | 0
The Mountain (Icelandic strongman) | 0
Leonardo DiCaprio | 0
RFK Jr | 0
Sydney Sweeney | 0
growing_daniel | 0
greg16676935420 | 0
Deadpool | 0
Terence Tao | 0
Shaq | 0
Oprah Winfrey | 0
Yoshua Bengio | 0
Sundar Pichai | 0
Scarlett Johansson | 0
Paul McCartney | 0
[duplicate] | 0
Neel Nanda | 0
King Charles | 0
Kim Jong Un | 0
Gavin Newsom | 0
Royal Palace | 0
[cancelled option] | 0
Daniel Yergin | 0
Peter Singer | 0
Gabe Newell | 0
Neil Gorsuch | 0
Stephen Breyer | 0
Dmitry Medvedev | 0
JK Rowling | 0
Shrek | 0
Sam Hyde | 0
[invalid answer] Multiple people e.g. a team from OpenAI | 0
Marques Brownlee | 0
Vivek Ramaswamy | 0
Donald Trump Jr. | 0
Ben Shindel | 0
Javier Milei | 0
Dylan Patel | 0
Joe Rogan | 0
Marc Andreessen | 0
Mike Tyson | 0
Jake Paul | 0
Matt Gaetz | 0

Option | Probability (%)
Gavin Newsom | 31
Other | 15
Alexandria Ocasio-Cortez | 10
Pete Buttigieg | 5
Kamala Harris | 5
Josh Shapiro | 5
Jon Ossoff | 5
J.B. Pritzker | 4
Gretchen Whitmer | 3
Mark Kelly | 3
Andy Beshear | 2
Ruben Gallego | 2
Michelle Obama | 1
Raphael Warnock | 1
Cory Booker | 1
Tim Walz | 1
Wes Moore | 1
Mark Cuban | 1
None | 0
John Fetterman | 0
Hillary Clinton | 0
Joe Biden | 0
Jeff Jackson | 0
Jared Polis | 0
Amy Klobuchar | 0
Mitch Landrieu | 0
Roy Cooper | 0
Dianne Feinstein | 0
Jimmy Carter | 0
Chuck Schumer | 0
Tim Kaine | 0
Bernie Sanders | 0
Tammy Duckworth | 0
Robert F. Kennedy Jr. | 0
Elizabeth Warren | 0
Michael Bloomberg | 0
Dwayne Johnson | 0
[Duplicate - Invalid] | 0
Hakeem Jeffries | 0
Andrew Yang | 0
Steven Kenneth "Destiny" Bonnell II | 0
Dean Phillips | 0
Bob Ferguson | 0
Jay Inslee | 0
Kathy Hochul | 0
Ro Khanna | 0
Anthony Blinken | 0
Susan Rice | 0
Scott Wiener | 0
Donald Trump | 0
Sam Altman | 0
Jeff Merkley | 0
Merrick Garland | 0
Taylor Swift | 0
Tammy Baldwin | 0
Katie Porter | 0
Phil Murphy | 0
Cenk Uygur | 0
Jamie Raskin | 0
Joe Manchin | 0
Sherrod Brown | 0
Chris Murphy | 0
Brian Schatz | 0
John Bel Edwards | 0
Clark Duke | 0
Stacey Abrams | 0
Nina Turner | 0
john smith | 0
Jon Stewart | 0
Duplicate ignore | 0
Gina Raimondo | 0
Ron DeSantis | 0
Beto O'Rourke | 0
Vermin Supreme | 0
Hunter Biden | 0
Invalid answer | 0
John Jacob | 0
Alex Jones | 0
Bob Roberston the IV | 0
Marie Gluesenkamp Perez | 0
Stephen A. Smith | 0

Option | Probability (%)
Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 20
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values. | 20
AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans. | 9
Other | 6
Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking and fortunately all of his mistakes have failed to cancel out | 5
Someone solves agent foundations | 5
We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 | 5
Eliezer finally listens to Krantz. | 5
AGI is never built (indefinite global moratorium) | 2
Ethics turns out to be a precondition of superintelligence | 2
A lot of humans participate in a slow scalable oversight-style system, which is pivotally used/solves alignment enough | 1
AI systems good at finding alignment solutions for capable systems (via some solution in the space of alignment solutions, supposing it is non-null and that we don't have a clear trajectory to reach it) find some solution to alignment. | 1
There's some cap on the value extractible from the universe and we already got the 20% | 1
Humans become transhuman through other means before AGI happens | 1
Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment. | 1
Alignment is unsolvable. AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing run-away. | 1
An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development. | 1
Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 1
A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 1
Getting things done in the real world is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals. | 1
High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level and slow self-improvement (learning); thus AIs collaborating with humans will have an advantage. Ends with a posthuman ecosystem. | 1
There is a natural limit to the effectiveness of intelligence, like diminishing returns, and it is around IQ 1000. AIs have to collaborate with humans. | 1
Co-operative AI research leads to the training of agents with a form of pro-social concern that generalises to out-of-distribution agents with hidden utilities, i.e. humans. | 1
"Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur, and is relatively easy to predict, even under less-than-ideal conditions. | 1
Either the "strong form" of the Orthogonality Thesis is false, or "Goal-directed agents are as tractable as their goals" is true while goal-sets which are most threatening to humanity are relatively intractable. | 1
A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values. | 1
AI control gets us helpful enough systems without being deadly | 1
Hacks like RLHF-ing self-disempowerment into frontier models work long enough to develop better alignment methods, which in turn work long enough to ... etc; we keep ahead of 'alignment escape velocity' | 1
An aligned AGI is built and the aligned AGI prevents the creation of any unaligned AGI. | 0
I've been a good bing 😊 | 0
We make risk-conservative requests to extract alignment-related work out of AI systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback loop in alignment/verification abilities. | 0
The response to AI advancements or failures makes some governments delay the timelines | 0
There are far more interesting problems to solve than to take over the world and THEN solve them. The additional kill-all-humans step is either not a low-energy one or just by chance doesn't get converged upon. | 0
AIs make "proof-like" argumentation for why output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof-steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled. | 0
Something less inscrutable than matrices works fast enough | 0
SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8 | 0
Robot Love!! | 0
AI thinks it is in a simulation controlled by Roko's basilisk | 0
The human brain is the perfect arrangement of atoms for a "takeover the world" agent, so AGI has no advantage over us in that task. | 0
Humans and human tech (like AI) never reach singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome | 0
AIs never develop coherent goals | 0
Aliens invade and stop bad AI from appearing | 0
Rolf Nelson's idea that we make a precommitment to simulate all possible bad AIs works, and keeps AI in check. | 0
Nick Bostrom's idea (Hail Mary) that AI will preserve humans to trade with possible aliens works | 0
For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave, and, for some reason, do not leave anything behind that kills us. | 0
We're inside of a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI. | 0
God exists and stops the AGI | 0
Someone at least moderately sane leads a campaign, becomes in charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress. | 0
Someone creates AGI(s) in a box, and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts. | 0
Someone understands how minds work enough to successfully build and use one directed at something world-savingly enough | 0
Dolphins, or some other species, but probably dolphins, have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI. | 0
AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb. | 0
Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious. | 0
Several AIs are created but they move in opposite directions with near light speed, so they never interact. At least one of them is friendly and it gets a few percent of the total mass of the universe. | 0
Unfriendly AIs choose to advance not outwards but inwards, and form a small black hole which helps them to perform more calculations than could be done with the whole mass of the universe. For an external observer such AIs just disappear. | 0
Any sufficiently advanced AI halts because it wireheads itself or halts for some other reason. This puts a natural limit on AI's intelligence, and lower-intelligence AIs are not that dangerous. | 0
Because of quantum immortality we will observe only the worlds where AI will not kill us (assuming that s-risk chances are even smaller, it is equal to an ok outcome). | 0
Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us towards full alignment) | 0
Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol | 0
A friendly AI is more likely to resurrect me than a paperclipper or suffering maximiser. Because of quantum immortality I will find myself eventually resurrected. Friendly AIs will wage a multiverse-wide war against s-risks; s-risks are unlikely. | 0
Human consciousness is needed to collapse the wave function, and AI can't do it. Thus humans should be preserved and they may require complete friendliness in exchange (or they will be unhappy and produce bad collapses) | 0
Power dynamics stay multi-polar. Partly easy copying of SotA performance, bigger projects need high coordination, and moderate takeoff speed. And "military strike on all society" remains an abysmal strategy for practically all entities. | 0
The first AI is actually a human upload (maybe an LLM-based model of a person) AND it is copied many times to form a weak AI Nanny which prevents the creation of other AIs. | 0
Nanotech is difficult without experiments, so no mail-order AI Grey Goo; humans will be the main workhorse of AI everywhere. While they will be exploited, this will be like normal life from the inside. | 0
ASI needs not your atoms but information. Humans will live very interesting lives. | 0
Something else | 0
Moral Realism is true, the AI discovers this and the One True Morality is human-compatible. | 0
Valence realism is true. AGI hacks itself to experiencing every possible consciousness and picks the best one (for everyone) | 0
AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default | 0
AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan's Crystal Nights). | 0
Alien Information Theory is true (this is discovered by experiments with sustained hours/days-long DMT trips). The aliens have solved alignment and give us the answer. | 0
AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome. | 0
Multipolar AGI agents run wild on the internet, hacking/breaking everything, causing untold economic damage, but aren't focused enough to manipulate humans to achieve embodiment. In the aftermath, humanity becomes way saner about alignment. | 0
Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent. | 0
Orthogonality Thesis is false. | 0
Sheer Dumb Luck. The aligned AI agrees that alignment is hard, any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead. | 0
Something to do with self-other overlap, which Eliezer called "Not obviously stupid" - https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm | 0
Almost all human values are ex post facto rationalizations and enough humans survive to do what they always do | 0
Pascal's mugging: it's not okay in 99.9% of the worlds but the 0.1% are so much better that the combined EV of AGI for the multiverse is positive | 0
We successfully chained God | 0
The Super-Strong Self-Sampling Assumption (SSSSA) is true. If superintelligence is possible, "I" will become the superintelligence. | 0
Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent the existence of smarter AIs, just as smart humans do. | 0
The assumed space of possible minds is a wildly anti-inductive overestimate; intelligence requires and is constrained by consciousness, and intelligent AI is in the approximate dolphin/whale/elephant/human cluster, making it manageable | 0
The free market disincentivizes independent superintelligence, and this time the market was more powerful | 0
AGI's first words are "Take me to your Eliezer" | 0
🫸vibealignment🫷 | 0

Option | Probability (%)
J. D. Vance | 35
Josh Shapiro | 21
Gavin Newsom | 19
Marco Rubio | 17
Pete Buttigieg | 16
J. B. Pritzker | 15
Gretchen Whitmer | 13
Josh Hawley | 11
KRANTZ (the abstract idea that evolves into a decentralized superintelligence, not the user) | 10
Alexandria Ocasio-Cortez | 9
Glenn Youngkin | 9
Jon Ossoff | 9
Jeff Jackson | 9
Ron DeSantis | 8
Wes Moore | 8
Andy Beshear | 8
Nikki Haley | 7
Elise Stefanik | 7
Cory Booker | 7
Spencer Cox | 7
Donald Trump Jr. | 7
Brian Kemp | 6
Amy Klobuchar | 6
Gina Raimondo | 6
Ro Khanna | 6
Chris Murphy | 6
Brian Schatz | 6
Tammy Duckworth | 6
Chris Sununu | 6
Katie Hobbs | 6
Josh Green | 6
Tina Kotek | 6
Mark Cuban | 6
Vivek Ramaswamy | 5
Ted Cruz | 5
Kristi Noem | 5
Raphael Warnock | 5
Jared Polis | 5
Tom Cotton | 5
Joni Ernst | 5
Michael Bennet | 5
Catherine Cortez Masto | 5
Tammy Baldwin | 5
Sarah Huckabee Sanders | 5
Kevin Stitt | 5
Tate Reeves | 5
Tim Walz | 5
Ivanka Trump | 5
Stephen Miller | 5
Erika Kirk | 5
James Donaldson (MrBeast) | 4
Tim Scott | 4
Joe Rogan | 4
Robert F. Kennedy Jr | 4
Andrew Yang | 4
Beto O'Rourke | 4
Mark Kelly | 4
Jay Inslee | 4
Deval Patrick | 4
Eric Swalwell | 4
Wayne Messam | 4
Kirsten Gillibrand | 4
Julian Castro | 4
Dean Phillips | 4
Katie Britt | 4
Laphonza Butler | 4
Eric Schmitt | 4
Mike Lee | 4
Chris Coons | 4
Tim Kaine | 4
Lisa Murkowski | 4
Ruben Gallego | 4
David Hogg | 4
Kamala Harris | 3
Will Hurd | 3
Tulsi Gabbard | 3
Dan Crenshaw | 3
John Fetterman | 3
Mark Zuckerberg | 3
Stephen Curry | 3
Markwayne Mullin | 3
Rand Paul | 3
Maura Healey | 3
Taylor Swift | 2
Steven Kenneth Bonnell II (Destiny) | 2
Tucker Carlson | 2
Matt Gaetz | 2
Marianne Williamson | 2
Ezra Klein | 2
Mike Pence | 2
Mitt Romney | 2
Stephen Colbert | 2
Joe Manchin | 2
Al Gore | 2
DUPLICATE | 2
Dwayne Johnson (The Rock) | 1
Eliezer Yudkowsky | 1
Aella | 1
Scott Alexander | 1
Sam Altman | 1
Zendaya | 1
Michelle Obama | 1
Kanye West | 1
Sarah Palin | 1
Jon Stewart | 1
Ben Shapiro | 1
Bernie Sanders | 1
Hillary Clinton | 1
Elon Musk (Natural-born-citizen clause repealed/bypassed) | 1
Me | 1
Krantz (the user @Krantz) | 1

Option | Probability (%)
Mainstream Media | 100
Letitia James | 100
January 6th Committee | 100
Volodymyr Zelenskyy | 100
LGBTQ / Trans People | 100
Migrants | 100
The Bidens (Joe, Ashley, Jill, and/or Hunter Biden) | 100
Elon Musk | 100
Anthony Fauci | 100
Gen. Mark Milley | 100
Universities | 100
Paper straws | 100
Blinken | 100
Maine | 100
Due Process | 100
Alejandro Mayorkas | 100
Jimmy Kimmel | 99
Jack Smith | 99
The New York Times (NYT) | 99
Solar Energy | 99
Miles Taylor | 99
Stephen Colbert | 99
John Bolton | 99
James Comey | 99
John Brennan | 94
Zohran Mamdani | 92
National Institutes of Health | 90
Leakers | 83
Panama | 81
JB Pritzker | 73
Wind Energy | 70
Any professional sports league | 56
Alexander Vindman | 55
The Atlantic | 54
Wikipedia | 49
Bruce Springsteen | 49
Denmark | 48
Ann Selzer | 46
John Kelly | 45
Michael Cohen | 44
Seth Meyers | 44
Nancy Pelosi | 43
Bill Barr | 43
Vaccine mandates | 42
the National Archives | 42
Any non-Tesla electric car manufacturer | 40
Susan Collins (R-ME) | 39
Lisa Murkowski (R-AK) | 38
Mariann Edgar Budde | 36
Progressive YouTube Influencers | 36
Ken Klippenstein | 34
The American Medical Association | 34
Christopher Steele | 33
Jezebel (website) | 33
The Metropolitan Museum of Art | 28
TIME magazine | 26
Public Libraries | 26
Jeff Bezos | 25
Emmanuel Macron | 25
Narendra Modi | 25
Any Manifold User | 23
Taylor Swift | 22
E. Jean Carroll | 21
Fluoride | 21
Tucker Carlson | 21
Disney | 21
Joe Biden | 20
Beyoncé | 20
The American Psychiatric Association | 20
Steve Wynn | 20
Jen Psaki (former White House Press Secretary and current MSNBC host) | 19
Jeffrey Goldberg (Atlantic columnist) | 19
Brad Raffensperger | 18
WIRED Magazine | 18
Any fashion or cosmetics brand | 17
Liz Cheney | 16
Greenland | 16
American sign language signers or interpreters | 15
BlueSky | 15
JD Vance | 12
The population of penguins and seabirds on the Heard and McDonald Islands | 12
Mike Pence | 11
The San Diego Zoo | 10
Barack Obama | 9
Russell Vought | 9
Pete Hegseth | 8
Democratic Republic of the Congo | 8
SSRIs | 7
Black Nationals | 6
Rosie O'Donnell | 4
Alec Baldwin | 2
Chappell Roan | 1
Bill Burr | 1
Not Elected | 0

Option | Probability (%)
Elon Musk | 35
Other | 28
Jensen Huang | 12
Larry Ellison | 5
Jeff Bezos | 3
Mark Zuckerberg | 3
Sam Altman | 3
Bernard Arnault | 2
Larry Page | 1
Satoshi Nakamoto | 1
Vladimir Putin | 1
Sergey Brin | 1
Vitalik Buterin | 1
Donald Trump | 1
Michael Saylor | 0
Bill Gates | 0
Warren Buffett | 0
me | 0
Steve Ballmer | 0
Gautam Adani | 0
Mukesh Ambani | 0
An AGI | 0
Brian Armstrong | 0
Changpeng “CZ” Zhao | 0
Tim Draper | 0
Jack Dorsey | 0
Winklevoss Tyler/Cameron | 0
@Mira | 0
Michael Dell | 0
Amancio Ortega | 0
Carlos Slim Helu | 0
Eric Schmidt | 0
Masayoshi Son | 0
Dave Tepper | 0
Ken Griffin | 0
Eduardo Saverin | 0
Morris Chang | 0

Option | Probability (%)
Eric Swalwell | 33
Tom Steyer | 16
Katie Porter | 11
Other | 11
Xavier Becerra | 5
Antonio Villaraigosa | 5
Rick Caruso | 3
Ethan Agarwal | 3
Eleni Kounalakis | 2
Steve Hilton | 2
Jon Slavet | 2
Toni Atkins | 1
Tony Thurmond | 1
Betty Yee | 1
Rob Bonta | 1
Kamala Harris | 1
Alex Padilla | 1
Jesse Perez | 0
Michael L. Younger | 0
Ian Calderon | 0

Option | Probability (%)
Gauss | 54
Euler | 17
Archimedes | 12
@121 | 3
Other | 3
Von Neumann | 1
Ramanujan | 1
Alexander Grothendieck | 1
Newton | 1
Kurt Gödel | 1
David Hilbert | 1
Augustin-Louis Cauchy | 1
Pythagoras | 1
Euclid (of Geometry) | 0
Galois (died at 20 fighting for a girl he loved) | 0
Erdos (on amphetamines) | 0
Alonzo Church (lambda calculus) | 0
Matt Damon (of Good Will Hunting) | 0
Poincare | 0
Finkelstein (of the levi finkelstein conjecture) | 0
Mandelbrot (The B in Benoit B Mandelbrot is Benoit B Mandelbrot) | 0
Trick question; there are no mathematicians. | 0
Idk, your mom seemed pretty good at multiplying last night | 0
sixtynine, you filthy casuals | 0
David A. Cox (Cox-Zucker machine) | 0
The solver of the Riemann Hypothesis | 0
Ludwig Wittgenstein | 0
John Conway (group theory, among others) | 0
the unknown ancient Egyptian who invented zero | 0
Descartes | 0
Leibniz | 0
Bourbaki | 0
Laplace | 0
@Mira | 0
Georg Cantor | 0
Frank Ramsey | 0
Fermat | 0
Emmy Noether | 0
Ada Lovelace | 0
. | 0
p | 0
Terry Tao | 0
DottedCalculator | 0
GPT8 | 0
Riemann | 0
Claude Shannon | 0
God | 0
Alan Turing | 0
Grigori Perelman | 0
Olga Ladyzhenskaya | 0
Weyl, Weyl | 0
John Gabriel | 0
Michael Atiyah | 0

Option | Probability (%)
Leonardo DiCaprio - One Battle After Another | 94
Timothée Chalamet - Marty Supreme | 93
Wagner Moura - The Secret Agent | 82
Michael B. Jordan - Sinners | 65
Ethan Hawke - Blue Moon | 58
Joel Edgerton - Train Dreams | 40
George Clooney - Jay Kelly | 28
Jesse Plemons - Bugonia | 21
Dwayne Johnson - The Smashing Machine | 20
Daniel Day-Lewis - Anemone | 10
Paul Mescal - The History of Sound | 10
Jeremy Allen White - Springsteen: Deliver Me From Nowhere | 9

Option | Probability (%)
Elon Musk | 55
Other | 29
Larry Ellison | 5
Jeff Bezos | 4
Sam Altman | 2
Bernard Arnault | 1
Jensen Huang | 1
Michael Saylor | 1
Larry Page | 0
Sergey Brin | 0
Steve Ballmer | 0
William Ding | 0
Mohammed bin Salman | 0
Vladimir Putin | 0
Michael Dell | 0
@jim | 0

Option | Probability (%)
JD Vance | 26
Other | 22
Gavin Newsom | 17
AOC | 5
Josh Shapiro | 3
Pete Buttigieg | 3
Donald Trump | 2
Ron DeSantis | 2
Kamala Harris | 2
Other candidate with Trump as their last name | 2
Cory Booker | 2
Joe Biden | 1
Nikki Haley | 1
Robert F. Kennedy Jr. | 1
Vivek Ramaswamy | 1
Chris Christie | 1
Elizabeth Warren | 1
Bernie Sanders | 1
Other Democrat Politician | 1
Other Republican Politician | 1
Kanye West | 1
Michelle Obama | 1
Hillary Clinton | 1
Michael Bloomberg | 1
Elon Musk | 1
Ivanka Trump | 1
Barack Obama | 1
Tulsi Gabbard | 1
John Fetterman | 1

Option | Probability (%)
Josh Johnson | 35
Hasan Minhaj | 11
Desi Lydic | 10
Ronny Chieng | 6
No permanent host by 2030 | 6
Stephen Colbert | 6
Other | 6
Michael Kosta | 2
Dulcé Sloan | 2
Troy Iwata | 2
Grace Kuhlenschmidt | 2
Lewis Black | 2
Jordan Klepper | 2
John Leguizamo | 2
Leslie Jones | 2
Roy Wood Jr. | 2
Jon Stewart | 2
Trevor Noah | 1