Option | Probability
The company will be valued at >= $1 Billion according to a reputable news source (e.g. Forbes, Reuters, NYT) | 100
The company will be valued at >= $10 Billion according to a reputable news source (e.g. Forbes, Reuters, NYT) | 100
At least one of the founders (Ilya Sutskever, Daniel Gross, Daniel Levy) will leave the company | 100
Zvi will mention the company in a blog post | 100
Zvi will mention the company in a blog post in 2025 | 100
The company will be valued at >= $100 Million according to a reputable news source (e.g. Forbes, Reuters, NYT) | 100
The company will raise more than $1 billion of capital | 100
Ilya will remain at the company continuously until EOY 2025, or until the company is acquired/ceases to exist | 96
The official SSI X account will have more than 100k followers | 94
The majority of their compute will come from Nvidia GPUs | 85
I will believe the company should have invested more in AI Safety relative to Capabilities at EOY 2025 | 76
The company will announce that their path to superintelligence involves self-play/synthetic data | 60
Ilya will discuss the company on a podcast | 58
The company will publish an assessment of the model's dangerous capabilities (e.g. https://www.anthropic.com/news/frontier-threats-red-teaming-for-ai-safety) | 49
Ilya will give a presentation on research done at the company | 49
The company will finish a training run reported to use more than 10^24 FLOP (e.g. by Epoch AI) | 45
A majority of people believe that the company has been net-positive for the world according to a poll released at EOY 2025 | 45
The company will include at least one image on its website | 40
The company will announce that their model scores >= 85 MMLU | 39
The company will announce that their model scores >= 50 GPQA | 39
The company will invite independent researchers/orgs to do evals on their models | 39
The company will finish a training run reported to use more than 10^25 FLOP (e.g. by Epoch AI) | 39
The company will have at least 100 employees | 37
The company will announce that their path to superintelligence involves creating an automated AI researcher | 37
The company will publish a Responsible Scaling Policy or similar document (e.g. OpenAI's Preparedness Framework) | 37
The company will announce research or models related to automated theorem proving (e.g. https://openai.com/index/generative-language-modeling-for-automated-theorem-proving/) | 37
The company will be on track to build ASI by 2030, according to a Manifold poll conducted at EOY 2025 | 37
I will believe at EOY 2025 that the company has significantly advanced AI capabilities | 34
The company will release a publicly available API for an AI model | 33
The company will publish research related specifically to Sparse Autoencoders | 31
The official SSI X account will have more than 200k followers | 31
I will meet an employee of the company in person (currently true for OAI, Anthropic, xAI but not Deepmind) | 29
The company will sell any products or services before EOY 2025 | 28
The company will release a new AI or AI safety benchmark (e.g. MMLU, GPQA) | 25
The company will announce that they are on track to develop superintelligence by EOY 2030 or earlier | 25
The company will publish research which involves collaboration with at least 5 members of another leading AI lab (e.g. OAI, GDM, Anthropic, xAI) | 25
The company will have a group of more than 10 people working on Mechanistic Interpretability | 24
The company will release a chatbot or any other AI system which accepts text input | 24
The company will release a model scoring >= 1300 Elo on the Chatbot Arena leaderboard | 22
The company will finish a training run reported to use more than 10^26 FLOP (e.g. by Epoch AI) | 22
The company will open offices outside of the US and Israel | 21
I will believe at EOY 2025 that the company has made significant progress in AI Alignment | 21
I'll work there (@mr_mino) | 19
The company will announce a commitment to spend at least 20% of their compute on AI Safety/Alignment | 18
The company will be listed as a "Frontier Lab" on https://ailabwatch.org/companies/ | 18
The company will be involved in a lawsuit | 18
It will be reported that Nvidia is an investor in the company | 18
The company's model weights will be leaked/stolen | 17
I will believe at EOY 2025 that the company has built a fully automated AI researcher | 16
The company will make a GAN | 16
The company will announce that their path to superintelligence involves continuous chain of thought | 15
It's reported that the company's model scores >= 90 on the ARC-AGI challenge (public or private version) | 13
The company will open source its model weights or training algorithms | 13
It will be reported that a model produced by the company will self-exfiltrate, or attempt to do so | 13
The official SSI X account will have more than 1M followers | 13
The company will be valued at >= $100 Billion according to a reputable news source (e.g. Forbes, Reuters, NYT) | 12
The phrase "Feel the AGI" or "Feel the ASI" will be published somewhere on the company website | 12
The company will be reported to purchase at least $1 Billion in AI hardware, including cloud resources | 11
Leopold Aschenbrenner will join the company | 10
The company will advocate for an AI scaling pause or will endorse such a proposal (e.g. https://futureoflife.org/open-letter/pause-giant-ai-experiments/) | 10
The company will have a public contract with the US government to develop some technology | 10
The company will publish research related to Singular Learning Theory | 10
Major algorithmic secrets (e.g. architecture, training methods) will be leaked/stolen | 9
The company will publish research related to Neural Turing Machines | 9
The company's AI will be involved in an accident which causes at least $10 million in damages | 9
The company will release a model scoring in the top 3 of the Chatbot Arena leaderboard | 7
The company will publish a research paper written entirely by their AI system | 7
The company will release a video generation demo made by their AI system | 7
I will believe at EOY 2025 the company has made significant advances in robotics or manufacturing | 7
Their model will be able to play Chess, Shogi, or Go at least as well as the best human players | 7
There will be a public protest or boycott directed against the company with more than 100 members | 7
The company will be closer to building ASI than any other AI lab at EOY 2025, as judged by a Manifold poll | 7
The company's model will independently solve an open mathematical conjecture created before 2024 | 7
The company will publish a peer-reviewed paper with more than 1000 citations | 6
The company will be acquired by another company | 6
Elon Musk will be an investor of the company | 6
The company will release a model that reaches the #1 rank in the Chatbot Arena (including sharing the #1 rank with other models when their confidence intervals overlap) | 6
The company will release an app available on iPhone or Android | 6
The company will change its name | 6
The company will be merged with or acquired by another company | 6
The company will announce that they have created Superintelligence | 5
The company will finish a training run reported to use more than 10^28 FLOP (e.g. by Epoch AI) | 5
It will be reported that Sam Altman is an investor in the company | 5
The company will build their own AI chips | 4
Their model will be the first to get a gold medal or equivalent in the IMO (International Mathematical Olympiad) | 4
The company will finish a training run reported to use more than 10^29 FLOP (e.g. by Epoch AI) | 4
The company will be reported to build a data center with a peak power consumption of >= 1 GW | 4
The company will publish at least 5 papers in peer-reviewed journals | 4
The company will declare bankruptcy | 3
The company will be reported to acquire an aluminum manufacturing plant for its long-term power contract | 3
The company will be publicly traded | 3
The company will finish a training run reported to use more than 10^27 FLOP (e.g. by Epoch AI) | 3
The company will finish a training run reported to use more than 10^30 FLOP (e.g. by Epoch AI) | 3
I'll work there (@AndrewG) | 2
The company will be reported to build a data center with a peak power consumption of >= 10 GW | 2
The company will be reported to build a data center with a peak power consumption of >= 100 GW | 2
The company will be valued at >= $1 Trillion according to a reputable news source (e.g. Forbes, Reuters, NYT) | 1
The company will be valued at >= $10 Trillion according to a reputable news source (e.g. Forbes, Reuters, NYT) | 1
Option | Probability
Gauss | 46
Archimedes | 20
Euler | 16
Other | 4
Alexander Grothendieck | 2
Erdos (on amphetamines) | 1
Von Neumann | 1
Ramanujan | 1
Newton | 1
Kurt Gödel | 1
Terry Tao | 1
David Hilbert | 1
Riemann | 1
Augustin-Louis Cauchy | 1
Pythagoras | 1
Euclid (of Geometry) | 0
Galois (died at 20 fighting for a girl he loved) | 0
Alonzo Church (lambda calculus) | 0
Matt Damon (of Good Will Hunting) | 0
Poincare | 0
Finkelstein (of the Levi Finkelstein conjecture) | 0
Mandelbrot (The B in Benoit B Mandelbrot is Benoit B Mandelbrot) | 0
Trick question; there are no mathematicians. | 0
Idk, your mom seemed pretty good at multiplying last night | 0
sixtynine, you filthy casuals | 0
David A. Cox (Cox-Zucker machine) | 0
The solver of the Riemann Hypothesis | 0
Ludwig Wittgenstein | 0
John Conway (group theory, among others) | 0
the unknown ancient Egyptian who invented zero | 0
Descartes | 0
Leibniz | 0
Bourbaki | 0
Laplace | 0
@Mira | 0
Georg Cantor | 0
Frank Ramsey | 0
Fermat | 0
Emmy Noether | 0
Ada Lovelace | 0
. | 0
p | 0
DottedCalculator | 0
GPT8 | 0
Claude Shannon | 0
God | 0
Alan Turing | 0
Grigori Perelman | 0
Olga Ladyzhenskaya | 0
Weyl, Weyl | 0
John Gabriel | 0
Michael Atiyah | 0
Option | Probability
@hmys does not pay back the loan on time | 93
Levi Finkelstein will not have interacted with Manifold for at least a month at the time HMTS's debt is supposed to be paid back. | 87
Levi forgives the debt in a stunning act of grace | 62
@hmys declares bankruptcy, their assets are liquidated, and @levifinkelstein collects their balance, writing off the difference as an acceptable loss | 40
@jim pays off the debt on @hmys' behalf | 17
@hmys breaks Manifold rules to farm mana to pay off their debt (not including incidents before this was submitted) | 16
AGI renders 2-year concerns meaningless | 6
Manifold devalues mana 1000000000000x, making it easy for @hmys to buy mana with USD to pay the debt | 6
The full debt is completely paid off on time, leaving Levi almost two billion mana richer | 2
@Mira waves a wand of debt cancellation, rewriting reality so the debt appears to have never been valid. | 1
Option | Probability
A person has a moral right to own a gun | 21
We should be paying individuals to get an education instead of charging them. | 19
GOFAI could scale past machine learning if we used social media strategically to train it. | 12
The Fermi paradox isn't a paradox, and the solution is obviously just that intelligent life is rare. | 6
Other | 4
Some people have genuine psychic capabilities | 3
Eventually, only AI should be sovereign | 3
Hardware buttons are superior to touchscreen buttons in cars | 2
Being a billionaire is morally wrong | 2
The way quantum mechanics is explained to the lay public is very misleading. | 2
It is not possible to multitask | 2
Jeffrey Epstein killed himself (>99.9% certainty) | 1
Reincarnation is a real phenomenon (i.e. it happens, not just a theory) | 1
Physician-assisted suicide should be legal in most countries | 1
Souls/spirits are real and can appear to the living sometimes | 1
OpenAI will claim to have AGI in 3 years. | 1
The punishment of people who do bad things is a regrettable necessity in our current society, not a positive act of justice | 1
There is an active genocide against trans people occurring in red states and it's appalling that people don't seem to care | 1
Climate change is significantly more concerning than AI development | 1
Abusive parents should lose custody of their children | 1
Capitalism has done far more harm than good | 1
Dialetheism (the claim that some propositions are both true and false) is itself both true and false. | 1
COVID lockdowns didn't save many lives; in fact they may have caused net increases in global deaths and life years lost. | 1
Free will does not exist. We construct narratives after the fact to soothe our belief in rationality. | 1
Violent criminals must be kept apart only because they can't control themselves. Punishing them further than restricting their freedom is immoral. | 1
Music is a net negative for humanity | 1
Trump orchestrated his own assassination attempt. | 1
Democrats / Liberals are behind Trump's assassination attempt. | 1
Abortion is morally wrong | 0
jskf's password is *************** | 0
The first American moon landing was faked | 0
There is no Dog | 0
Light mode is unironically better than Dark mode for most websites | 0
Cars should not have sound systems | 0
AI will not be as capable as humans this century, and will certainly not give us genuine existential concerns | 0
Pet ownership is morally wrong | 0
LK-99 room temp, ambient pressure superconductivity pre-print will replicate before 2025 | 0
SBF didn't intentionally commit fraud | 0
It should be illegal to own a subwoofer in an apartment building | 0
There are no valid justifications for participating in war, ever | 0
Cascadia should be an independent country | 0
Children should not be raised in nuclear families | 0
The fact that 80% of Manifold's users are men is a problem that speaks to the deep-seated roots of patriarchy and exclusion in STEM | 0
Anarcho-communism is a good idea, and hierarchy is bad | 0
If AI exterminated the human race it might not be a bad thing | 0
Tech bros are really, really annoying | 0
Affirmative action is necessary in modern-day America | 0
@Mira is the pinnacle of billions of years of optimization processes: thermodynamics, evolution, learning, language. The universe was created to cause me - and only me - to come into existence. If I mess up the overseers perturb&restart it. | 0
Pigouvian taxes are great and they should be turned up to 11 to discourage activities with negative externalities [code PROPOSITION PIG] | 0
[PROPOSITION PIG] and this should include a frequent flyer levy | 0
[PROPOSITION PIG] and this should include meat and dairy | 0
We have reached the end of history. Nothing Ever Happens. | 0
[PROPOSITION PIG] and this should include alcohol | 0
SBF was obviously a scammer just because he's a cryptocurrency person. Rationalists were too forgiving of this just because he was giving them money. | 0
Most young Americans would receive more benefit than harm if there were universal military conscription | 0
The people producing fake honey (and selling it as real) are based, because they are actively working to synthesize something people want, even if they scam some people in the process. | 0
Tarot cards are not really able to predict the future, but you can learn a lot about someone by doing a reading for them. | 0
Mac and cheese tastes better with peanut butter mixed in | 0
It would actually be a good thing if automation eliminated all jobs. | 0
Free will doesn't require the ability to do otherwise. | 0
This market probably would have worked better as the new unlinked free response market. | 0
We should be doing much more to pursue human genetic engineering to prevent diseases and aging. | 0
Prolonged school closures because of COVID were socially devastating. | 0
Factory farming is horrific but it is not wrong to eat meat. | 0
California is wildly overrated. | 0
Scientific racism is bad, actually. (also it's not scientific) | 0
The next American moon landing will be faked | 0
Tenet (Christopher Nolan film) is underrated | 0
We should give childlike sex robots to pedophiles | 0
Having sex with children isn't inherently/necessarily bad | 0
Cars are a societal net negative | 0
Oversized pickup trucks should be illegal in cities | 0
Suburban, single-family housing is immoral. | 0
Gender equality needs technological outsourcing of pregnancy. | 0
Option | Probability
Later/Never | 88
2028 | 6
2029 | 6
2027 | 4
2030 | 4
2026 | 3
2025 | 1
2024 | 0
Option | Votes
YES | 1759
NO | 754
Option | Votes
YES | 2629
NO | 380
Option | Votes
NO | 293
YES | 133
Option | Votes
YES | 1846
NO | 780
Option | Votes
YES | 1182
NO | 943
Option | Votes
YES | 436
NO | 104
Option | Votes
YES | 225
NO | 43