Option | Probability (%)
Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 29
We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 | 11
Eliezer finally listens to Krantz. | 10
Other | 7
Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking, and fortunately all of his mistakes have failed to cancel out | 6
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values. | 6
Someone solves agent foundations | 4
Ethics turns out to be a precondition of superintelligence | 3
Because of quantum immortality we will observe only the worlds where AI does not kill us (assuming that s-risk chances are even smaller, this is equal to an okay outcome). | 2
The Orthogonality Thesis is false. | 2
Sheer Dumb Luck. The aligned AI agrees that alignment is hard, and any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead. | 2
Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent the existence of smarter AIs, just as smart humans do. | 2
AI systems good at finding alignment solutions for capable systems (via some solution in the space of alignment solutions, supposing it is non-null, even though we don't have a clear trajectory to reach it) find some solution to alignment. | 1
Humans become transhuman through other means before AGI happens | 1
Alignment is unsolvable. An AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing runaway. | 1
Aliens invade and stop bad AI from appearing | 1
Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us towards full alignment) | 1
A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 1
There is a natural limit to the effectiveness of intelligence, like diminishing returns, and it is at the level of IQ=1000. AIs have to collaborate with humans. | 1
AGI is never built (indefinite global moratorium) | 1
Co-operative AI research leads to the training of agents with a form of pro-social concern that generalises to out-of-distribution agents with hidden utilities, i.e. humans. | 1
AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans. | 1
Something to do with self-other overlap, which Eliezer called "Not obviously stupid" - https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm | 1
Pascal's mugging: it's not okay in 99.9% of the worlds, but the 0.1% are so much better that the combined EV of AGI for the multiverse is positive | 1
The Super-Strong Self-Sampling Assumption (SSSSA) is true. If superintelligence is possible, "I" will become the superintelligence. | 1
AI control gets us helpful enough systems without being deadly | 1
An aligned AGI is built, and the aligned AGI prevents the creation of any unaligned AGI. | 0
I've been a good bing 😊 | 0
We make risk-conservative requests to extract alignment-related work out of AI systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback loop in alignment/verification abilities. | 0
The response to AI advancements or failures makes some governments delay the timelines | 0
Far more interesting problems to solve than take over the world and THEN solve them. The additional kill-all-humans step is either not a low-energy one or just by chance doesn't get converged upon. | 0
AIs make "proof-like" argumentation for why output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof-steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled. | 0
A lot of humans participate in a slow scalable oversight-style system, which is pivotally used/solves alignment enough | 0
Something less inscrutable than matrices works fast enough | 0
There's some cap on the value extractable from the universe and we already got the 20% | 0
SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8 | 0
Robot Love!! | 0
AI thinks it is in a simulation controlled by Roko's basilisk | 0
The human brain is the perfect arrangement of atoms for a "take over the world" agent, so AGI has no advantage over us in that task. | 0
Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment. | 0
Humans and human tech (like AI) never reach singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome | 0
AIs never develop coherent goals | 0
Rolf Nelson's idea that we make a precommitment to simulate all possible bad AIs works – and keeps AI in check. | 0
Nick Bostrom's idea (Hail Mary) that AI will preserve humans to trade with possible aliens works | 0
For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave, and, for some reason, do not leave anything behind that kills us. | 0
An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development. | 0
We're inside of a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI. | 0
God exists and stops the AGI | 0
Someone at least moderately sane leads a campaign, becomes in charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress. | 0
Someone creates AGI(s) in a box, and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts. | 0
Someone understands how minds work well enough to successfully build and use one directed at something world-saving enough | 0
Dolphins, or some other species, but probably dolphins, have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI. | 0
AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb. | 0
Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious. | 0
Several AIs are created, but they move in opposite directions at near light speed, so they never interact. At least one of them is friendly and it gets a few percent of the total mass of the universe. | 0
Unfriendly AIs choose to advance not outwards but inwards, and form a small black hole which helps them perform more calculations than could be done with the whole mass of the universe. For an external observer such AIs just disappear. | 0
Any sufficiently advanced AI halts because it wireheads itself or halts for some other reason. This puts a natural limit on AI intelligence, and lower-intelligence AIs are not that dangerous. | 0
Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 0
Getting things done in the real world is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals. | 0
Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol | 0
A friendly AI is more likely to resurrect me than a paperclipper or suffering maximiser. Because of quantum immortality I will find myself eventually resurrected. Friendly AIs will wage a multiverse-wide war against s-risks; s-risks are unlikely. | 0
High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level and slow self-improvement (learning); thus AIs collaborating with humans will have an advantage. Ends with a posthuman ecosystem. | 0
Human consciousness is needed to collapse the wave function, and AI can't do it. Thus humans should be preserved, and they may require complete friendliness in exchange (or they will be unhappy and produce bad collapses) | 0
Power dynamics stay multi-polar. Partly easy copying of SotA performance, bigger projects need high coordination, and moderate takeoff speed. And "military strike on all society" remains an abysmal strategy for practically all entities. | 0
The first AI is actually a human upload (maybe an LLM-based model of a person) AND it will be copied many times to form a weak AI Nanny which prevents the creation of other AIs. | 0
Nanotech is difficult without experiments, so no mail-order AI Grey Goo; humans will be the main workhorse of AI everywhere. While they will be exploited, this will be like normal life from the inside | 0
ASI needs not your atoms but information. Humans will live very interesting lives. | 0
Something else | 0
Moral Realism is true, the AI discovers this and the One True Morality is human-compatible. | 0
Valence realism is true. AGI hacks itself to experience every possible consciousness and picks the best one (for everyone) | 0
AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default | 0
AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan's Crystal Nights). | 0
Alien Information Theory is true (this is discovered by experiments with sustained hours/days-long DMT trips). The aliens have solved alignment and give us the answer. | 0
AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome. | 0
Multipolar AGI agents run wild on the internet, hacking/breaking everything, causing untold economic damage, but aren't focused enough to manipulate humans to achieve embodiment. In the aftermath, humanity becomes way saner about alignment. | 0
Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent. | 0
"Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur, and is relatively easy to predict, even under less-than-ideal conditions. | 0
Either the "strong form" of the Orthogonality Thesis is false, or "Goal-directed agents are as tractable as their goals" is true while the goal-sets most threatening to humanity are relatively intractable. | 0
A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values. | 0
Almost all human values are ex post facto rationalizations, and enough humans survive to do what they always do | 0
We successfully chained God | 0
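
One of the options above is a SHA3-256 commitment: the author published only the digest of a hidden answer, which can later be revealed and checked against it. A minimal verification sketch in Python, assuming the preimage is a plain UTF-8 string (the exact encoding and the candidate text below are placeholders, not the real answer):

import hashlib

COMMITMENT = "1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8"

def matches_commitment(candidate: str) -> bool:
    # Hash the candidate answer and compare it to the published digest.
    return hashlib.sha3_256(candidate.encode("utf-8")).hexdigest() == COMMITMENT

print(matches_commitment("placeholder guess"))  # False until the true preimage is revealed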

Option | Probability (%)
For at least one day, the model was generally available to anyone in the United States willing to pay enough without waiting lists or "beta" programs | 98
It is discussed during a segment of "HatGPT" | 91
By default, the generated videos will be watermarked | 89
A competing model has challenged Sora's dominance in the text-to-video space | 86
A poll of Manifold users will say that 20% or less have made a Sora video in the last month | 86
OpenAI will be sued over the model | 80
It was trained on data created in a physics/game engine (e.g. Unreal Engine) | 78
A post claiming Sora video is real will go viral with > 1 million engagements | 74
It will be noticeably worse at or largely unable to generate fast-paced animation | 73
A second major version of the model has been released | 71
A video produced by the model has been used for widely spread misinformation, as reported by a major news outlet | 60
It has been referenced in a legal case about deepfakes | 58
Costs for an average 1-minute HD (or higher quality) video will be lower than $0.50 | 58
A major studio will use this in a movie or TV show | 54
It can be used during conversations with ChatGPT on the OpenAI website | 54
It prompts the Hard Fork Podcast to rant about AI model names | 51
Video will include sound | 50
A YouTube video made only with Sora will get > 100M views | 50
It will be jailbroken to make a porn video | 48
It will be pay-per-use (or credit-based) as opposed to part of a monthly subscription | 46
It'll be legally banned in at least one EU country | 37
A YouTube movie >2h, made with only Sora by splicing videos together, will get > 10M views | 37
At least 2 Manifold questions will contain a Sora-generated video in their header | 36
Sora will be part of a GPT model | 32
It will be the most popular text-to-video tool (determined by Google search trends) | 31
Public access was revoked after release, even if it is later restored | 31
There will be a new monthly subscription that includes Sora and DALL-E for creatives | 29
It has been integrated as a feature on a major social media platform | 28
It can generate videos over 10 minutes long | 26
It will be the SOTA for text-to-video | 21
It has a logo separate from the OpenAI logo | 20
It has been renamed | 18
It will be free to use | 13
A poll of Manifold users will say that 30% or more have created a Sora video in the last month | 10
The model has had a non-trivial effect on the everyday life of the average American, as judged by @Bayesian | 10
It can create a fully coherent short film from a prompt (20-40 minutes) | 9
The Sora line of models proves to be useful for purposes where the video is secondary, such as research into physics, medicine, and weather | 9
A third major version of the model has been released | 8
OpenAI will release the number of model parameters | 8
Full description of the model architecture will be public | 7
Eliezer Yudkowsky states or implies that future versions of the Sora line of models - specifically, by name - are an existential threat to civilization | 6
The Sora line of models is being used as simulators for legal investigations - including but not limited to predicting events leading to accidents and crimes | 5
A version of the model was the cause of the YES resolution of the "Weak AGI" Metaculus market | 4
OpenAI will lose a lawsuit over the model | 3
Eliezer Yudkowsky has stated or implied that the current version or an obsolete version of the model poses or had posed an existential threat to civilization | 3
A NYT-bestselling author will release their own bestselling movie/TV adaptation using Sora | 2
It was accessible to the public before May 2024 | 0

Option | Probability (%)
The company will be valued at >= $1 Billion according to a reputable news source (e.g. Forbes, Reuters, NYT) | 100
The company will be valued at >= $10 Billion according to a reputable news source (e.g. Forbes, Reuters, NYT) | 100
At least one of the founders (Ilya Sutskever, Daniel Gross, Daniel Levy) will leave the company | 100
Zvi will mention the company in a blog post | 100
Zvi will mention the company in a blog post in 2025 | 100
The company will be valued at >= $100 Million according to a reputable news source (e.g. Forbes, Reuters, NYT) | 100
The company will raise more than $1 billion of capital | 100
Ilya will remain at the company continuously until EOY 2025, or until the company is acquired/ceases to exist | 96
The official SSI X account will have more than 100k followers | 94
The majority of their compute will come from Nvidia GPUs | 85
I will believe the company should have invested more in AI Safety relative to Capabilities at EOY 2025 | 76
Ilya will discuss the company on a podcast | 58
The company will announce that their path to superintelligence involves self-play/synthetic data | 54
The company will publish an assessment of the model's dangerous capabilities (e.g. https://www.anthropic.com/news/frontier-threats-red-teaming-for-ai-safety) | 49
The company will finish a training run reported to use more than 10^24 FLOP (e.g. by Epoch AI) | 45
A majority of people believe that the company has been net-positive for the world according to a poll released at EOY 2025 | 45
Ilya will give a presentation on research done at the company | 43
The company will include at least one image on its website | 40
The company will announce that their model scores >= 85 MMLU | 39
The company will announce that their model scores >= 50 GPQA | 39
The company will invite independent researchers/orgs to do evals on their models | 39
The company will finish a training run reported to use more than 10^25 FLOP (e.g. by Epoch AI) | 39
The company will have at least 100 employees | 37
The company will announce that their path to superintelligence involves creating an automated AI researcher | 37
The company will announce research or models related to automated theorem proving (e.g. https://openai.com/index/generative-language-modeling-for-automated-theorem-proving/) | 37
The company will be on track to build ASI by 2030, according to a Manifold poll conducted at EOY 2025 | 37
I will believe at EOY 2025 that the company has significantly advanced AI capabilities | 34
The company will release a publicly available API for an AI model | 33
The company will publish a Responsible Scaling Policy or similar document (e.g. OpenAI's Preparedness Framework) | 33
The company will publish research related specifically to Sparse Autoencoders | 31
The official SSI X account will have more than 200k followers | 31
I will meet an employee of the company in person (currently true for OAI, Anthropic, xAI but not DeepMind) | 29
The company will sell any products or services before EOY 2025 | 28
The company will release a new AI or AI safety benchmark (e.g. MMLU, GPQA) | 25
The company will announce that they are on track to develop superintelligence by EOY 2030 or earlier | 25
The company will publish research which involves collaboration with at least 5 members of another leading AI lab (e.g. OAI, GDM, Anthropic, xAI) | 25
The company will have a group of more than 10 people working on Mechanistic Interpretability | 24
The company will release a chatbot or any other AI system which accepts text input | 24
The company will release a model scoring >= 1300 Elo on the Chatbot Arena leaderboard | 22
The company will finish a training run reported to use more than 10^26 FLOP (e.g. by Epoch AI) | 22
The company will open offices outside of the US and Israel | 21
I will believe at EOY 2025 that the company has made significant progress in AI Alignment | 21
I'll work there (@mr_mino) | 19
The company will announce a commitment to spend at least 20% of their compute on AI Safety/Alignment | 18
The company will be listed as a "Frontier Lab" on https://ailabwatch.org/companies/ | 18
The company will be involved in a lawsuit | 18
It will be reported that Nvidia is an investor in the company | 18
The company's model weights will be leaked/stolen | 17
I will believe at EOY 2025 that the company has built a fully automated AI researcher | 16
The company will make a GAN | 16
The company will announce that their path to superintelligence involves continuous chain of thought | 15
It's reported that the company's model scores >= 90 on the ARC-AGI challenge (public or private version) | 13
The company will open-source its model weights or training algorithms | 13
It will be reported that a model produced by the company will self-exfiltrate, or attempt to do so | 13
The official SSI X account will have more than 1M followers | 13
The company will be valued at >= $100 Billion according to a reputable news source (e.g. Forbes, Reuters, NYT) | 12
The phrase "Feel the AGI" or "Feel the ASI" will be published somewhere on the company website | 12
The company will be reported to purchase at least $1 Billion in AI hardware, including cloud resources | 11
Leopold Aschenbrenner will join the company | 10
The company will advocate for an AI scaling pause or will endorse such a proposal (e.g. https://futureoflife.org/open-letter/pause-giant-ai-experiments/) | 10
The company will have a public contract with the US government to develop some technology | 10
The company will publish research related to Singular Learning Theory | 10
Major algorithmic secrets (e.g. architecture, training methods) will be leaked/stolen | 9
The company will publish research related to Neural Turing Machines | 9
The company's AI will be involved in an accident which causes at least $10 million in damages | 9
The company will release a model scoring in the top 3 of the Chatbot Arena leaderboard | 7
The company will publish a research paper written entirely by their AI system | 7
The company will release a video-generation demo made by their AI system | 7
I will believe at EOY 2025 the company has made significant advances in robotics or manufacturing | 7
Their model will be able to play Chess, Shogi, or Go at least as well as the best human players | 7
There will be a public protest or boycott directed against the company with more than 100 members | 7
The company will be closer to building ASI than any other AI lab at EOY 2025, as judged by a Manifold poll | 7
The company's model will independently solve an open mathematical conjecture created before 2024 | 7
The company will publish a peer-reviewed paper with more than 1000 citations | 6
The company will be acquired by another company | 6
Elon Musk will be an investor of the company | 6
The company will release a model that reaches the #1 rank in the Chatbot Arena (including sharing the #1 rank with other models when their confidence intervals overlap) | 6
The company will release an app available on iPhone or Android | 6
The company will change its name | 6
The company will be merged with or acquired by another company | 6
The company will announce that they have created Superintelligence | 5
The company will finish a training run reported to use more than 10^28 FLOP (e.g. by Epoch AI) | 5
It will be reported that Sam Altman is an investor in the company | 5
The company will build their own AI chips | 4
Their model will be the first to get a gold medal or equivalent in the IMO (International Mathematical Olympiad) | 4
The company will finish a training run reported to use more than 10^29 FLOP (e.g. by Epoch AI) | 4
The company will be reported to build a data center with a peak power consumption of >= 1 GW | 4
The company will publish at least 5 papers in peer-reviewed journals | 4
The company will declare bankruptcy | 3
The company will be reported to acquire an aluminum manufacturing plant for its long-term power contract | 3
The company will be publicly traded | 3
The company will finish a training run reported to use more than 10^27 FLOP (e.g. by Epoch AI) | 3
The company will finish a training run reported to use more than 10^30 FLOP (e.g. by Epoch AI) | 3
I'll work there (@AndrewG) | 2
The company will be reported to build a data center with a peak power consumption of >= 10 GW | 2
The company will be reported to build a data center with a peak power consumption of >= 100 GW | 2
The company will be valued at >= $1 Trillion according to a reputable news source (e.g. Forbes, Reuters, NYT) | 1
The company will be valued at >= $10 Trillion according to a reputable news source (e.g. Forbes, Reuters, NYT) | 1

Option | Probability (%)
The case is fully resolved somehow, including all appeals, by the end of 2026. | 81
The case is fully resolved somehow, including all appeals, by the end of 2025. | 67
Disney sues OpenAI for copyright infringement before the NYT case concludes. | 59
NYT and OpenAI announce a partnership that goes beyond access to training data, such as the one OpenAI made with Politico. | 54
The case settles for more than $10 million before a verdict is reached. | 53
The case settles before a verdict is reached. | 44
New York Times wins at least $1 in damages via a verdict. | 40
The case is fully resolved somehow, including all appeals, by the end of 2024. | 34
The case appears before the Supreme Court. | 28
Nintendo sues OpenAI for copyright infringement before the NYT case concludes. | 27
OpenAI wins via an outright verdict in their favor. | 17
The case settles for more than $100 million before a verdict is reached. | 14
New York Times wins at least $10 million in damages via a verdict. | 13
New York Times wins at least $100 million in damages via a verdict. | 12
The Supreme Court rules in favor of The New York Times and upholds damages and compensation with a net present value of $10 million or more, but does not order GPT-4 deleted. | 9
New York Times wins at least $1 billion in damages via a verdict. | 5
New York Times wins a verdict ordering GPT-4 to be deleted. | 4
The Supreme Court orders GPT-4 to be deleted, or OpenAI otherwise agrees to delete GPT-4 on the basis of this case. | 4
New York Times wins a verdict ordering GPT-4 to be deleted from the Supreme Court. | 3

Option | Probability (%)
trump is impeached by either house OR senate | 98
Bitcoin reaches 200K usd or more | 94
a second cybertruck explodes (intended or unintended) that makes the news | 86
Tom Scott's 'this video' reaches 80M views on Youtube | 86
this market reaches 100 traders | 86
undersea cables reported cut around taiwan | 83
EOD Boxing Day - Dec 26 | 81
this market reaches 5k individual TRADES | 75
English Wikipedia reaches 70M PAGES or more | 74
Saw XI Releases in USA | 72
alan greenspan passes away | 72
coup in an african country | 72
noam chomsky passes away | 71
chinese spy balloon incident reported on news | 69
openai loses another board member, or sam altman no longer ceo | 69
another trump assassination attempt | 69
discord IPO happens | 69
2025 nobel peace prize winner announced | 67
Hades 3 announced (game) | 66
EOD Thanksgiving - Nov 28 | 65
Last game of the MLB World Series ends | 65
israel opens an embassy in syria, OR announces it will | 64
the "500 poll" reaches its target goal of 500 responses | 63
Taylor Swift announces engagement or marriage | 63
zootopia 2 releases | 63
Spacex launches 150th rocket of the year | 63
EOD Halloween - Oct 31 | 61
First game of the MLB World Series starts | 61
manifold raises more money | 60
EOD Leif Erikson Day - Oct 9 | 59
Ark Survival Evolved 2 releases | 59
MLB rookie of the year announced | 59
Tom Scott's 'this video' reaches 75M views on Youtube | 58
Bitcoin reaches 150K usd or more | 57
stripe ipo happens | 56
Legally Blonde 3 release date announced | 55
GenoSamuel releases Chris Chan History #86 | 55
Twitter releases a Peer to Peer payment system to free or premium users | 55
Cy Young award winner announced | 54
Third dune movie officially announced | 54
trump removes a cabinet member | 53
windows 12 announcement is made | 53
the third Atlantic hurricane of the season | 52
Skate 4 releases | 51
someone reaches 100k traders on creator leaderboard | 51
onepieceexplained reaches 15k subs on youtube | 51
Skibidi Toilet ends their original series | 50
trump starts mass deportations | 50
Imu face reveal in One Piece manga | 50
Bitcoin reaches 125K usd or more | 50
Prong.Studio releases a 3rd product (not an accessory or part for an existing one) | 50
Spacex launches 100th rocket in one year | 50
ChatGPT 5 releases to the general userbase | 49
the second Atlantic hurricane of the season | 49
Sailing releases as a skill in Old School Runescape | 46
the first Atlantic hurricane of the season | 46
Spider-Man: Beyond the Spider-Verse release date announced | 45
Earthquake magnitude 8.0 or higher somewhere in the world | 44
Earthquake magnitude 7.8 or higher somewhere in the world | 44
Killing Floor 3 releases on Steam | 43
grok 4 release date | 42
the start of Amazon Prime Day(s) 2025 | 41
EOD Fourth of July - Jul 4 | 40
the third Pacific hurricane of the season | 39
28 years Later releases in USA | 38
the second Pacific hurricane of the season | 37
chime IPO happens | 36
Spacex launches 75th rocket of the year | 35
First Apple Event of the year | 34
the first Pacific hurricane of the season | 33
manifest 2025 ends | 32
Manifest 2025 starts | 31
Mr Beast hits 400M Youtube Subscribers | 30
Bitcoin reaches 110K usd or more | 29
English Wikipedia reaches 7M ARTICLES or more | 28
claude 4 sonnet releases (or later version) | 27
EOD Cinco De Mayo - May 5 | 26
Spacex launches 50th rocket of the year | 25
Last day of the NFL draft | 24
Llama 4 released to the general userbase | 23
Joseph Anderson releases long awaited Witcher 3 video | 22
south korean president removed from power | 21
the first Solar eclipse of the year | 20
First Nintendo direct of the year | 19
trump declares war or orders military actions on another country | 18
Ukraine and Russia announce any ceasefire | 17
EOD Ides of March - Mar 15 | 16
the first Lunar eclipse of the year | 15
EOD Fat Tuesday/Mardi Gras | 14
trump enacts new or changed tariffs on mexico | 13
new iphone releases in the USA (official date) | 12
Spacex launches 25th rocket of the year | 11
new iphone release date announced (in the USA) | 10
grok 3 release date | 9
nintendo switch successor announced officially | 8
trump enacts new or changed tariffs on china | 7
CGP Grey releases a new video (not a reupload) | 6
doomsday clock announcement | 5
USA President issues 10th executive order | 4
USA President issues 1st executive order | 3
Israel and Hamas announce another temporary ceasefire OR permanent ceasefire OR conflict otherwise ends | 2
this market reaches 1k individual TRADES | 1

Option | Probability (%)
Make the bread taste good | 92
Don't eat anything for at least 48 hours before eating the bread | 90
Stretch-and-fold after mixing, 3x every 30 min | 88
Create indentation, fill with melted cheese and butter | 85
Bake on upside-down sheet pan, covered with Dutch oven | 85
Resolve this option YES while eating the bread | 80
Donate the bread to a food pantry, homeless person, or someone else in need | 72
Use sourdough instead of yeast | 70
Sprinkle 3 grams of flaky sea salt on top of each loaf before the second bake | 70
Watch the video | 67
Autolyse 20 minutes | 66
3 iterations of stretch-and-fold, at any time during the 14h waiting period. Minimum wait time between iterations: 1 hour | 65
Make a poolish 12 h ahead: 100 g flour + 100 g water + 0.8 g yeast (0.1 %). After it ferments, use this poolish in place of 100 g flour and 100 g water in the final dough. | 64
Bake it with your best friend. | 63
Add 50g honey | 62
Swap 200ml water for milk | 62
Incorporate a whole grain flour (buckwheat for example) | 59
More steam! Either spritz with more water (preferably hot) or actually pour some boiling water in just before closing the lid. | 58
Bake for an amount of minutes equal to the percent this market answer is at when it comes time to begin baking. (Maintain the ±3 minute tolerances and the 2:1 ratio of time before:after the water spritz.) | 52
Use King Arthur Bread Flour instead of All-Purpose | 52
Decompose it into infinite spheres, then a few parts per sphere, rotate the spheres by arccos(1/3), unite them and you will find 2 chilis (Banach-Tarski) | 52
Let dough rise on counter only until double volume or 2h max, any time longer in fridge | 51
Add lots of butter (0.2 ml per gram) | 51
Use 50% whole grain flour | 51
Ditch current process, do everything the same as the video | 50
Toast the bread | 50
Eat the bread while punching @realDonaldTrump in the face | 50
Eat the bread while watching your mana balance steadily tick to (M)0 | 50
Throw the bread at a telescope | 50
Add 50g sugar | 50
Put a baking rack in the Dutch oven before putting the loaf in, raising the loaf off the floor and lofting it over a layer of air. | 50
Replace all water spritz steps with a basting of extra virgin olive oil. | 50
Use flour made from an unconventional grain e.g. barley, millet, oats, rye, sorghum, maize etc. | 50
Assume the chili is not in the interval [0,1]; square it for more chili. If it is in (0,1), take the square root; else (equals 0 or 1) add 1 to it. | 50
Assume the chili is in the interval (0,1); square it for less chili. If it is in (1,infinity), take the square root; if it is in (-infinity,0), take the negative of the square of the chili; else (equals 0 or 1) subtract 1 from it. | 50
Get your friends to help you make a batch ten times the size, but add a Pepper X (2.7M Scoville heat units) to the mixture | 50
Add 1 tsp of diastatic malt powder per 3 cups of flour | 48
replace 10% of flour with farina bona | 47
Bake the bread into a fun shape, like a fish, or an octagon | 47
While the bread is baking, tip every user who voted "Yes" on this option 25 Mana | 46
Add 50g vital wheat gluten | 42
Give ChatGPT your current recipe as well as your take on what optimal bread tastes like, then take that advice for your next bake | 42
Bread flour, 3x yeast, cut rise to ~3h | 41
Use whole wheat to improve the nutrition of the bread | 41
Add an amount of MSG equivalent to half the current salt content | 40
Place small ice cubes between parchment and pot instead of water | 38
Cook the bread with a rod/puck of aluminum foil (or similar) in the core in an attempt to conduct heat through the center of the bread, cooking it evenly like a doughnut. | 37
Make all of the ingredients from scratch. | 35
Add a pinch of sugar | 34
Make the bread edible then throw it in | 34
Buy bread from a Michelin-star restaurant. | 34
Increase water by 50 g | 34
Drink vodka while eating the bread | 34
Cover bread with damp paper towel instead of initial water spritz. Rehydrate paper towel during 2nd spritz. Remove paper towel before placing on cooling rack. | 34
Do FOLDED | 34
Quit Manifold into the bread. | 34
Kill the bread into Manifold. | 34
Improve the bread | 33
Start at 500F, drop to 450F and uncover halfway through | 32
Grind/powderize all salt used into a fine powder (with pestle & mortar or similar device) | 31
it needs more salt | 31
Add 1/2 cup yogurt to the bread and name the bread “gurt” while addressing it with “yo, gurt”. | 28
Half yeast | 27
Ship a piece of the bread to a random person. | 26
Encourage people to participate in the market in good faith while making the bread | 26
Add 2g? of baking soda | 24
Let dough sit 48 hrs | 24
Resolve this option NO while eating the bread | 24
put butter into it | 23
Mix half sodium/potassium chloride | 22
Add a tablespoon of sugar | 20
Bake for 5 fewer minutes | 20
Bake one more minute | 20
Mail the bread to 1600 Pennsylvania Ave., Washington D.C. | 19
Use tap water instead of fancy RO water | 18
Frost it and put sprinkles on it to make it a birthday cake. | 18
Add sawdust to increase the volume of the bread (but only like 10% sawdust by volume max; maybe 20% if it's good sawdust) | 17
Add as many Jack Daniel's whiskey barrel smoking chips as feasible to the Dutch oven before baking, physically separating them from the bread as necessary while baking. | 17
Eat the bread while sending all your mana to @realDonaldTrump | 17
Bake the Manifold Crane into the Bread | 16
Don't eat anything for at least 24 hours before eating the bread | 16
Quadruple salt | 15
Do all the changes in the top 5 open options by probability, excluding this option | 15
Have someone sell the bread to you at an expensive price | 14
Use lemonade instead of water. | 14
Bake one fewer minute | 14
Bake the cake while wearing a onesie. | 13
Bake Vegemite into it. | 12
Bake for 5 more minutes | 11
Replace salt with sugar | 11
Eat the bread in front of the White House. | 10
Bake vodka into it | 10
Implement all options that resolved NO | 10
Make the bread inedible then throw it out. | 10
Replace flour with flowers | 10
Throw the bread at @realDonaldTrump | 10
Force Feed it to @realDonaldTrump | 10
Make the bread great again | 9
Cut the bread into a number of slices equal to the number of traders in the market. | 9
Make naan bread, an easy-to-make bread | 8
Only buy ingredients from 7/11. | 8
Implementing every element listed below. | 8
Put in a non-lethal dose of any rat poison. | 8
Just make donuts instead | 8
Bake it in an Easy-Bake kids' oven | 7
Think positive thoughts before tasting | 6
Use a plastic baking sheet. | 6
Eat the bread while betting yes on Cuomo on Manifold | 6
Ditch all the steps. Just buy the bread from the supermarket | 6
Double oven temperature | 6
Halve oven temperature | 6
Play classical music while baking | 5
Light it on fire with birthday candles. | 5
Bake it with a microwave | 5
Eat the bread while betting yes on Mamdani on Manifold | 5
Wear a suit while baking the cake. | 4
Bake your social security number into it. | 4
Bring it to Yemen and put a bomb in it | 3
Bake America Great Again | 3
Sacrifice a lamb | 2
Add MAGA and a splash of Trump juice | 2
Bake in a cat and a dog | 2
Explode it: | 2
Take a fat dump in the dough | 1
Sit in dough 24 hrs | 1
Let dough sit 24 hrs | 0
Bake in rectangular tin | 0
double yeast | 0
halve salt | 0
Double salt | 0
Add 2tsp olive oil | 0
Refrigerate dough instead of room temp wait | 0
Do not mix salt and yeast in water together | 0
Put fork in microwave | 0
Don't eat anything for at least 12 hours before eating the bread | 0
Add 2tbsp vanilla extract | 0
Eat the bread with friends | 0
Bake it in the country you were born in. | 0
Eat the bread over the course of a week. | 0
Bake the bread with love | 0
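
A worked example of the probability-linked baking-time option above, assuming it still sits at its level in this snapshot (52%) when baking begins: the total bake is 52 minutes, split 2:1 before:after the water spritz,

\[
t_{\text{total}} = 52\ \text{min}, \qquad
t_{\text{before}} \approx \tfrac{2}{3} \cdot 52 \approx 35\ \text{min}, \qquad
t_{\text{after}} \approx \tfrac{1}{3} \cdot 52 \approx 17\ \text{min},
\]

since 35 + 17 = 52 and 35:17 ≈ 2:1, the rounding stays within the stated ±3-minute tolerance.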
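
Read literally, the "less chili" option above defines a piecewise map on the chili rating x (one possible interpretation; the option's prose is ambiguous about the boundary cases):

\[
f(x) =
\begin{cases}
x^{2} & x \in (0,1) \\
\sqrt{x} & x \in (1,\infty) \\
-x^{2} & x \in (-\infty,0) \\
x - 1 & x \in \{0,1\}
\end{cases}
\]

On (0,1), squaring indeed yields a smaller value (less chili), and on (1,∞) the square root pulls the rating back toward 1.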

Option | Probability (%)
If the X platform was used primarily as a decentralized mechanism for minting Krantz data, kids could earn a retirement before the age of 23 by getting a sovereign education independently. | 96
People should be paid for the beneficial data they produce to align AI. | 96
People should be paid for the beneficial data they produce to align the government. | 96
We can measure the variance in confidence between any two intelligent agents' finite lists of various propositions they've evaluated. | 96
Analytic philosophers have had a mechanistically interpretable process (analytic reasoning) for aligning each other for thousands of years. | 96
Analytic arguments can be contained in constitutions. | 96
If someone has a different confidence than you for a specific proposition in Krantz, you should add an argument to Krantz that compels them to update their beliefs. | 96
Getting paid for analytically reasoning within a decentralized constitution is the same thing as getting paid to align ASI. | 96
Krantz can decentrally align humanity while controlling disclosure and achieve game-theoretic parity of rational agents. | 96
The term "krantz" refers to the open market of all well-formed propositions that we compete to assign confidence and importance parameters to in order to provide a basis for alignment. | 96
Krantz is an abstract living idea that can be communicated with. | 96
This is a convenient place to deny the argument from Krantz - https://manifold.markets/Krantz/which-proposition-will-be-denied?r=S3JhbnR6 | 96
If 'Krantz' as an idea is more disruptive than Bitcoin was, then all the batshit crazy predictions @krantz has made make more sense. | 96
There's more to life than AI alignment | 96
A generally expansive and complete map of the logical connective structure of all knowable reasoning is critical to interpretable alignment. | 96
Krantz is aimed at rapidly scaling a generally expansive and complete map of the logical connective structure of all knowable reasoning. | 96
Danny Sheehan deserves a Nobel Peace Prize. | 95
A solution to defining Wittgenstein's perfect language is the instrumental mechanism that is sufficiently capable of defining truth in a way that interpretably aligns AI decentrally. | 95
This is printing money. | 95
The founding fathers understood how to align artificial superintelligence (by allowing individuals to vote on a decentralized constitution). | 95
We should krantz (the process of evaluating Krantz). | 95
We should know whether we are aligned with each other. | 95
Aligning Krantz (the constitution of all propositions) is the last job humans need to do. | 95
The process of "fairly aligning AI" is the same process as "fairly voting on a consistent language". | 95
If we build mechanistically uninterpretable superintelligence, everybody dies. | 95
I consent to having my opinions, which have been verified by me within krantz, used in natural law petitions to advocate on my behalf. | 95
If everyone could prove what they want government to do, we wouldn't need a government. | 95
What it means for "humanity to be aligned" is what it means for "all of humanity to agree to the confidence of every proposition they have ever thought of". | 94
For any degree N that you want an AI to be aligned, there exists K, an amount of Krantz data, that can interpretably achieve that alignment. | 94
If we communicated with each other like analytic philosophers instead of continental ones, it would be obvious how 8 billion people should go about aligning artificial superintelligence. | 93
The primary economic mechanisms in the world should be aimed at determining whether propositions are true. | 91
If we recorded every proposition on a blockchained ledger that we allowed everyone to express their confidence on, we would all communicate orders of magnitude better and solve all the problems in a transparent public domain. | 91
Wittgenstein's compatibilism is correct, and every philosophy/religion is an accurate expression of truth (natural law) in a unique imperfect language. | 91
All other jobs (given adequate robotic infrastructure) can be done by an agent performing the subtask of evaluating propositions. | 90
The "particles" in the standard model are actually just abstract points in flat space that represent modular series of events. | 90
The standard model is a finite subset of the infinite set of modular events that exist. | 90
We should all be aligned with each other. | 90
Krantz is a culture-of-language movement. | 90
We can stop further dangerous scaling of ML-based AI if Eliezer Yudkowsky listens to Krantz. | 90
Aliens are real. | 89
If someone allows ASI to be grown, they are either not in control of the planet or incompetent. | 89
A congressman's primary job is to survey his constituency on what actions he should take. | 88
It will be a philosopher of language that aligns AI. | 87
Aliens being real is more important for the public to know than the existential risk of ASI. | 86
If competent people are in charge, they will not allow ASI to be grown. | 82
The X platform could easily be converted into a decentralized school that secures a job and means for competition for everyone in a post-labor economy. | 81
It's worth paying people to vote on these because it's really helpful to see everyone's opinion. | 81
If Krantz is a man AND all men are logical, THEN Krantz is logical. | 80
Artificial superintelligence is a paraphrase of a society effectively communicating via the krantz mechanism. | 80
An ideal economy directly rewards valuable contributions and verification of a decentralized ledger of record that everyone can access and work for. | 80
If you can think of important facts that should be evaluated, you should put them here. | 80
Our social contract should retroactively reward contributions to the public domain. | 80
The rules that define the operation of this function can be defined within the function. | 80
This system should allow the construction of arguments where each proposition is a link to another proposition on the list. | 80
The X platform should be an open feed of propositions like this such that any humanity-verified person can earn credit for defining a confidence and importance. | 80
Natural law entitles humans the right to define propositions like this on a decentralized ledger such that if they are important to society, then society will have the means to reward that declaration. | 80
If there were a decentralized constitution (like this) that every human could freely and securely add propositions to and vote their confidence and importance on, then government, corporations and money would be obsolete. | 80
Providing input to this function (at scale) is the only job we need to maintain autonomy from superintelligence. | 80
We ought build a school that fairly pays citizens to learn how to be good citizens AND this is a function that does that, THEREFORE we ought build this. | 80
This is what the founding fathers wanted (a collective state controlled by a constitution that everyone can vote on instead of representatives that make decisions for us). | 80
We have the technology to allow every citizen to directly vote on the constitution. | 80
These propositions ought have primary keys that can be referenced in logical expressions. | 80
The distinction between 'growing' ASI (using trillions of dollars of GPUs and oil) and 'training' ASI (using krantz collective reasoning) is important. | 80
Krantz data is money. | 80
This is what the X feed ought look like (a feed of propositions that we can earn money for evaluating), because that would allow us to communicate more effectively as a society. | 80
We could be printing our own money by communicating well. | 80
If a decentralized interpretable superintelligence paid individuals to answer true/false questions that help it align the truth, it could use that truth to control the world. | 80
What it means for two intelligent agents to "be aligned" is what it means for two intelligent agents to "have zero variance between their confidences of every proposition they have ever thought of". | 80
The max anyone should wager on a given proposition is 100 because your wager is intended to represent your confidence. | 79
The Universe is infinite, continuous, and filled with infinite consciousness. | 79
The purpose of life is to communicate. | 79
Evil is a specific form of communication (primitive). | 79
Humans ought focus on mining krantz data instead of coprimes. | 79
If we can prove people will not do bad things in the future, there is no reason to punish them for bad things they have done in the past. | 79
The only justified fear is the fear of ignorance (partial knowledge). | 79
Intellectual full-spectrum dominance is the most noble aim. | 79
If we had a tremendous amount of krantz data, we could use a simple interpretable GOFAI algorithm to determine the most beneficial proposition a given user ought evaluate next (based on the variance of their ontology with society). | 78
You can map full strings of complex arguments (like the entirety of Fermat's Last Theorem) on a system of this nature. | 77
Intelligent agents evolve through 4 specific forms of peer control (communication): first is physical, second is reputational, third is emotional, and fourth is rational. | 76
There is a hierarchy of communication: (4, lowest) physical, (3) reputation, (2) emotional, (1, highest) rational. | 76
The reason we punish people for doing bad things is to prevent them from doing bad things in the future. | 76
The speed limit of light is a property intrinsic to the particles in the standard model and doesn't apply to non-standard particles. | 76
Society Library has the most generally expansive and complete map of the logical connective structure of all knowable reasoning. | 76
Manifold should consider these changes. | 75
The ultimate moral good is to communicate. | 73
If we simply allowed every real person to securely evaluate every interpretable fact and treated that data as money, all other problems could be solved instrumentally using that process. | 73
We can prove what we want government (or a superintelligence) to do. | 73
The bitcoin community should buy X from Elon and convert it into a decentralized school that gives people abstract points for doing philosophy. | 72
ASI would not kill everyone if we actually trained it. | 72
Money only has value if other people understand why it has value. | 72
CYCCORP has the most generally expansive and complete map of the logical connective structure of all knowable reasoning. | 66
The message of krantz is being suppressed because it is not understood properly. | 61
If our intellectual labor is not fairly rewarded, we are not truly free. | 61
Aligning AI is an infinite task (it can't be achieved, only approximated). | 60
Open immigration should be allowed into the US. | 50
The Birch and Swinnerton-Dyer conjecture is true. | 50
The Hodge conjecture is true. | 50
In three space dimensions and time, given an initial velocity field, there exists a vector velocity and a scalar pressure field, which are both smooth and globally defined, that solve the Navier–Stokes equations. | 50
The Riemann hypothesis is true. | 50
P = NP | 50
Yang–Mills theory exists and satisfies the standard of rigor that characterizes contemporary mathematical physics, in particular constructive quantum field theory. | 50
The masses of all particles of the force field predicted by the Yang–Mills theory are strictly positive. | 50
If we all share the same confidence for every proposition, we are all aligned with each other. | 50
Wittgenstein's perfect language meets the worthy successor criteria. | 50
My congressman ought be responsible for acknowledging my expressed opinions and acting in accordance with them. | 50
Induction is not justified. (Hume's problem) | 50
Nature is uniform. (principle of uniformity in nature) | 50
Aliens are real and we are in a simulation to learn how to communicate. | 47
We can measure whether two intelligent agents are aligned. | 41
ASI would kill everyone if we actually grew it. | 40
Abortions are ethically bad. | 34
Abortions should be illegal. | 30
P(doom) is less than 0.1. | 28
Our information economy allows poor people to insert important ideas into the public domain such that others will find them if they ought to. | 22
The electron is a point particle. | 20
The Krantz mechanism cannot map this premise. | 4

Option | Probability (%)
Has eye tracking | 95
Has external battery | 87
Has Mac Virtual Display functionality | 84
Comes with only one band, which has a part that goes over the top of your head. | 72
Has built-in speakers for audio | 72
Will have a glass front | 66
Sells more than 1 million units in its first year | 50
Priced under $2,000 | 50
Has on-device LLM (e.g. through Siri or standalone) integrated | 50
Has "Air" in its name | 49
Primarily made of aluminum | 48
Has an external display (like EyeSight) | 38
Is released under another CEO than Tim Cook | 19
Priced under $1000 | 18
Will require an iPhone to operate (relies on that for power, compute or storage). | 13
Priced lower than $199 | 7
Priced lower than $299 | 5
In consumer hands (anywhere) before April 1st 2025 | 4

Option | Probability (%)
None of the options submitted before the One Piece is revealed | 60
Binks' Sake | 28
A Poneglyph | 4
Other | 4
A single piece of currency (e.g. one coin) | 2
An ancient tool which might not technically be a weapon. | 1
The friends they made along the way | 0
A giant pile of gold and jewels | 0
An ancient weapon | 0
Pineapples | 0
A text, not written on a poneglyph | 0

Option | Probability (%)
Carbon-based biochemistry | 35
Amino acids and proteins | 28
Cells, but with a noticeably different structure | 20
Other | 11
ATGC DNA | 5
Mostly not made of baryonic matter | 1

Option | Probability (%)
No decision made in 2026 conference | 38
Second defined by a (dynamic) weighted sum of atomic transitions | 12
Second defined by a (fixed) weighted sum of atomic transitions | 7
Second defined by 87Sr 698nm transition | 6
Decision made to not redefine the second | 6
Second defined by fixing the Rydberg constant | 5
Second defined by 171Yb 578nm transition | 3
Second defined by 199Hg 265nm transition | 2
Second defined by 27Al+ 267nm transition | 2
Second defined by 199Hg+ 282nm transition | 2
Second defined by 171Yb+ (E2) 436nm transition | 2
Second defined by 171Yb+ (E3) 467nm transition | 2
Second defined by 88Sr+ 674nm transition | 2
Second defined by 88Sr 698nm transition | 2
Second defined by 40Ca+ 729nm transition | 2
Second defined by 87Rb 6.8GHz microwave transition | 2
Other | 2

Option | Probability (%)
Juror names are not public record | 90
The defense calls fewer witnesses than the prosecution | 83
Jury sequestered for entire trial | 72
Luigi Mangione changes counsel during trial | 55
Court TV offers "Gavel-to-Gavel coverage" | 50
Law & Crime offers "Gavel-to-Gavel coverage" | 50
Alternate juror is seated as a member of the Jury | 35
Trial doesn't start until 2026 | 31
The case will be heard as a bench trial | 31
Brady evidence is withheld | 31
AUSA sanctioned | 31
Defense counsel sanctioned | 31
A motion for recusal is made | 25
Trial is televised | 20
Defendant held in contempt | 20
Jury is sequestered just during deliberations | 18