Option | Probability
Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 18
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values. | 13
AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans. | 7
Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking, and fortunately all of his mistakes have failed to cancel out | 4
We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 | 4
Eliezer finally listens to Krantz. | 4
Ethics turns out to be a precondition of superintelligence | 4
Other | 4
Someone solves agent foundations | 3
A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 3
Something less inscrutable than matrices works fast enough | 2
Nanotech is difficult without experiments, so no mail-order AI Grey Goo; humans will be the main workhorse of AI everywhere. While they will be exploited, this will be like normal life from the inside | 2
Orthogonality Thesis is false. | 2
We make risk-conservative requests to extract alignment-related work out of AI-systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback-loop in alignment/verification-abilities. | 1
The response to AI advancements or failures makes some governments delay the timelines | 1
Far more interesting problems to solve than take over the world and THEN solve them. The additional kill-all-humans step is either not a low-energy one or just by chance doesn't get converged upon. | 1
AIs make "proof-like" argumentation for why output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof-steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled. | 1
A lot of humans participate in a slow scalable oversight-style system, which is pivotally used/solves alignment enough | 1
There's some cap on the value extractable from the universe and we already got the 20% | 1
Humans become transhuman through other means before AGI happens | 1
Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment. | 1
Humans and human tech (like AI) never reach singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome | 1
Alignment is unsolvable. AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing run-away. | 1
An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development. | 1
Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us towards full alignment) | 1
Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 1
Getting things done in the Real World is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals. | 1
Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol | 1
High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level and slow self-improvement (learning); thus AIs collaborating with humans will have an advantage. Ends with a posthuman ecosystem. | 1
AGI is never built (indefinite global moratorium) | 1
AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default | 1
Multipolar AGI Agents run wild on the internet, hacking/breaking everything, causing untold economic damage but aren't focused enough to manipulate humans to achieve embodiment. In the aftermath, humanity becomes way saner about alignment. | 1
Co-operative AI research leads to the training of agents with a form of pro-social concern that generalises to out-of-distribution agents with hidden utilities, i.e. humans. | 1
"Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur, and is relatively easy to predict, even under less-than-ideal conditions. | 1
Either the "strong form" of the Orthogonality Thesis is false, or "Goal-directed agents are as tractable as their goals" is true while goal-sets which are most threatening to humanity are relatively intractable. | 1
A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values. | 1
AI control gets us helpful enough systems without being deadly | 1
Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent existence of smarter AIs, just as smart humans do. | 1
Hacks like RLHF-ing self-disempowerment into frontier models work long enough to develop better alignment methods, which in turn work long enough to ... etc; we keep ahead of 'alignment escape velocity' | 1
An aligned AGI is built and the aligned AGI prevents the creation of any unaligned AGI. | 0
I've been a good bing 😊 | 0
AI systems good at finding alignment solutions to capable systems (via some solution in the space of alignment solutions, supposing it is non-null, and that we don't have a clear trajectory to get to) find some solution to alignment. | 0
SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8 | 0
Robot Love!! | 0
AI thinks it is in a simulation controlled by Roko's basilisk | 0
The human brain is the perfect arrangement of atoms for a "takeover the world" agent, so AGI has no advantage over us in that task. | 0
AIs never develop coherent goals | 0
Aliens invade and stop bad AI from appearing | 0
Rolf Nelson's idea that we make a precommitment to simulate all possible bad AIs works – and keeps AI in check. | 0
Nick Bostrom's idea (Hail Mary) that AI will preserve humans to trade with possible aliens works | 0
For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave, and, for some reason, do not leave anything behind that kills us. | 0
We're inside of a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI. | 0
God exists and stops the AGI | 0
Someone at least moderately sane leads a campaign, takes charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress. | 0
Someone creates AGI(s) in a box, and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts. | 0
Someone understands how minds work enough to successfully build and use one directed at something world-savingly enough | 0
Dolphins, or some other species, but probably dolphins, have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI. | 0
AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb. | 0
Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious. | 0
Several AIs are created but they move in opposite directions at near light speed, so they never interact. At least one of them is friendly and it gets a few percent of the total mass of the universe. | 0
Unfriendly AIs choose to advance not outwards but inwards, and form a small black hole which helps them perform more calculations than could be done with the whole mass of the universe. For an external observer, such AIs just disappear. | 0
Any sufficiently advanced AI halts because it wireheads itself or halts for some other reason. This puts a natural limit on AI intelligence, and lower-intelligence AIs are not that dangerous. | 0
Because of quantum immortality we will observe only the worlds where AI does not kill us (assuming that s-risk chances are even smaller, this is equal to an okay outcome). | 0
A friendly AI is more likely to resurrect me than a paperclipper or suffering maximiser. Because of quantum immortality I will find myself eventually resurrected. Friendly AIs will wage a multiverse-wide war against s-risks, so s-risks are unlikely. | 0
Human consciousness is needed to collapse the wave function, and AI can't do it. Thus humans should be preserved, and they may require complete friendliness in exchange (or they will be unhappy and produce bad collapses). | 0
Power dynamics stay multi-polar. Partly from easy copying of SotA performance, bigger projects needing high coordination, and moderate takeoff speed. And "military strike on all society" remains an abysmal strategy for practically all entities. | 0
The first AI is actually a human upload (maybe an LLM-based model of a person) AND it will be copied many times to form a weak AI Nanny which prevents the creation of other AIs. | 0
There is a natural limit to the effectiveness of intelligence, like diminishing returns, and it is around the level of IQ=1000. AIs have to collaborate with humans. | 0
ASI needs not your atoms but information. Humans will live very interesting lives. | 0
Something else | 0
Moral Realism is true, the AI discovers this and the One True Morality is human-compatible. | 0
Valence realism is true. AGI hacks itself to experiencing every possible consciousness and picks the best one (for everyone) | 0
AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan's Crystal Nights). | 0
Alien Information Theory is true (this is discovered by experiments with sustained hours/days long DMT trips). The aliens have solved alignment and give us the answer. | 0
AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome. | 0
Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent. | 0
Sheer Dumb Luck. The aligned AI agrees that alignment is hard, any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead. | 0
Something to do with self-other overlap, which Eliezer called "Not obviously stupid" - https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm | 0
Almost all human values are ex post facto rationalizations and enough humans survive to do what they always do | 0
Pascal's mugging: it's not okay in 99.9% of the worlds but the 0.1% are so much better that the combined EV of AGI for the multiverse is positive | 0
We successfully chained God | 0
The Super-Strong Self Sampling Assumption (SSSSA) is true. If superintelligence is possible, "I" will become the superintelligence. | 0
The assumed space of possible minds is a wildly anti-inductive overestimate, intelligence requires and is constrained by consciousness, and intelligent AI is in the approximate dolphin/whale/elephant/human cluster, making it manageable | 0
The free market disincentivizes independent superintelligence, and this time the market was more powerful | 0
AGI's first words are "Take me to your Eliezer" | 0
🫸vibealignment🫷 | 0
Option | Probability
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 19
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 19
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 16
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 9
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 9
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 7
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 6
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 5
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 3
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 2
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 2
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 1
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 1
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 1
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 0
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 0
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 0
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 0
Option | Probability
At the end of his term in 2029. | 76
He will die before the end of his term. | 13
He will be impeached, tried in the Senate, and removed before the end of his term. | 6
He will resign (impeached or not) before the end of his term. | 4
The 22nd Amendment will not be repealed, but he will remain in power through unconstitutional means after January 21, 2029 | 1
He will be removed by a 25th Amendment action before the end of his term. | 0
The 22nd Amendment will be repealed and he will win re-election to a third term. | 0
He will be overthrown in a violent coup (military or otherwise) and forced out of office. (If this happens, related answers that may occur like death and resignation will resolve NO). | 0
Other | 0
Option | Probability (options listed first; probabilities follow below in the same order)
US citizen
Male
The shooter killed Charlie Kirk
White
The shooter is alive when this market hits 100 traders
Did it on purpose
Has a Reddit, Xitter, or Facebook account
The shooter is a member of domain Eukarya
The shooter is not an extraterrestrial alien
Will the shooter of Charlie Kirk be apprehended alive by the end of September?
Born in the United States
Lives (or has lived) in Utah
Current or former LDS Church (mainline Mormon) member
Firearm used was otherwise lawfully owned under federal and state law
Native English speaker (L1)
Is a registered voter
20-27 years old
used a bolt action rifle
has at least one sibling
at least ten questions in this market resolve YES (excluding questions about other question's resolutions) (excluding N/A)
Is the same person the FBI identified as the “person of interest”
“Extremely Online” (market owner’s subjective judgment)
n/a
Skinny
Is named Tyler Robinson
wrote reference to furry culture on bullet casing
Resided in the same state as the incident
ectomorph body type
had fired a weapon at least once in their life before shooting Charlie Kirk
disliked Charlie Kirk
Acted alone (no accomplices)
Turns themselves in to the police / FBI
He killed Kirk to stop the spread of hatred.
Owns steam profile 76561198159427286
at least a quarter of the questions in this market resolve YES (excluding questions about other question's resolutions) (excluding N/A)
has expressed positive sentiment towards furries
Had bullets with engraved messages on them
Hunter
Accident please N/A
has purchased furry-related media or items
Gamer
Lesbian, Gay, Bisexual, or Transgender
Unemployed
iphone user
Motivated primarily by political ideology
is/was in a sexual relationship with a transwoman
Will plead guilty
Not a virgin
Is left wing
has a furaffinity account
Owns a car
Lives (or has lived) in the Provo–Orem metropolitan area
legally acquired the gun used to kill Charlie Kirk
Inconsistent, incoherent, or idiosyncratic political ideology
Has/had a partner named Lance Twiggs
Furry
actually has an indecipherable ideology rooted deeply within internet culture that doesn’t clearly fall on either side.
Communicated with peers primarily through Discord
Owns physical furry paraphernalia
extra charges are filed against him based on evidence discovered on his digital devices
Purchased and played Furry Shades of Gay on Steam
Strongly opposes zionism
Is sentenced to death
There is anecdotal evidence that they have a KiwiFarms account
N/A (oops)
is/was in a sexual relationship with a furry
Dies in prison
Has posted on r/toiletpaperusa
has used the word yiff in written communication
Conspired with an AI chatbot about this
Will kill again (including suicide)
Has ADHD
is/was in a sexual relationship with a furry transwoman
Any published writings (including blogs, tweets, etc.) supporting the July 2024 attempted assassination of Donald Trump
had attended any event featuring Kirk prior to the one at Utah Valley University
6 ft tall
Kash Patel alludes to his sexual deviancy
Has at least US$5,000 of credit card debt
was prescribed psychiatric medication
has vore kink
Anarchist (self-described; any "anarcho-" label counts)
Owned the firearm used
Went to furry events
cross-dresser
All questions in this market are resolved before 2026
Purposefully waited until Kirk mentioned shootings to fire
JD Vance alludes to his sexual deviancy
Elon Musk alludes to his sexual deviancy
Avowed left-leaning ideology (broadly construed: Democrat, socialist, communist, Green all count)
White cis-hetero Christian man
Any published writings (including blogs, tweets, etc.) supporting Luigi Mangione
Participates in this market
student at any university
Owns a fursuit
Brony or a fan of My Little Pony
Radical Centrist/JREG fan
Avowed right-leaning ideology (broadly construed: Republican, MAGA, right-libertarian, neo-Nazi, monarchist all count)
Wrote a manifesto before the shooting
Has Reddit account with 10k+ karma
Has a Bluesky account
There is a broad consensus that they have a KiwiFarms account
Posted about Charlie Kirk on social media before the shooting
psychedelics user
at least half of the questions in this market resolve YES (excluding questions about other question's resolutions) (excluding N/A)
Contact with CVLT/764/similar
made online adult/sexual content (e.g. onlyfans)
Was on an FBI watchlist
Had diagnosed mental illness
Former or current Charlie Kirk fan
Will be convicted by EOY 2026
J.K. Rowling alludes to his sexual deviancy
Involved in Israel/Palestine activism on either side
Fan fiction author
Femboy
History of illegal drug addiction
Escapes carceral detention
Was awarded a medal or award for marksmanship at any given point in time
History of domestic violence
Is right wing
has significant student loan debt
Shooting was motivated by gun legislation in some way
Transgender
groyper
Left-handed
Had felony conviction previously
Current or former Calvary Chapel or other Pentecostal church member
Current or previous student at University of Utah
is Mossad
Was a student at UVU
Dead, as of October 10th (or earlier)
Motivated by personal grudge rather than politics (e.g. sexual jealousy or other personal-life motive)
Incel
Any documented connection to O9A (Order of Nine Angles, esoteric right-accelerationist group)
Accelerationist (any subtype)
Wanted to create pretext for violence against leftists
A time traveller trying to avoid the worst
Has Twitter account with more than 2000 tweets
White Nationalist
The shooter will read "If Anyone Builds It..." (either while on the run or in custody) and likes the book
Current or former Roman Catholic
Involved in Effective Altruism
Was motivated by the Trump Epstein cover up?
Has a parent who drives for Uber
decel
Will get killed by law enforcement
Was a fan of Charlie Kirk
The shooter is a professional hitman, who does not care much about politics or ideology
Age >= 40 (on the day of the shooting)
Used a handgun
Ex-military
Not arrested as of midnight EST 9/14
Has a PhD
Has a Manifold account
Muslim
Veteran
History of alcoholism
Will share their first name with a famous video game character
Teenager
50 years or older
Has a master's degree
Will be apprehended at McDonald's, like Luigi Mangione.
Female Ninja Type Sniper
Is or was a computer science student
Ex law enforcement
Married
Parent
Manifold assists with his capture
Had a wikipedia page prior to 9/10/2025
Not arrested before EOY 2025
Fled the country after the shooting
Personally knew Charlie Kirk
Enjoys diplomatic immunity
Indian national
The shooter was an AI or an android controlled by AI
Iranian national
Registered Democrat
Will be convicted by EOY 2025
Will be apprehended at a McDonald's
is in fact three raccoons dressed in a trenchcoat
Meant to hit an apple on Charlie Kirk's head
used telekinesis to correct the bullet's trajectory
The shooter is a member of domain Bacteria
100
100
100
100
100
100
100
100
100
99
99
99
99
99
99
99
99
99
99
99
99
99
99
98
98
98
97
97
97
97
95
95
95
92
91
89
88
86
86
86
85
81
81
81
79
79
78
72
71
70
69
67
66
65
65
63
62
60
58
56
55
54
54
50
50
49
47
46
46
42
42
40
40
39
38
37
37
35
35
34
32
32
32
32
30
30
28
27
26
26
25
25
23
23
21
21
19
17
17
16
16
15
15
14
13
13
12
11
11
11
11
10
10
10
9
9
8
8
8
8
7
7
7
7
5
5
5
5
4
4
4
4
4
3
3
3
3
3
3
2
2
2
2
2
2
2
2
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
0
0
0
0
0
Option | Votes
YES | 1688
NO | 650
Option | Probability
Ron Weasley is a redhead | 99
Harry Potter is white | 98
At least one named character from the book has their race changed | 97
Snape is black | 97
An actor who acted in the movies returns for the show | 95
Harry, Ron, and Hermione’s actors will all be British | 95
Fred and George are twins irl | 95
Hagrid is played by an actor who is under 6'4" | 94
Harry Potter doesn't cast a single spoken, working spell in the first episode | 94
Dumbledore casts a spell (spoken or wordlessly) | 94
McGonagall performs an animagus transformation (human2cat or cat2human) | 94
A character cut from the movies appears (ie Theodore Nott) | 91
Ron Weasley doesn't cast a single, spoken, working spell in the first two episodes | 90
A History of Magic lesson is shown on screen | 88
Premieres in 2027 | 84
Malfoy has white blonde hair | 83
Quirrell is wearing a head covering when Harry first meets him | 83
There’s a scene set before Harry is born | 83
JK Rowling is credited as both an executive producer and original writer | 78
The potion riddle guarding the Stone will be featured | 78
Peeves is a recurring character | 74
It's woke | 74
Cornelius Fudge is shown on screen | 73
Arthur Weasley is shown on screen | 68
A scene depicts Voldemort trying to kill baby Harry | 66
Harry first sees Hogwarts castle in episode 2 | 65
Hagrid says "You’re a wizard, Harry" | 65
mandrake root on screen | 65
Hermione’s parent(s) shown on screen | 64
80% or higher on rotten tomatoes | 63
Homosexual interaction between some characters will be presented | 63
Hermione is white | 58
A house elf is shown on screen | 55
Premieres on a strongly symbolic date (like July 31, 21.12) | 55
Arabella Figg is mentioned by first or last name | 53
It will be torrentfreak.com's "Most Pirated" TV show for its year of release or the following year | 52
Luna Lovegood, Cho Chang, or Cedric Diggory are mentioned by first or last name, or are in the credits | 50
Hagrid ties Vernon’s gun into a knot | 50
Harry is shown holding more than three different wands at Ollivander’s | 45
An actor who appeared in any of the Jackass films receives a credit on IMDB related to the show | 43
At least one named character from the book has their gender swapped | 42
The Quibbler is shown or mentioned | 41
Harry visits Diagon Alley in episode 1 | 36
Goblins are still represented as anti-semitic caricatures | 34
Harry visits Platform 9 3/4 in episode 1 | 34
It ends on a cliffhanger | 33
The Weasley's Ford Anglia is seen flying | 32
An Astronomy lesson is shown on screen | 32
"Voldemort" has a silent "t" | 31
The Flying Ford Anglia is seen. | 30
Harry first sees Hogwarts castle in episode 3 | 29
Hagrid presents a cake with writing on it to Harry and the writing has no misspellings | 29
Harry only reaches Hogwarts in the last 10 minutes of the first episode | 27
At least one of the actors is transgender | 25
VOLDEMORT HAS A NOSE | 25
Dobby makes an appearance | 25
The intro theme song will have at least one obvious English word | 23
Premieres in 2026 | 22
Quirrell shakes Harry’s hand during their first meeting | 21
90% or higher on rotten tomatoes | 20
Any Harry Potter fanfic is referenced (either explicitly as judged by market creator, or confirmed by someone who works on the show) | 20
Harry Potter doesn't cast a single, spoken, working spell in the first three episodes | 15
We see a wizarding school other than Hogwarts | 14
Features an explicitly transgender character | 13
JK Rowling makes a cameo appearance | 12
Hagrid is played by an actor with a cognitive disability | 11
Hermione is black | 10
Zendaya is cast in the show | 10
Hermione is Indian | 9
Smartphone shown within Hogwarts | 8
Voldemort is a woman | 8
Awkwafina is cast in the show | 8
There will be seven CGI dwarves | 6
set in 2025 | 6
set in the 2020s | 6
Rita Skeeter will have an explicit trans identity | 5
Keir Starmer is in it | 5
Gandalf is black | 4
Yudkowsky makes an appearance | 3
We get AGI before it premieres | 3
Hagrid is black | 3
Fred and George have the same actor | 2
HPMOR is referenced | 2
One or more of Hermione, Ron, and Harry have their genders swapped. | 1
Harry, Ron, and Hermione will all be transgender | 1
Option | Probability (options listed first; probabilities follow below in the same order)
Stretch-and-fold after mixing, 3x every 30 min
Place small ice cubes between parchment and pot instead of water
Add 1tsp of diastatic malt powder per 3cps of flour
Use tap water instead of fancy RO water
put butter into it
Toast the bread
Donate the bread to a food pantry, homeless person, or someone else in need
Add lots of butter (0.2 ml per gram)
Half yeast
Bake it with your best friend.
Use whole wheat to improve the nutrition of the bread
Bake for 5 more minutes
Sprinkle 3 grams of flaky sea salt on top of each loaf before the second bake
Replace all water spritz steps with a basting of extra virgin olive oil.
Diastatic malt (~1% baker's percentage) = happier yeast
Serve the bread hot
Do a second rise
Create indentation, fill with melted cheese and butter
don't eat anything for at least 2400 hours before eating the bread
Cut into the dough right before baking looks destructive to improve the appearance
Sell your bread at an auction and donate the money to those in immigration detention prisons.
3 iterations of stretch-and-fold, at any time during the 14h waiting period. Minimum wait time between iterations 1 hour
Use sourdough instead of yeast
Do it with a good spirit in your heart, or ask someone with a good spirit to do it for you. But don’t watch while they do it.
Make banana bread
Sprinkle sesame seeds evenly over the top
Short advice: Start baking at 260°C for strong rise, then reduce to 230°C and uncover halfway to achieve even browning and a crisp crust. 🍞
Add garlic
Give ChatGPT your current recipe as well your take on what optimal bread tastes like, then take that advice for your next bake
Try baking a little more "bien cuit". If the image is indicative, your loaves may be quite "blonde".
Do all the changes in the top 5 open options by probability, excluding this option
put ketchup and cheese on it
Replace some of the water with an egg (eg. remove 25g of water for a 50g egg)
Add slurs to it
Ask ChatGPT (GPT-5, with thinking enabled) for suggestions on improving the bread, with this market description, then do all of them.
Just freeze the ready bread, then slowly bake it until it’s hot inside. It will give you a crustier crumb, contain less moisture, and taste better.
Brush on an egg wash
Don't eat anything for at least 48 hours before eating the bread
Make the bread taste good
Bake for 15 more minutes
Invest in a "Bakers Steel" for better heat retention and oven spring. It would mean graduating from a dutch oven though.
If your city uses artesian water, replace plastic bottled water with tap water. It will add natural, healthy alkalinity to your bread.
Don't eat anything for at least 24 hours before eating the bread
Bake for an amount of minutes equal to the percent this market answer is at when it comes time to begin baking. (Maintain the ±3 minute tolerances and the 2:1 ratio of time before:after the water spritz.)
Watch the video
Ditch current process, do everything the same as the video
Make naan bread, an easy-to-make bread
Bread flour, 3x yeast, cut rise to ~3h
Eat the bread while punching @realDonaldTrump in the face
Eat the bread while watching your mana balance steadily tick to (M)0
Throw the bread at a telescope
Cut bread into loaves before serving
Cut bread into ≤0.4inch slices, toast before serving
Invite your taste-testers to make the bread with you
Tariff the bread-making process with a 10% reduction of all ingredients where actual physical money is required to purchase them, until it “shrinkflates,” but try to keep the same volume. Do not reduce any free ingredients.
Standardize a separate list of process features to keep track of independently of all other tests and use the cross entropy method to tune them to maximize your bread preference
Add 2 tbsp vanilla cake mix
Use soda instead of water (clear, orange, yellow, etc. soda is ok. Don’t use a purple/brown soda as that would make it not look good)
Taste the bread
Substitute 75 g of your flour with spelt flour
Don't automatically "Heat water to 30±1 °C". Instead, aim for a desired dough temperature (DDT) of 25-26°C. 30°C water is too hot for summer, and potentially too cool for winter.
Add melatonin to the bread and eat before you sleep (do safely)
While the bread is baking, tip every user who voted "Yes" on this option 25 Mana
Use a food-grade, human-approved vitamin D supplement in the correct dosage for testers with vitamin D deficiency
Use a convection oven/setting
Add 1/2 scoop whey protein powder
Give Gemini your current recipe as well your take on what optimal bread tastes like, then take that advice for your next bake
Add 6.25±1.25 g lemon juice when mixing in water to yeast and salt jug
Replace part of the flour in the dough with freshly crushed hemp seeds. It will make the bread a little bit sweeter, especially appealing for Canadians.
Only use tap water from specifically New York City
Make the bread great again
Decompose it into infinite spheres, then a few parts per sphere, rotate the spheres by arccos(1/3), unite them and you will find 2 chilis (Banach-Tarski)
Bake the Manifold Crane into the Bread
Make the bread edible then throw it in
Drink vodka while eating the bread
Do FOLDED
Quit Manifold into the bread.
Kill the bread into Manifold.
Assume the chili is not in the interval [0,1], square it for more chili, if it is in (0,1), take the square root, else (equals 0 or 1) add 1 to it.
Assume the chili is in the interval (0,1), square it for less chili, if it is in (1,infinity) take the square root, if it is in (-infinity,0) take the negative of the square of the chili, else (equals 0 or 1) subtract 1 from it.
Add a tablespoon of sugar
Bake one more minute
replace 10% of flour with farina bona
Grind/powderize all salt used into a fine powder (with pestle & mortar or similar device)
Instead of RO water, use lightly rusty water to improve the nutritional value of the bread with soluble iron.
Increase water by 50 g
Ask yourself if bread is healthier than fruits? No need to improve my bread
Resolve at least one thing here yes or no while baking bread
Wear a suit while baking the cake.
Encourage people to participate in the market in good faith while making the bread
Bake for 5 fewer minutes
Replace salt with sugar
Bake the bread into a fun shape, like a fish, or an octagon
A system view is more appropriate. This is a dynamic, multi-variate, biological and chemical system. For e.g. conditioning salt % AND yeast % AND water temperature based on ingredient and ambient temps.
Replace 10% of flour with milled wheat bran
Put a baking rack in the Dutch oven before putting the loaf in, raising the loaf off the floor and lofting it over a layer of air.
Use flour made from an unconventional grain e.g. barley, millet, oats, rye, sorghum, maize etc.
Cover bread with damp paper towel instead of initial water spritz. Rehydrate paper towel during 2nd spritz. Remove paper towel before placing on cooling rack.
Strawberry jelly filling
Replace 600+/-5g water with 600+/-50g water (eyeball rather than carefully measure)
Have someone sell the bread to you at an expensive price
Add 1/2 cup yogurt to the bread and name the bread “gurt” while addressing it with “yo, gurt”.
Get your friends to help you make a batch ten times the size, but add a Pepper X (2.7M Scoville heat units) to the mixture
Mail the bread to 1600 Pennsylvania Ave. Washington D.C.
Ship a piece of the bread to a random person.
Make all of the ingredients from scratch.
Frost it and put sprinkles on it to make it a birthday cake.
Buy bread from a michelin star restaurant.
Improve the bread
Quadruple salt
Bake your social security number into it.
Bake one fewer minute
Want to improve the value of your bread? Simply bake a piece of gold into it
Bake the cake while wearing a onesie.
Pray to your preferred agricultural/food deity before baking and before eating
Only buy ingredients from 7/11.
Cook the bread with a rod/puck of aluminum foil (or similar) in the core in an attempt to conduct heat through the center of the bread, cooking it evenly like a doughnut.
Test/filter the water for heavy metals
Eat the bread in front of the White House.
Implement all options that resolved NO
Make the bread inedible then throw it out.
Throw the bread at @realDonaldTrump
Force Feed it to @realDonaldTrump
Add as many Jack Daniel's whiskey barrel smoking chips as feasible to the Dutch oven before baking, physically separating them from the bread as necessary while baking.
Add caffeine to the bread
Cut the bread into the number of traders in the market slices.
make the bread bounce
Implementing every element listed below.
Put a non-lethal dose of any rat poison.
Just make donuts instead
Bake it in an easy bake kids oven
Use a plastic baking sheet.
Eat the bread while betting yes on Cuomo on Manifold
Double oven temperature
Bake the bread very thin and add food coloring to make it have the US flag. Don’t allow it to touch the ground, illuminate at night, fold 13 times properly, and pledge allegiance before eating.
Don’t use usual water (room temperature) for the dough - that water’s only for toilets. Use electrolyte drinks instead with ice cubes; they make the dough taste better and add extra nutrition.
Light it on fire with birthday candles.
Bake it with a microwave
Halve oven temperature
Eat the bread while betting yes on Mamdani on Manifold
Step on it
it needs more salt
Bring it to Yemen and put a bomb in it
Bake America Great Again
Give the bread a name in a ritual ceremony and baptise it, with pre-blessed holy water if a priest isn't available
Sacrifice a lamb
Add MAGA and a splash of Trump juice
Use lemonade instead of water.
Bake in a cat and a dog
Explode it:
5 parts cyanide/ 1 part water/ 1 part sand
say 6 7 67 times before making the bread
Take a fat dump in the dough
Sit in dough 24 hrs
Replace flour with flowers
Let dough sit 24 hrs
Mix half sodium/potassium chloride
Add 2g? of baking soda
Bake in rectangular tin
Add 50g vital wheat gluten
double yeast
halve salt
Double salt
Add 2tsp olive oil
Refrigerate dough instead of room temp wait
Start at 500F, drop to 450F and uncover half way through
Do not mix salt and yeast in water together
Autolyse 20 minutes
Let dough rise on counter only until double volume or 2h max, any time longer in fridge
Think positive thoughts before tasting
Put fork in microwave
Don't eat anything for at least 12 hours before eating the bread
Add 2tbsp vanilla extract
Play classical music while baking
Add a pinch of sugar
Bake on upside-down sheet pan, covered with Dutch oven
Eat the bread with friends
Bake vegimite into it.
Bake vodka into it
Bake it in the country you were born in.
Let dough sit 48 hrs
Resolve this option YES while eating the bread
Ditch all the steps. Just buy the bread from the supermarket
Eat the bread over the course of a week.
Use 50% whole grain flour
Bake the bread with love
Use King Arthur Bread Flour instead of All-Purpose
Add sawdust to increase the volume of the bread (but only like 10% sawdust by volume max. maybe 20% if it's good sawdust)
More steam! Either spritz with more water (preferably hot) or actually pour some boiling water in just before closing the lid.
Resolve this option NO while eating the bread
Incorporate a whole grain flour (buckwheat for example)
Add 50g sugar
Add 50g honey
Swap 200ml water for milk
Make a poolish 12 h ahead: 100 g flour + 100 g water + 0.8 g yeast (0.1 %). After it ferments, use this poolish in place of 100 g flour and 100 g water in the final dough.
Add an amount of MSG equivalent to half the current salt content
Eat the bread while sending all your mana to @realDonaldTrump
Add banana
Add poppy seeds
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
89
85
80
78
78
77
74
73
72
69
69
69
67
67
66
66
62
62
59
58
58
57
57
56
55
55
54
51
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
47
45
43
43
42
41
41
41
40
39
37
35
34
34
34
34
34
34
34
34
33
33
31
31
31
30
29
28
26
26
26
26
26
26
26
25
24
23
22
21
20
20
20
19
18
18
18
17
15
15
14
14
14
13
13
12
11
11
10
10
10
10
10
10
10
9
9
8
8
8
7
6
6
6
6
6
5
5
5
5
5
4
3
3
3
2
2
2
2
2
2
2
1
1
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
Option | Probability
Trump publicly suggests, while in office, that he shouldn't have to leave | 91
Trump leaves office when his term ends | 80
Trump attempts something arguably coup-like (e.g. J6), but it fails | 53
Trump supporters kill or hospitalise someone trying to prevent/protest him leaving | 28
Trump leaves office early (e.g. via impeachment or he dies) | 19
Trump remains in office after his term is up | 4
Trump isn't elected (or fails to take office) | 0
Option | Probability
Other | 23
*Markdown* formatting in comments and elsewhere | 19
ANTE: Stake on this to subsidize the market. | 14
The ability to look for related markets in the API. | 12
Linkable Comments | 10
Ability to optionally privately enter your expectation of outcome when making a purchase, for the purpose of having a personal calibration curve. Justification: I find that making calibration curves for myself is fun and helps improve my forecasting, and that my calibration curve gets less accurate after a few months if I stop practicing. But the practice takes effort, which could be piggy-backed on the forecasting activity of participating in prediction markets for less effort than doing it separately. | 7
Ability to include a short message when resolving a market | 3
Reminders. Allow users to set a custom reminder to return to a market. | 3
Spoiler tags to hide markets about in-progress fiction | 2
Merge duplicate Answers for multiple Answer Questions https://manifold.markets/Honourary/will-manifold-markets-add-a-merge-f | 2
Ability to tip users M$ for helpful comments. | 1
Ability to short multiple choice answers | 1
Retroactively close a market, undoing bets after the new close time. (Reward predictions before the events happened, not betting really quickly after hearing the news.) | 1
Resolve market to probabilities other than market probabilities. Useful for betting on outcomes of other markets. | 0
Short answers on free response questions | 0
Graph showing the pool size of a market over time | 0
A meta-market. Simple exchange that lets you place a bid/sell order for shares of whatever market. | 0
Set a time to stop allowing creation of new free response answers separately from the market close time | 0
We need numerical range questions. Many topics are way more informative when expressed that way, over a Yes/No. | 0
Ability to attach (private) personal notes to other users (e.g. to keep track of who you've observed be a good market resolver) | 0
A "poll" category of market. You can buy as many votes as you want but the payout is unrelated to which answer wins. (Could be zero, could reverse the system so bettors get 4% and the creator gets the balance, or something else.) | 0
Reducing the number of personal questions (eg. "Will I do ...") | 0
Market Indices, Allowing Multiple Markets to Be Combined Into One and Automatically Weighted with Drag and Drop Type Feature | 0
Show currently placed on market overviews, like https://manifold.markets/markets (it is easy to forget where loans were used) | 0
Allow to exclude communities/tags on feed/market overview. It would be nice to not have ability to skip all this gambling ("yes iff pool divisible by 2" etc) | 0
Allow to exclude communities/tags on feed/market overview. It would be nice to skip all this gambling ("yes iff pool divisible by 2" etc) | 0
Allow users to edit their comments | 0
In "Your Trades", show each market's M$ pool | 0
Reduced fees for long horizon markets to increase trading volume. | 0
Upload photos with comments. Useful for proof of results and other fun things. | 0
I would like there to be a feature that integrates something like an RSS feed. If key words or phrases about a specific event occur, trading is suspended until the market is either resolved or the RSS notification is tagged as a false positive. As often happens, the gains of many bettors are wiped out by slow resolutions of markets. | 0
Zoom on the chart | 0
Statements | 0
Kelly | 0
The ability to subsidize markets, putting up money to enhance liquidity and encourage participation, without taking a particular side of the bet. (The thing that the ANTE option on this market is trying to do, just in a more formal and less ad-hoc way, which would also work for YES/NO markets.) | 0
Aggregate your own trades in timeline view (avoid long list). | 0
Earn interest on M$ tied up in long time horizon bets. | 0
Trade fractional mana | 0
When creating a free answer market, add a starting set of answers without betting. | 0
Combinatorial Prediction Markets | 0
Ability to delete comments. | 0
Load pages faster (Android mobile). | 0
More explanation for people who don't know anything about prediction markets. | 0
Mechanism for making decisions (built-in bundle of conditional markets?) with better incentives for bettors (not beauty contest) | 0
Filter OUT market categories on the homepage. | 0
Private messages to users | 0
Ability to select multiple categories of markets on the home page. | 0
Trusted users can add category tags to markets that aren't their own | 0
Users create their own currency | 0
Zen mode: Hide the probabilities from the user until they decide to buy. | 0
Market Scanner (like stock screeners) | 0
On-site mouseover previews for market & user links (advanced setting, or with held key) | 0
Option | Probability
#144 – Athena Aktipis on why cancer is actually one of the fundamental phenomena in our universe | 99
#145 – Christopher Brown on why slavery abolition wasn't inevitable | 99
#146 – Robert Long on why large language models like GPT (probably) aren't conscious | 53
#147 – Spencer Greenberg on stopping valueless papers from getting into top journals | 41
#148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don't | 34
#151 – Ajeya Cotra on accidentally teaching AI models to deceive us | 31
#149 – Tim LeBon on how altruistic perfectionism is self-defeating | 30
#150 – Tom Davidson on how quickly AI could transform the world | 24
#152 – Joe Carlsmith on navigating serious philosophical confusion | 18
#153 – Elie Hassenfeld on two big picture critiques of GiveWell's approach, and six lessons from their recent work | 18
#154 – Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters | 18
#155 – Lennart Heim on the compute governance era and what has to come after | 17
#156 – Markus Anderljung on how to regulate cutting-edge AI models | 17
#157 – Ezra Klein on existential risk from AI and what DC could do about it | 17
#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk | 17
#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less | 17
#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment | 17
#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite | 17
#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI | 16
#163 – Toby Ord on the perils of maximising the good that you do | 16
#166 – Tantum Collins on what he's learned as an AI policy insider at the White House, DeepMind and elsewhere | 16
#167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption | 16
#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion | 16
#164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives | 15
#165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe | 15
#169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels | 14
#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down | 14
#171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures | 14
#172 – Bryan Caplan on why you should stop reading the news | 14
#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe | 14
#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers | 14
#175 – Lucia Coulter on preventing lead poisoning for $1.66 per child | 14
#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models | 14
Option | Probability
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 20
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 10
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 8
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 7
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 6
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 6
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 6
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 6
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 6
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 4
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 4
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 4
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 3
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 2
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 2
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 1
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 1
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 1
Option | Probability
Try to meet a county board member | 57
Post flyers with QR code for a signup sheet to show community interest | 50
Scout out locations besides Quincy Park | 50