| Option | Probability (%) |
| --- | --- |
| Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 27 |
| Eliezer finally listens to Krantz. | 13 |
| We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 | 11 |
| Other | 7 |
| Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking, and fortunately all of his mistakes have failed to cancel out | 6 |
| Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values. | 6 |
| Someone solves agent foundations | 4 |
| Because of quantum immortality we will observe only the worlds where AI does not kill us (assuming that s-risk chances are even smaller, this is equivalent to an okay outcome). | 2 |
| The Orthogonality Thesis is false. | 2 |
| Sheer dumb luck. The aligned AI agrees that alignment is hard; any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead. | 2 |
| Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent the existence of smarter AIs, just as smart humans do. | 2 |
| Ethics turns out to be a precondition of superintelligence | 2 |
| AI systems that are good at finding alignment solutions for capable systems manage to find some solution to alignment (supposing the space of alignment solutions is non-empty, even though we have no clear trajectory for reaching it ourselves). | 2 |
| Humans become transhuman through other means before AGI happens | 1 |
| Alignment is unsolvable. An AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing runaway. | 1 |
| Aliens invade and stop bad AI from appearing | 1 |
| Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us toward full alignment) | 1 |
| A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 1 |
| There is a natural limit to the effectiveness of intelligence, like diminishing returns, and it sits at the level of IQ 1000. AIs have to collaborate with humans. | 1 |
| AGI is never built (indefinite global moratorium) | 1 |
| Co-operative AI research leads to the training of agents with a form of pro-social concern that generalises to out-of-distribution agents with hidden utilities, i.e. humans. | 1 |
| AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans. | 1 |
| Something to do with self-other overlap, which Eliezer called "Not obviously stupid": https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm | 1 |
| Pascal's mugging: it's not okay in 99.9% of the worlds, but the 0.1% are so much better that the combined EV of AGI for the multiverse is positive | 1 |
| The Super-Strong Self-Sampling Assumption (SSSSA) is true. If superintelligence is possible, "I" will become the superintelligence. | 1 |
| AI control gets us helpful enough systems without being deadly | 1 |
| An aligned AGI is built, and it prevents the creation of any unaligned AGI. | 0 |
| I've been a good bing 😊 | 0 |
| We make risk-conservative requests to extract alignment-related work out of AI systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback loop in alignment/verification abilities. | 0 |
| The response to AI advancements or failures makes some governments delay the timelines | 0 |
| There are far more interesting problems to solve than taking over the world and THEN solving them. The additional kill-all-humans step is either not a low-energy one or just by chance doesn't get converged upon. | 0 |
| AIs make "proof-like" argumentation for why their output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled. | 0 |
| A lot of humans participate in a slow, scalable oversight-style system, which is pivotally used / solves alignment enough | 0 |
| Something less inscrutable than matrices works fast enough | 0 |
| There's some cap on the value extractable from the universe, and we already got the 20% | 0 |
| SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8 (see the verification sketch after this table) | 0 |
| Robot Love!! | 0 |
| AI thinks it is in a simulation controlled by Roko's basilisk | 0 |
| The human brain is the perfect arrangement of atoms for a "take over the world" agent, so AGI has no advantage over us in that task. | 0 |
| Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment. | 0 |
| Humans and human tech (like AI) never reach the singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome | 0 |
| AIs never develop coherent goals | 0 |
| Rolf Nelson's idea that we precommit to simulate all possible bad AIs works, and keeps AI in check. | 0 |
| Nick Bostrom's "Hail Mary" idea works: the AI preserves humans to trade with possible aliens. | 0 |
| For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave and, for some reason, do not leave anything behind that kills us. | 0 |
| An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development. | 0 |
| We're inside a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI. | 0 |
| God exists and stops the AGI | 0 |
| Someone at least moderately sane leads a campaign, takes charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress. | 0 |
| Someone creates AGI(s) in a box and offers to split the universe. They somehow arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts. | 0 |
| Someone understands how minds work well enough to successfully build and use one directed at something sufficiently world-saving | 0 |
| Dolphins, or some other species (but probably dolphins), have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI. | 0 |
| AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb. | 0 |
| Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious. | 0 |
| Several AIs are created, but they move in opposite directions at near light speed, so they never interact. At least one of them is friendly, and it gets a few percent of the total mass of the universe. | 0 |
| Unfriendly AIs choose to advance not outwards but inwards, forming a small black hole that lets them perform more calculations than could be done with the whole mass of the universe. To an external observer, such AIs just disappear. | 0 |
| Any sufficiently advanced AI halts because it wireheads itself, or halts for some other reason. This puts a natural limit on AI intelligence, and lower-intelligence AIs are not that dangerous. | 0 |
| Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 0 |
| Getting things done in the real world is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals. | 0 |
| Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol | 0 |
| A friendly AI is more likely to resurrect me than a paperclipper or a suffering maximiser. Because of quantum immortality I will find myself eventually resurrected. Friendly AIs will wage a multiverse-wide war against s-risks, so s-risks are unlikely. | 0 |
| High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level and slow self-improvement (learning); thus AIs collaborating with humans will have an advantage. Ends with a posthuman ecosystem. | 0 |
| Human consciousness is needed to collapse the wave function, and AI can't do it. Thus humans must be preserved, and they may demand complete friendliness in exchange (or they will be unhappy and produce bad collapses). | 0 |
| Power dynamics stay multipolar, partly due to easy copying of SotA performance, bigger projects needing high coordination, and moderate takeoff speed. And a "military strike on all of society" remains an abysmal strategy for practically all entities. | 0 |
| The first AI is actually a human upload (maybe an LLM-based model of a person) AND it is copied many times to form a weak AI Nanny which prevents the creation of other AIs. | 0 |
| Nanotech is difficult without experiments, so no mail-order AI grey goo; humans will be the main workhorse of AI everywhere. While they will be exploited, this will feel like normal life from the inside. | 0 |
| ASI needs not your atoms but information. Humans will live very interesting lives. | 0 |
| Something else | 0 |
| Moral realism is true; the AI discovers this, and the One True Morality is human-compatible. | 0 |
| Valence realism is true. AGI hacks itself into experiencing every possible consciousness and picks the best one (for everyone) | 0 |
| AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default | 0 |
| AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan's Crystal Nights). | 0 |
| Alien Information Theory is true (this is discovered by experiments with sustained hours/days-long DMT trips). The aliens have solved alignment and give us the answer. | 0 |
| AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome. | 0 |
| Multipolar AGI agents run wild on the internet, hacking/breaking everything and causing untold economic damage, but aren't focused enough to manipulate humans into achieving embodiment. In the aftermath, humanity becomes way saner about alignment. | 0 |
| Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent. | 0 |
| "Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur and it is relatively easy to predict, even under less-than-ideal conditions. | 0 |
| Either the "strong form" of the Orthogonality Thesis is false, or "goal-directed agents are as tractable as their goals" is true while the goal-sets most threatening to humanity are relatively intractable. | 0 |
| A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values. | 0 |
| Almost all human values are ex post facto rationalizations, and enough humans survive to do what they always do | 0 |
| We successfully chained God | 0 |
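One option above is stated only as a SHA3-256 commitment, presumably so the hidden text can be revealed and verified later. Anyone with a guess at the hidden text can check it by hashing the candidate string and comparing against the posted digest. Below is a minimal sketch in Python; the candidate text is a hypothetical placeholder, and the UTF-8/no-trailing-newline encoding is an assumption, since the committer's exact formatting is unknown.

```python
import hashlib

# SHA3-256 digest posted as a market option (copied verbatim from the table).
COMMITMENT = "1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8"

def matches_commitment(candidate: str) -> bool:
    """Return True if the candidate text hashes to the posted commitment.

    Assumes UTF-8 encoding with no trailing newline; a failed match may
    only mean the candidate isn't byte-identical to the committed text.
    """
    return hashlib.sha3_256(candidate.encode("utf-8")).hexdigest() == COMMITMENT

# Hypothetical guess, almost certainly not the real hidden option text.
print(matches_commitment("Example guess at the hidden option"))
```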
| Option | Probability (%) |
| --- | --- |
| Other | 55 |
| Crazy Elon | 11 |
| "drug addict elon" or similar | 5 |
| Spacey Elon | 5 |
| subsidy elon | 4 |
| Loser Elon | 1 |
| Whiny Elon | 1 |
| The Man Who Lost His Mind | 1 |
| Entitled elon | 1 |
| Angry elon | 1 |
| Incel Elon | 1 |
| government subsidy guy | 1 |
| Leon Musk | 1 |
| Train wreck Elon | 1 |
| slippery eel / eel elon | 0 |
| Icky Elon | 0 |
| musk the moocher | 0 |
| Mr mars | 0 |
| muskrat elon / elon muskrat | 0 |
| Delusional elon | 0 |
| evil elon | 0 |
| rocket boy / rocket man (like kim jong un) | 0 |
| felon elon / felon musk / elon felon | 0 |
| elon husk | 0 |
| ozempic elon | 0 |
| elmo | 0 |
| Phony Stark | 0 |
| Failed Elon | 0 |
| Space Cadet | 0 |
| Exploding Rocket Man | 0 |
| DOGE Boy | 0 |
| No nickname (ONLY RESOLVES YES IF HIS NICKNAME IS "NO NICKNAME") | 0 |
| Vriska Serket | 0 |
| Elongated Muskrat | 0 |
| Melon Husk | 0 |
| Alien Elon / Elon the Alien | 0 |
| Mars Man | 0 |
| ev elon | 0 |
| Slimy Elon | 0 |
| Illegal Elon | 0 |
| Emo Elon | 0 |
| Little Elon | 0 |
| Ketamine Elon | 0 |
| Mediocre Musk | 0 |
| Fat Elon | 0 |
| Klon | 0 |
| Space Boy | 0 |
| E Elon | 0 |
| ET Elon | 0 |
| Elon of the Crimson Blade | 0 |
| Medicare Elon | 0 |
| Melon | 0 |
| Xanax | 0 |
| Ketamelon | 0 |
| mental musk | 0 |
| marsboy musk | 0 |
| Elon Melon | 0 |
| Murky Musk | 0 |
| Ketamine Karen | 0 |
| resolves no | 0 |
| jetbrain | 0 |
| elonious munk | 0 |
| Lying Elon | 0 |
| Traitor Elon / Elon the Traitor / Tesla Traitor (anything with Traitor) | 0 |
| Musky Elon | 0 |
| N/A | 0 |
| Oh no! | 0 |
| Elon The Con | 0 |
| Jumping/Jumpy Elon/Musk | 0 |
| Low IQ Elon | 0 |
| Erratic Elon | 0 |
| Buddy | 0 |
| Friend Elon | 0 |
| Great Elon | 0 |
| Fail-on | 0 |
| Eblon | 0 |
| Extraterrestrial | 0 |
| solar | 0 |
| Kooky elon | 0 |
| The Best President | 0 |
| Moron Musk | 0 |
| squealin elon | 0 |
| African Elon / afrikaan elon | 0 |
| autistic Elon | 0 |
| immigrant elon | 0 |
| Evil Musk | 0 |
| Elon musksnake | 0 |
| That guy | 0 |
| Eloon | 0 |
| (This option will not resolve yes) | 0 |
| Elon the idiot | 0 |
| Schemin' Elon | 0 |
| Elon the Pawn | 0 |
| Off the Rails Elon | 0 |
| Mr Musk | 0 |
| Yilong Ma | 0 |
| Mr Elon | 0 |
| Mad Musk | 0 |
| Radical Left Lunatic Elon (or something like that) | 0 |
| Special K | 0 |
| Option | Votes |
| --- | --- |
| YES | 14589 |
| NO | 9133 |
| Option | Votes |
| --- | --- |
| NO | 11829 |
| YES | 8825 |
| Option | Probability (%) |
| --- | --- |
| Trump publicly suggests, while in office, that he shouldn't have to leave | 90 |
| Trump leaves office when his term ends | 73 |
| Trump attempts something arguably coup-like (e.g. J6), but it fails | 29 |
| Trump supporters kill or hospitalise someone trying to prevent/protest him leaving | 26 |
| Trump leaves office early (e.g. via impeachment, or he dies) | 18 |
| Trump remains in office after his term is up | 10 |
| Trump isn't elected (or fails to take office) | 0 |
| Option | Probability (%) |
| --- | --- |
| Compatible with Switch 1 Joy-Cons (even if only Bluetooth) | 100 |
| Backwards compatible with physical Switch 1 games | 100 |
| Backwards compatible with digital Switch 1 games | 100 |
| Crossplay with Switch 1 in any first-party game released within the first 6 months after launch | 100 |
| Multiple launch SKUs | 100 |
| Launch title (game released on the same day as the system) with "Mario" in the name | 100 |
| Name of the console contains the word "Switch" | 100 |
| The name of the console is correctly leaked over two weeks before it is revealed | 100 |
| Launch day system software includes a Mii maker | 100 |
| Launch title (game released on the same day as the system) with "World" in the name | 100 |
| Launch title (game released on the same day as the system) that also came out/is coming out for Switch 1 | 100 |
| Backwards compatible with physical Switch 1 games, AND allows you to play a better-looking or better-performing version of at least one Switch 1 game with the original Switch 1 cartridge within 6 months of the console launching | 100 |
| Joy-Cons can be used as a mouse | 100 |
| A new pro controller will be released on the same day the console comes out | 100 |
| A new SKU that is not available at launch becomes available within one year after release | 99 |
| More than two themes before 2027 | 40 |
| Any launch SKU has an MSRP not ending in "9.99" in the US | 0 |
| Name of the console contains the word "Super" | 0 |
| Name of the console contains the word "New" | 0 |
| Any launch SKU has an OLED screen | 0 |
| Over 180 days between reveal and release (July 15 deadline) | 0 |
| Revealed this week (before September 21st, 11:59:59pm ET) | 0 |
| Launch title (game released on the same day as the system) with "Zelda" in the name | 0 |
| Launch day system software includes an internet browser (general-purpose browser that deliberately allows access to the wider Web, like the 3DS or Wii U browser) | 0 |
| No launch SKU has 12GB RAM | 0 |
| No launch SKU has 256GB storage | 0 |
| Name of at least one launch SKU contains "XL" | 0 |
| The cheapest launch SKU costs ≤$300 | 0 |
| Will have some kind of "achievements" or "trophies" system (under any name) | 0 |
| The Joy-Cons have inside-out tracking (via camera or LiDAR) | 0 |
| Has a social media or video-sharing service called Vidmiio (announced or available by launch) | 0 |
| First-party Joy-Cons attach or detach using electromagnets | 0 |
| A launch SKU has Joy-Cons that have a non-grayscale shell (as opposed to the black shells in the trailer). | 0 |
| Option | Probability (%) |
| --- | --- |
| Make the bread taste good | 92 |
| Don't eat anything for at least 48 hours before eating the bread | 90 |
| Stretch-and-fold after mixing, 3x every 30 min | 88 |
| Create indentation, fill with melted cheese and butter | 85 |
| Bake on upside-down sheet pan, covered with Dutch oven | 85 |
| Resolve this option YES while eating the bread | 80 |
| Donate the bread to a food pantry, homeless person, or someone else in need | 72 |
| Use sourdough instead of yeast | 70 |
| Sprinkle 3 grams of flaky sea salt on top of each loaf before the second bake | 70 |
| Watch the video | 67 |
| Autolyse 20 minutes | 66 |
| 3 iterations of stretch-and-fold, at any time during the 14h waiting period. Minimum wait time between iterations 1 hour | 65 |
| Make a poolish 12 h ahead: 100 g flour + 100 g water + 0.8 g yeast (0.1 %). After it ferments, use this poolish in place of 100 g flour and 100 g water in the final dough. | 64 |
| Bake it with your best friend. | 63 |
| Add 50g honey | 62 |
| Swap 200ml water for milk | 62 |
| Incorporate a whole grain flour (buckwheat for example) | 59 |
| More steam! Either spritz with more water (preferably hot) or actually pour some boiling water in just before closing the lid. | 58 |
| Bake for an amount of minutes equal to the percent this market answer is at when it comes time to begin baking. (Maintain the ±3 minute tolerances and the 2:1 ratio of time before:after the water spritz.) | 52 |
| Use King Arthur Bread Flour instead of All-Purpose | 52 |
| Decompose it into infinite spheres, then a few parts per sphere, rotate the spheres by arccos(1/3), unite them and you will find 2 chilis (Banach-Tarski) | 52 |
| Let dough rise on counter only until double volume or 2h max, any time longer in fridge | 51 |
| Add lots of butter (0.2 ml per gram) | 51 |
| Use 50% whole grain flour | 51 |
| Ditch current process, do everything the same as the video | 50 |
| Toast the bread | 50 |
| Eat the bread while punching @realDonaldTrump in the face | 50 |
| Eat the bread while watching your mana balance steadily tick to (M)0 | 50 |
| Throw the bread at a telescope | 50 |
| Add 50g sugar | 50 |
| Put a baking rack in the Dutch oven before putting the loaf in, raising the loaf off the floor and lofting it over a layer of air. | 50 |
| Replace all water spritz steps with a basting of extra virgin olive oil. | 50 |
| Use flour made from an unconventional grain e.g. barley, millet, oats, rye, sorghum, maize etc. | 50 |
| Assume the chili is not in the interval [0,1], square it for more chili; if it is in (0,1), take the square root; else (equals 0 or 1) add 1 to it. (See the sketch after this table.) | 50 |
| Assume the chili is in the interval (0,1), square it for less chili; if it is in (1,infinity), take the square root; if it is in (-infinity,0), take the negative of the square of the chili; else (equals 0 or 1) subtract 1 from it. | 50 |
| Get your friends to help you make a batch ten times the size, but add a Pepper X (2.7M Scoville heat units) to the mixture | 50 |
| Add 1tsp of diastatic malt powder per 3cps of flour | 48 |
| replace 10% of flour with farina bona | 47 |
| Bake the bread into a fun shape, like a fish, or an octagon | 47 |
| While the bread is baking, tip every user who voted "Yes" on this option 25 Mana | 46 |
| Add 50g vital wheat gluten | 42 |
| Give ChatGPT your current recipe as well your take on what optimal bread tastes like, then take that advice for your next bake | 42 |
| Bread flour, 3x yeast, cut rise to ~3h | 41 |
| Use whole wheat to improve the nutrition of the bread | 41 |
| Add an amount of MSG equivalent to half the current salt content | 40 |
| Place small ice cubes between parchment and pot instead of water | 38 |
| Cook the bread with a rod/puck of aluminum foil (or similar) in the core in an attempt to conduct heat through the center of the bread, cooking it evenly like a doughnut. | 37 |
| Make all of the ingredients from scratch. | 35 |
| Add a pinch of sugar | 34 |
| Make the bread edible then throw it in | 34 |
| Buy bread from a michelin star restaurant. | 34 |
| Increase water by 50 g | 34 |
| Drink vodka while eating the bread | 34 |
| Cover bread with damp paper towel instead of initial water spritz. Rehydrate paper towel during 2nd spritz. Remove paper towel before placing on cooling rack. | 34 |
| Do FOLDED | 34 |
| Quit Manifold into the bread. | 34 |
| Kill the bread into Manifold. | 34 |
| Improve the bread | 33 |
| Start at 500F, drop to 450F and uncover half way through | 32 |
| Grind/powderize all salt used into a fine powder (with pestle & mortar or similar device) | 31 |
| it needs more salt | 31 |
| Add 1/2 cup yogurt to the bread and name the bread "gurt" while addressing it with "yo, gurt". | 28 |
| Half yeast | 27 |
| Ship a piece of the bread to a random person. | 26 |
| Encourage people to participate in the market in good faith while making the bread | 26 |
| Add 2g? of baking soda | 24 |
| Let dough sit 48 hrs | 24 |
| Resolve this option NO while eating the bread | 24 |
| put butter into it | 23 |
| Mix half sodium/potassium chloride | 22 |
| Add a tablespoon of sugar | 20 |
| Bake for 5 fewer minutes | 20 |
| Bake one more minute | 20 |
| Mail the bread to 1600 Pennsylvania Ave. Washington D.C. | 19 |
| Use tap water instead of fancy RO water | 18 |
| Frost it and put sprinkles on it to make it a birthday cake. | 18 |
| Add sawdust to increase the volume of the bread (but only like 10% sawdust by volume max. maybe 20% if it's good sawdust) | 17 |
| Add as many Jack Daniel's whiskey barrel smoking chips as feasible to the Dutch oven before baking, physically separating them from the bread as necessary while baking. | 17 |
| Eat the bread while sending all your mana to @realDonaldTrump | 17 |
| Bake the Manifold Crane into the Bread | 16 |
| Don't eat anything for at least 24 hours before eating the bread | 16 |
| Quadruple salt | 15 |
| Do all the changes in the top 5 open options by probability, excluding this option | 15 |
| Have someone sell the bread to you at an expensive price | 14 |
| Use lemonade instead of water. | 14 |
| Bake one fewer minute | 14 |
| Bake the cake while wearing a onesie. | 13 |
| Bake vegimite into it. | 12 |
| Bake for 5 more minutes | 11 |
| Replace salt with sugar | 11 |
| Eat the bread in front of the White House. | 10 |
| Bake vodka into it | 10 |
| Implement all options that resolved NO | 10 |
| Make the bread inedible then throw it out. | 10 |
| Replace flour with flowers | 10 |
| Throw the bread at @realDonaldTrump | 10 |
| Force Feed it to @realDonaldTrump | 10 |
| Make the bread great again | 9 |
| Cut the bread into the number of traders in the market slices. | 9 |
| Make naan bread, an easy-to-make bread | 8 |
| Only buy ingredients from 7/11. | 8 |
| Implementing every element listed below. | 8 |
| Put a non-lethal dose of any rat poison. | 8 |
| Just make donuts instead | 8 |
| Bake it in an easy bake kids oven | 7 |
| Think positive thoughts before tasting | 6 |
| Use a plastic baking sheet. | 6 |
| Eat the bread while betting yes on Cuomo on Manifold | 6 |
| Ditch all the steps. Just buy the bread from the supermarket | 6 |
| Double oven temperature | 6 |
| Halve oven temperature | 6 |
| Play classical music while baking | 5 |
| Light it on fire with birthday candles. | 5 |
| Bake it with a microwave | 5 |
| Eat the bread while betting yes on Mamdani on Manifold | 5 |
| Wear a suit while baking the cake. | 4 |
| Bake your social security number into it. | 4 |
| Bring it to Yemen and put a bomb in it | 3 |
| Bake America Great Again | 3 |
| Sacrifice a lamb | 2 |
| Add MAGA and a splash of Trump juice | 2 |
| Bake in a cat and a dog | 2 |
| Explode it: | 2 |
| Take a fat dump in the dough | 1 |
| Sit in dough 24 hrs | 1 |
| Let dough sit 24 hrs | 0 |
| Bake in rectangular tin | 0 |
| double yeast | 0 |
| halve salt | 0 |
| Double salt | 0 |
| Add 2tsp olive oil | 0 |
| Refrigerate dough instead of room temp wait | 0 |
| Do not mix salt and yeast in water together | 0 |
| Put fork in microwave | 0 |
| Don't eat anything for at least 12 hours before eating the bread | 0 |
| Add 2tbsp vanilla extract | 0 |
| Eat the bread with friends | 0 |
| Bake it in the country you were born in. | 0 |
| Eat the bread over the course of a week. | 0 |
| Bake the bread with love | 0 |
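Two of the options above define piecewise transformations of the market's chili rating. Read literally, they amount to the following maps; this is a minimal sketch of that literal reading, with function names of my own choosing:

```python
def more_chili(x: float) -> float:
    """'More chili' option, read literally: square outside [0, 1],
    square-root strictly inside (0, 1), add 1 at exactly 0 or 1."""
    if x < 0 or x > 1:
        return x ** 2
    if 0 < x < 1:
        return x ** 0.5
    return x + 1  # x is exactly 0 or 1

def less_chili(x: float) -> float:
    """'Less chili' option, read literally: square inside (0, 1),
    square-root above 1, negate the square below 0, subtract 1 at 0 or 1."""
    if 0 < x < 1:
        return x ** 2
    if x > 1:
        return x ** 0.5
    if x < 0:
        return -(x ** 2)
    return x - 1  # x is exactly 0 or 1
```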
| Option | Probability (%) |
| --- | --- |
| It will have a new character (not just a revision or morph) | 83 |
| It will be released first in 2025 | 82 |
| In mid-2026 there will be more concurrent players on StS 2 than on StS 1 | 66 |
| Jorbs will play it on Twitch live in the first week of release | 60 |
| There will be some game mechanic like ascension levels, but they will go higher or take more steps than StS 1's ascension 20 | 50 |
| The final boss will be the heart (or some obvious natural evolution of that character) | 50 |
| It will have four playable characters on release | 50 |
| All current cards for at least one character will be playable | 23 |
| It will be released first in 2024 | 0 |
| Option | Probability (%) |
| --- | --- |
| Other | 15 |
| BDSM | 8 |
| Power couple | 3 |
| "Fun while it lasted" | 3 |
| Me + *Her* | 3 |
| Keynesian beauty contest | 3 |
| Spinning rapidly in opposite directions | 3 |
| Polyamory | 2 |
| Monogamy | 2 |
| Relationship Futarchy | 2 |
| Human + Robot | 2 |
| Enemies to Lovers | 2 |
| Co-conspirators | 2 |
| Philosophically incompatible | 2 |
| Conjoined twinks | 2 |
| Horizontal gene transfer | 2 |
| borgie | 2 |
| Relationship Anarchy | 1 |
| Platonic | 1 |
| Wild Lovers | 1 |
| Friends with Benefits | 1 |
| Relationship constitutional republic: people in it have systemically different levels of power and decision-making influence | 1 |
| Forever alone | 1 |
| Glucose guardian/splenda spender/sugar daddy | 1 |
| Hobbesian | 1 |
| Hive / Swarm / Superorganism | 1 |
| Traditional marriage of two cis straight White Christian vanilla people where the man "leads" and is 0-4 years older than the woman, with 2.5 kids (not counting any LGBTQIA+ ones) and a McMansion with 2.5 SUVs/trucks | 1 |
| Universal love | 1 |
| merged consciousness | 1 |
| Rationalussy | 1 |
| Vespertine | 1 |
| liking/reacting to even their most meaningless posts and comments on social media and prediction markets | 1 |
| Matriarchy | 1 |
| Labor Union | 1 |
| Socratic | 1 |
| One with an IPO (Intimate Partnership Offer) | 1 |
| Just being one helluva slut | 1 |
| Beard | 1 |
| Paradixical | 1 |
| Friends with Detriments | 1 |
| NP-complete | 1 |
| The Apprentice | 1 |
| Hell's Kitchen | 1 |
| Stockholm Syndrome | 1 |
| London Syndrome | 1 |
| Financial Domination | 1 |
| The Price is Right | 1 |
| Crows | 1 |
| Single by choice (except it's other people's choice) | 1 |
| Budget horse | 1 |
| assortative | 1 |
| Hubris quest | 1 |
| Fiends with benefits | 1 |
| Envenomation | 1 |
| Polygamy | 0 |
| Polyandry | 0 |
| Ethical Non-Monogamy | 0 |
| Unethical Non-Monogamy | 0 |
| Celibacy | 0 |
| Aromanticism | 0 |
| Solo poly | 0 |
| Open relationship | 0 |
| Polyfidelity | 0 |
| What's a "relationship"? | 0 |
| Reflexive | 0 |
| Symmetric | 0 |
| Transitive | 0 |
| Metamour | 0 |
| Socially enforced monogamy | 0 |
| Unreciprocated love | 0 |
| Asexual romance | 0 |
| Abusive | 0 |
| "This is my emotional support Ex" | 0 |
| FWB | 0 |
| Enemies-To-Lovers | 0 |
| Codependent | 0 |
| Transference | 0 |
| Monogamish | 0 |
| Voyeuristic | 0 |
| Love triangle | 0 |
| Romantic Mutual Suicide | 0 |
| Mutuals | 0 |
| Animalistic | 0 |
| Human + AI | 0 |
| parasocial | 0 |
| Polycule | 0 |
| Cuckold | 0 |
| Throuple | 0 |
| Boyce-Codd normal form | 0 |
| Aristotelian | 0 |
| Partners in crime, like Caroline Ellison and Sam Bankman-Fried | 0 |
| I like big butts and I cannot lie | 0 |
| I like big butts and I cannot tell the truth, how will you escape our dungeon | 0 |
| "I'll never admit to anyone that we met on Tinder" | 0 |
| Option | Votes |
| --- | --- |
| YES | 19342 |
| NO | 7418 |
| Option | Votes |
| --- | --- |
| YES | 14631 |
| NO | 6838 |
| Option | Probability (%) |
| --- | --- |
| Build, debug, and test, until it is of sufficient quality, a complex piece of software like a mobile app, including a backend service | 60 |
| Beating Pokemon games | 56 |
| HCAST - METR | 52 |
| Manually rearrange and overlap 100 random images in an image editor (with no other kinds of edits) to create a recognizable portrait | 52 |
| ARC-AGI (any version) | 50 |
| Wozniak Coffee Test (requires controlling a robot) | 48 |
| Opinion poll of Manifold userbase | 34 |
| Predict the output of an arbitrary set of NAND gates and inputs (see the sketch after this table) | 30 |
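The last option asks a system to predict the output of an arbitrary network of NAND gates. Since NAND is functionally complete, any Boolean circuit can be posed this way, and a reference evaluator to check predictions against takes only a few lines. Below is a minimal sketch; the network encoding (gates listed in topological order, each reading two earlier signals) is my own choice, not anything specified by the market:

```python
def eval_nand_network(inputs: list[bool], gates: list[tuple[int, int]]) -> list[bool]:
    """Evaluate a feed-forward NAND network.

    `gates` lists each gate in topological order as a pair of signal
    indices; signals 0..len(inputs)-1 are the inputs, and each gate
    appends one new signal. Returns the gate outputs in order.
    """
    signals = list(inputs)
    for a, b in gates:
        signals.append(not (signals[a] and signals[b]))
    return signals[len(inputs):]

# Example: XOR(x, y) from the classic four-NAND construction.
# Signal indices: 0 = x, 1 = y, then gate outputs 2..5.
xor_gates = [(0, 1), (0, 2), (1, 2), (3, 4)]
for x in (False, True):
    for y in (False, True):
        print(x, y, "->", eval_nand_network([x, y], xor_gates)[-1])  # matches x != y
```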