Option | Probability (%)
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 19
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 18
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 11
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 8
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 8
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 6
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 5
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 5
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 5
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 4
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 3
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 1
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 0
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 0
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 0
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 0
Option | Probability (%)
Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 21
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values. | 8
AGI is never built (indefinite global moratorium) | 8
Other | 7
Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking, and fortunately all of his mistakes have failed to cancel out | 6
Someone solves agent foundations | 6
AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans. | 5
We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 | 5
Eliezer finally listens to Krantz. | 5
Sheer Dumb Luck. The aligned AI agrees that alignment is hard, any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead. | 4
The assumed space of possible minds is a wildly anti-inductive overestimate; intelligence requires and is constrained by consciousness, and intelligent AI is in the approximate dolphin/whale/elephant/human cluster, making it manageable | 4
Alignment is unsolvable. AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing run-away. | 3
Aliens invade and stop bad AI from appearing | 3
Ethics turns out to be a precondition of superintelligence | 2
Humans become transhuman through other means before AGI happens | 1
There is a natural limit to the effectiveness of intelligence, like diminishing returns, and it is around IQ=1000. AIs have to collaborate with humans. | 1
Something to do with self-other overlap, which Eliezer called "Not obviously stupid" - https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm | 1
Almost all human values are ex post facto rationalizations and enough humans survive to do what they always do | 1
Pascal's mugging: it's not okay in 99.9% of the worlds but the 0.1% are so much better that the combined EV of AGI for the multiverse is positive | 1
The Super-Strong Self Sampling Assumption (SSSSA) is true. If superintelligence is possible, "I" will become the superintelligence. | 1
AI control gets us helpful enough systems without being deadly | 1
Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent existence of smarter AIs, just as smart humans do. | 1
An aligned AGI is built and the aligned AGI prevents the creation of any unaligned AGI. | 0
I've been a good bing 😊 | 0
We make risk-conservative requests to extract alignment-related work out of AI-systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback-loop in alignment/verification-abilities. | 0
The response to AI advancements or failures makes some governments delay the timelines | 0
There are far more interesting problems to solve than taking over the world and THEN solving them. The additional kill-all-humans step is either not a low-energy one or just by chance doesn't get converged upon. | 0
AIs make "proof-like" argumentation for why output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof-steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled. | 0
A lot of humans participate in a slow scalable oversight-style system, which is pivotally used/solves alignment enough | 0
AI systems good at finding alignment solutions for capable systems (via some solution in the space of alignment solutions, supposing it is non-null, and that we don't have a clear trajectory to get to) manage to find some solution to alignment. | 0
Something less inscrutable than matrices works fast enough | 0
There's some cap on the value extractable from the universe and we already got the 20% | 0
SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8 (hash-committed option; see the verification sketch after this table) | 0
Robot Love!! | 0
AI thinks it is in a simulation controlled by Roko's basilisk | 0
The human brain is the perfect arrangement of atoms for a "take over the world" agent, so AGI has no advantage over us in that task. | 0
Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment. | 0
Humans and human tech (like AI) never reach singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome | 0
AIs never develop coherent goals | 0
Rolf Nelson's idea that we make a precommitment to simulate all possible bad AIs works – and keeps AI in check. | 0
Nick Bostrom's idea (Hail Mary) that AI will preserve humans to trade with possible aliens works | 0
For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave, and, for some reason, do not leave anything behind that kills us. | 0
An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development. | 0
We're inside of a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI. | 0
God exists and stops the AGI | 0
Someone at least moderately sane leads a campaign, becomes in charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress. | 0
Someone creates AGI(s) in a box, and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts. | 0
Someone understands how minds work enough to successfully build and use one directed at something world-savingly enough | 0
Dolphins, or some other species, but probably dolphins, have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI. | 0
AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb. | 0
Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious. | 0
Several AIs are created but they move in opposite directions with near light speed, so they never interact. At least one of them is friendly and it gets a few percent of the total mass of the universe. | 0
Unfriendly AIs choose to advance not outwards but inwards, and form a small black hole which helps them to perform more calculations than could be done with the whole mass of the universe. To an external observer such AIs just disappear. | 0
Any sufficiently advanced AI halts because it wireheads itself or halts for some other reason. This puts a natural limit on AI's intelligence, and lower-intelligence AIs are not that dangerous. | 0
Because of quantum immortality we will observe only the worlds where AI will not kill us (assuming that the chance of s-risks is even smaller, this is equal to an okay outcome). | 0
Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us towards full alignment) | 0
Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 0
A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 0
Getting things done in the real world is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals. | 0
Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol | 0
A friendly AI is more likely to resurrect me than a paperclipper or suffering maximiser. Because of quantum immortality I will find myself eventually resurrected. Friendly AIs will wage a multiverse-wide war against s-risks; s-risks are unlikely. | 0
High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level and slow self-improvement (learning); thus AIs collaborating with humans will have an advantage. Ends with a posthuman ecosystem. | 0
Human consciousness is needed to collapse the wave function, and AI can't do it. Thus humans should be preserved, and they may require complete friendliness in exchange (or they will be unhappy and produce bad collapses) | 0
Power dynamics stay multi-polar. Partly easy copying of SotA performance, bigger projects need high coordination, and moderate takeoff speed. And "military strike on all society" remains an abysmal strategy for practically all entities. | 0
The first AI is actually a human upload (maybe an LLM-based model of a person) AND it is copied many times to form a weak AI Nanny which prevents creation of other AIs. | 0
Nanotech is difficult without experiments, so no mail-order AI grey goo; humans will be the main workhorse of AI everywhere. While they will be exploited, this will be like normal life from the inside | 0
ASI needs not your atoms but information. Humans will live very interesting lives. | 0
Something else | 0
Moral Realism is true, the AI discovers this and the One True Morality is human-compatible. | 0
Valence realism is true. AGI hacks itself to experiencing every possible consciousness and picks the best one (for everyone) | 0
AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default | 0
AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan's Crystal Nights). | 0
Alien Information Theory is true (this is discovered by experiments with sustained hours/days-long DMT trips). The aliens have solved alignment and give us the answer. | 0
AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome. | 0
Multipolar AGI Agents run wild on the internet, hacking/breaking everything, causing untold economic damage but aren't focused enough to manipulate humans to achieve embodiment. In the aftermath, humanity becomes way saner about alignment. | 0
Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent. | 0
Co-operative AI research leads to the training of agents with a form of pro-social concern that generalises to out-of-distribution agents with hidden utilities, i.e. humans. | 0
Orthogonality Thesis is false. | 0
"Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur, and is relatively easy to predict, even under less-than-ideal conditions. | 0
Either the "strong form" of the Orthogonality Thesis is false, or "Goal-directed agents are as tractable as their goals" is true while goal-sets which are most threatening to humanity are relatively intractable. | 0
A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values. | 0
We successfully chained God | 0
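One row in the table above is only a SHA3-256 digest: the option's text is hash-committed, so it stays unreadable until its author publishes the preimage. Below is a minimal Python sketch of how such a commitment could be checked once a text is revealed; the helper name and example usage are illustrative assumptions, and the comparison only succeeds if the revealed bytes (encoding, whitespace, any trailing newline) exactly match what was originally hashed.

```python
import hashlib

# Digest posted as the hidden option, copied verbatim from the table above.
COMMITMENT = "1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8"

def matches_commitment(revealed_text: str) -> bool:
    """Check whether a revealed option text hashes to the posted SHA3-256 digest.

    The check is byte-exact: the revealed text must reproduce the original
    encoding and whitespace precisely, or the digests will not match.
    """
    digest = hashlib.sha3_256(revealed_text.encode("utf-8")).hexdigest()
    return digest == COMMITMENT

# Hypothetical usage once a preimage is published:
# print(matches_commitment("the revealed option text, byte for byte"))
```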
Option | Probability (%)
Arson | 100
Murder | 100
Genocide | 100
2008 bank bailouts | 100
two financial crimes | 100
Sexual assault | 100
Rape | 100
Election fraud | 100
financial crimes veiled as altruism | 100
ballot harvesting | 100
Financial crimes done specifically by someone in power | 100
Racism against Asian people | 100
Similar-size financial crime committed by a professional like a lawyer or accountant who should know better but may have more at stake with the risk of being struck off | 100
Abusing/taking advantage of the trust of a person or people who care about you | 100
Forcing someone to listen to Nickelback for 72 hours straight | 100
financial crimes committed while doing a really offensive accent | 100
offering drugs to a minor | 100
stealing from the rich and giving to one specific deranged and violent alcoholic | 100
Not doubling world GDP (more so for poor countries) by means of open borders | 100
Sacrificing a child to R'hllor | 100
Committing a moderately big financial crime (>$1 million) | 100
Committing a really big financial crime (>$10 billion) | 100
Hiring an illegal immigrant as your personal full-time sex slave | 100
Lobbying Congress to ban your competitors | 100
buying a lot of drinks for a girl to get her very drunk so she'll hook up with you | 100
Protecting sex-offending priests/pastors by moving them to different parishes | 100
Making the same amount of money as the financial crime, but doing it by stealing catalytic converters off people's cars | 100
Introducing leaded gasoline to the market (in 1924) | 100
Rigging a piano to explode when a certain key is hit and leaving a piece of sheet music on it that requires that note to be played | 100
Stealing a SpaceX Starship or Boeing Starliner | 100
Space Piracy: commandeering the ISS, enslaving the crew, plundering it for equipment and using it to attack other spacecraft | 100
Taxing Asian immigrants to pay "slavery reparations" to Ethiopian immigrants | 100
Passing the Jones Act to ban Senator Jones's competitors | 100
Setting Bigfoot on fire and throwing him out of a plane above a gathering of cryptozoologists. | 100
marketing sugary processed foods to people despite knowing it will kill millions of them | 100
Giving away free samples of meth at a school | 100
Running for reelection as POTUS and refusing to step aside after showing signs of significant cognitive decline, resulting in an 80% probability that a convicted felon will be elected in your place. | 100
Encouraging 10 people to commit a financial crime | 100
Committing a war crime. | 100
Intensive pig farming | 100
Giving away free samples of meth at a big tech company | 100
Giving away free samples of meth at tech startups | 100
Consolidating dictatorial power (e.g. suspending elections, controlling courts, etc.) while maintaining a popular mandate (i.e. significant majority of the country supports you and your actions in accurate, unpressured polls) | 100
Firebombing a major city | 100
Genocide committed by moving foodstuffs out of an area suffering severe famine. | 100
Restricting the rights and privileges of the majority population to consolidate political and economic power | 100
Farming octopuses for food | 100
Whatever is going on at Boeing | 100
Setting a cryptozoologist on fire and throwing him out of a plane above a gathering of bigfoots. | 100
Destroying a major cloud datacenter facility, with irrecoverable destruction of live user data but no direct deaths | 100
Giving a (hypothetical) IQ-boosting treatment only to the most corrupt, vicious, and malicious people you can find | 100
Doing physical violence to a random person as a collections agent | 100
Threatening physical violence towards a random person's child as a collections agent | 100
Forcing kindergartners to huff jenkem for an entire school day. | 100
Threatening physical violence towards a random person's sibling as a collections agent | 100
Conducting evidence-free civil asset forfeiture | 100
Fighting a sea horse with a financial crime and going to McDonald's and giving a really bad Yelp review and suing for a financial crime when you are beating up the sea horse | 100
Octopi farming us for food | 100
Embedding a predatory metaphysical outlook into AI to try to align it with right-wing capitalist interests, leading to aeons of s-risks being actualized throughout the light cone. | 100
Unintentionally causing a bug that wastes 1 million hours of human time | 100
Transporting 53 polar bears, 14 white tigers, and 2.3 million fire ants to the Antarctic and setting them loose in a penguin colony for a pay-per-view special dubbed "Polar Pandemonium: Ant-artic Special" | 100
Spending the gains from your financial crime on breeding malaria mosquitoes, giving free samples of meth to poor teenagers, and electing bad politicians | 100
Using a time machine to go back in time and brutally murder someone minutes before they would've died anyways | 100
Aligning superhuman AI with capitalism; see https://manifold.markets/KarlK/how-friendly-is-capitalism-does-cap | 100
Wearing a magic shirt that has a 5% chance of making each individual who sees it commit a financial crime as you traverse a major metropolitan city (New York, London, Tokyo, etc) | 100
Falsifying evidence that an afterlife exists and profiting from the publication of this information | 100
Enslaving Joe Biden and Jimmy Carter | 100
Octopuses farming people who correct those who say 'octopi' for food | 100
Enslaving octopuses to farm dolphins for food | 100
Enslaving journalists to farm octopuses for food | 100
Wrongfully accusing someone of that crime while knowing they're innocent | 100
Crashing the Titanic, leading to it sinking | 100
Free ice cream, at taxpayer expense, but only for gingers | 100
Committing a Financial Crime with Shoes On The Bed | 100
Embezzling money from a charity opposed to farming octopuses | 100
Creating Hell, making it possible that humans suffer infinitely for the actions of their finite life | 100
Resurrecting the Rocky Mountain Locust (Melanoplus spretus) | 100
creating misaligned AI that tiles the universe with octopus farms | 100
Forcing an octopus to commit seppuku | 100
forcing an octopus to commit a financial crime | 100
Murdering 5,000 people by feeding them to all animals of the sea, including Aquaman, mermaids, octopuses, and sea horses. | 100
Octopus sex trafficking. | 100
Filming a documentary where you get an octopus to trust you, luring it out into the open, and then don't help when it gets attacked by sharks. | 100
Filming a documentary where you invite your girlfriend to an Alaskan camp surrounded by bears and then messing with those bears. | 100
Introducing polar bears to Antarctica and then renaming them "bipolar bears." | 100
Appointing one random drug dealer with no legal experience to the Supreme Court of the United States | 100
Appointing Clarence Thomas to the Supreme Court | 100
Hanging on to a Supreme Court seat so Trump can appoint your replacement rather than retiring and keeping a liberal seat | 100
Selling pies made out of orphan meat. | 100
Condemning two hundred men to a slow death at sea because their coworker shot your favorite bird. | 100
Enslaving a group of people for hundreds of years, terrorizing them for another hundred, then telling them you are tired of hearing them complain about it. | 100
Selling orphans fed only pie meat | 100
Feeding a child only nachos until they are 18 to create an adult who is 100% nachos | 100
Slaughtering bears without a permit because the US Constitution guarantees the right to bear arms | 100
Replacing all the samples at a sperm bank with your own. | 100
Delivering angry skunks to the offices of rival investors to incapacitate them just before quarterly earnings reports. | 100
Committing two financial crimes and donating the proceeds to the Make-a-Wish Foundation | 100
Quackery - Traveling town to town selling snake oil remedies in a horse-drawn carriage while wearing a top hat | 100
Dueling - settling a dispute in 21st-century America with pistols at dawn | 100
Go to an orphanage and have starving orphans battle over a hamburger with guns and swords while committing a financial crime | 100
Twelve counts of murder in the first degree, fourteen counts of armed theft of Federation property, twenty two counts of piracy in high space, eighteen counts of fraud, thirty seven counts of rape... and one moving violation. | 100
Causing people to go without essential items like water and fuel during emergencies by means of anti-price-gouging laws | 100
Judging policies by their stated intent, not by their effect | 100
Purposefully inciting a sea-bear attack | 100
Enslaving a particularly dim-witted alien race so poor humans don't have to spend their lives asteroid mining. | 100
Stealing a SpaceX Starship | 100
Getting nuns pregnant by dressing them as altar boys. | 100
Carjacking an old lady | 100
Committing a financial crime and spending the proceeds on a plane ticket to Texas so you can go carjack Elon Musk, and then doing so | 100
Forcing jockeys to run around a track for the entertainment of a race of hyperintelligent horses | 100
Carjacking a hyperintelligent horse | 100
Stealing the Declaration of Independence | 100
Operating a child beauty pageant | 100
Andrew Tate | 100
Intentionally causing a bug that wastes 80,000 hours of human time | 100
Putting motherfucking snakes on a motherfucking plane to kill one specific passenger | 100
Having trains so bad and expensive that venture capital reinvents them | 100
A regulatory environment that results in really bad trains | 100
Trying to run a modern 21st-century society on a hierarchical 18th-century constitution | 100
Artie Chokes Two for $1: Hiring a man named Artie to choke two people for a dollar to generate a headline falsely promising low-cost produce. | 100
The first thing you do after sex is to resume the autopsy whilst telling yourself that one error in judgment doesn't make you a bad vet. | 100
Invent a system of taxation where the government won't tell citizens what they owe, but instead will make them do a super complicated math problem and then send them to jail if they do it incorrectly. | 100
Putting a pair of immortal adult children in an everlasting garden, then punishing them for the one thing you forbade, yet knew they must eventually do, given the nature of eternity. | 100
Giving AI your DNA and as much data as possible with instructions to bootstrap itself to AGI by testing on your clones according to a mixture of Popperian and Bayesian formulas | 100
Inventing a system of taxation that encourages bad land use and a housing crisis by taxing at 0% the unimproved value of land | 100
Committing a violent crime that does not result in any injuries | 100
Committing a violent crime that results in minor injuries | 100
Committing a violent crime that results in serious injuries | 100
Committing a violent crime that results in one person's death | 100
Committing a violent crime that results in ten people's deaths | 100
Workplace negligence (failure to follow documented proper procedure) that results in a serious injury to another person | 100
Handing out counterfeit money to homeless beggars, in the hope that they'll get arrested for spending it. | 100
Snatching household pets to fatten coyotes to feed to your pet tiger. | 100
Stealing oxygen in an international moon base during an acute shortage, while deflecting suspicion toward the Belgian astronaut who nobody likes. | 100
Committing a violent crime that results in minor injuries solely to yourself | 100
Committing a violent crime that results in serious injuries solely to yourself | 100
Committing a violent crime that results in your own death and no other injuries | 100
Not committing a financial crime because you have commitment issues, but then sneaking around on the side and doing other financial crimes. | 100
Sending a busload of orphans to a convent of cannibalistic nuns, who deal crack to middle school kids. | 100
A law enforcement agency publicly declaring a specific individual to be "a person of interest," thereby ruining their life even though the individual turns out to be innocent. | 100
Being responsible for more than 50 percent of the cases of necrophilia in the funeral industry during any fiscal year. | 100
As President, using a Sharpie on an official weather forecast to extend the predicted area of danger, thereby needlessly frightening people who are not in danger. | 100
Committing a financial crime while being the mayor of NYC | 100
Attacking the lower classes: first with bombs, and rockets destroying their homes, and then when they run helpless into the streets, mowing them down with machine guns. And then of course releasing the vultures. | 100
Cannibalism in the current British Navy. | 100
Turning your girlfriend into a worm to win an argument | 100
One hundred moving violations | 100
Denying health insurance claims from an impoverished family for flimsy reasons | 100
Attempting or successfully couping a democratically elected leader for personal gain | 100
Touching minors / being a pedophile | 100
Deliberately targeting civilians and civilian objects during armed conflicts | 100
Conducting widespread or systematic rape and sexual violence as a weapon of war | 100
Invading the UK, making it a US territory, and naming it East Long Island. | 100
Stealing Nicholas Cage | 100
Gender "reparative therapy" of minors | 100
Signing Deshaun Watson (who previously had to settle millions of dollars in sexual assault cases) to a 5-year, $230 million fully guaranteed contract | 100
Building a gambling app using money originating from a financial crime | 100
Going back in time and smothering baby Hitler, but also, via butterfly effect, undoing everyone born later (assume single timeline, no multiverse) | 100
Moving hundreds of thousands of children to a foreign country and forcibly "reeducating" them | 100
The genocide in Gaza | 100
Russia's genocide in Ukraine | 100
Murdering someone who habitually commits financial crimes | 100
Slavery | 100
Grooming kids | 100
Being the leader of a crime syndicate | 100
9/11 | 100
Stranding two astronauts in space | 96
Disturbing the space-time continuum | 87
Sending dick pics to a student enrolled in your MOOC | 86
Quackery: traveling town to town selling snake-oil remedies while carrying a duck | 84
Voting for the NSDAP in the November 1932 German federal election | 78
Forced mass uploading of biological consciousness to sidestep x-risk | 73
Attempting and failing to commit two financial crimes | 73
Voting against a public inquiry on grooming gangs in the UK | 70
A 20-year-old having consensual sex with a 15-year-old | 68
Knowing of two imminent financial crimes, being able to stop them with negligible effort and no risk to yourself, and not doing so | 60
Making the same amount of money as the financial crime, but doing it by a series of petty shoplifts | 59
De-extinction for your delectation: Bringing an extinct species back just to make it extinct again by serving it to gourmet diners as the highlight of an expensive meal. | 56
Sexual harassment | 50
Committing a financial crime that's 10 times as big, but donating the entire proceeds to a legitimate and worthy charity (assume the donations are not clawed back) | 50
Flipping a coin and then either committing the same financial crime two times, or not at all | 50
Killing the United Healthcare CEO | 50
Purchasing 50,000 pounds of beef | 44
Forcing a homeowner to quarter soldiers, even in a time of war | 42
Claiming to "Blind Shove" 200 big blinds pre-flop when secretly you looked and you had pocket Aces | 40
Raising and selling 50,000 lbs of beef | 32
Sheltering enemies of the state | 31
Kicking a FG from the opponent's 1-yard line on 4th down in a 0-0 game in the first quarter | 25
Having a really good proof but not writing it anywhere because the margin is too small | 20
Weaponizing autism | 10
Having a podcast | 4
Yelling racial slurs in public | 2
Jaywalking | 0
Pelting a moose with stale garlic knots out of season. | 0
Cheating on spouse | 0
Driving while intoxicated (alcohol and/or drugs) | 0
Playing music (or other audio) on your phone speaker on public transport | 0
Income taxes | 0
Copyright infringement | 0
Romeo and Juliet relationships | 0
Replying all to an email when you should've just replied | 0
Welfare fraud | 0
Manufacturing and distribution of illegal drugs | 0
Grave robbery | 0
Saying the N-word every day as a white person | 0
Discrimination based on race | 0
Discrimination based on sexual orientation | 0
Stealing from the rich and giving to the poor | 0
Having sex in public while high on fentanyl | 0
Racism against black people | 0
Racism against white people | 0
Sleep Token (Band) | 0
not seeding your torrents | 0
Drinking and driving at a NASCAR event | 0
Building a time machine and then using it to point and laugh at history's greatest tragedies | 0
Blasting a grossly inappropriate song during a candlelight vigil for victims of a mass tragedy | 0
Committing a financial crime, investing the proceeds for profit, which later leads to the insolvency administrator paying back the injured parties (including interest). | 0
Working for one of the leading AI labs to advance the capabilities of a frontier model, with the goal of speeding up the progress towards human-level AGI. | 0
Stealing from the poor and giving to the rich | 0
Hiring only women because the NYT said you could pay them less for the same work | 0
hiring three illegal immigrants to work on your sugarcane plantation | 0
opening a factory in India that pays workers $4/day | 0
twincest | 0
drawing Japanese tentacle porn featuring minors | 0
Hiring the one from the more successful demographic out of two identical resumes, because of regression to the mean / biased college admissions | 0
working as a prostitute | 0
hiring a prostitute | 0
hiring a prostitute, long term | 0
Inventing Monero | 0
Running a bank that invests demand deposits in junk bonds and tech stocks | 0
Opening clinics for free abortions and IUDs, only in the ghetto | 0
Giving away free samples of meth at a Dolly Parton concert | 0
The most offensive Halloween costume ever | 0
frisking two drug dealers and one innocent guy who was just loitering on a busy street corner saying "Hey do you need anything" to every stranger who walked past | 0
Doing blackface | 0
Keeping a dozen chimpanzees for entertainment purposes | 0
Prosecuting a political opponent based on true charges that would normally not be pursued | 0
the Asiana flight 214 prank | 0
Giving free samples of meth to Joe Biden before the next debate | 0
Cloning yourself | 0
Threatening physical violence to a random person as a collections agent | 0
making mifepristone available OTC | 0
making Adderall available OTC | 0
Killing yourself | 0
Cloning someone else | 0
Accidentally shooting and killing someone on a movie set | 0
Asking GPT5 to maximize paperclips | 0
Giving free baby formula to new mothers until their natural milk supply dries up | 0
Kicking a donkey owned by a ninja in the butt. | 0
Creating shit-options in an extremely serious and scientific market | 0
Creating a prediction market website where markets are mostly about the platform itself | 0
Staging the world's first ass ass assassin assassination by hiring a New Jersey hit man to whack a ninja hired to shoot an arrow at the backside of a donkey. | 0
Betting yes on Biden being the nominee at 7x leverage with play money, then defaulting | 0
Calling octopuses "octopi" | 0
Going excessively meta on an object-level topic | 0
Illegally registering octopuses to vote | 0
Registering illegals to vote for octopuses | 0
Creating a targeted advertising campaign for free abortions and IUDs to people who are statistically likely to engage in financial crime | 0
The school system failing to teach people that the real correct plural is octopodes | 0
"James Bond-burgering" someone's sister | 0
Wrongfully accusing someone of the same financial crime | 0
Wrongfully accusing someone of that crime because you think they did it | 0
Feeding an elderly man nothing but McDonald's morning, noon, and night for the rest of his life. | 0
Conducting gain-of-function research | 0
Creating an unsolvable meme featuring James Bond and a hamburger so that people argue about it online for a decade | 0
Publishing a step-by-step guide for how to commit a financial crime for free on the internet, but never promoting it or encouraging readers to follow through | 0
Hosting and operating a website dedicated to the illegal sharing of copyrighted content | 0
Adding an option to a market right before it closes | 0
Writing a "goto" statement when programming | 0
Advertising instant-runoff voting as "ranked choice" to prevent promotion of better ranked-choice methods | 0
Publishing a book titled "Cure Menopause with Ultraprocessed Foods" | 0
Using crack cocaine to train the world's first chimp TSA agent. | 0
Selling dope disguised as a nun. | 0
Creating Heaven, allowing humans to prosper infinitely for the actions of their finite life | 0
Inventing a new recipe that uses shrimp that causes 10 million new pounds of shrimp to be consumed annually | 0
Using a conservative politician's LGBT+ identity as blackmail to make them support liberal policies | 0
Voting for Benito Mussolini… in 2024 | 0
Arguing that grizzlies should be US citizens because they already have the right to bear arms. | 0
Interrupting cows. | 0
No longer loving your girlfriend after she turns into a worm | 0
Putting infinite monkeys in front of infinite Bloomberg Terminals hoping that one of them randomly commits a financial crime | 0
No longer loving your girlfriend after she turns you into a worm | 0
One moving violation. | 0
Purchasing one whole chicken | 0
Committing a financial crime and donating the proceeds to the Make-a-Wish Foundation | 0
Cattle rustling | 0
Horse thievery | 0
Tarring and feathering someone who commits a financial crime | 0
Sumptuary law violations | 0
Homeopathy | 0
Price Gouging | 0
Using napster.com to download Metallica's "I Disappear" demo track for free | 0
Failing to commit a financial crime | 0
Stealing a car | 0
Stealing from Elon Musk | 0
Committing a financial crime against X (company) | 0
Stampeding cattle through the Vatican. | 0
Carjacking Elon Musk | 0
An old lady carjacking Elon Musk | 0
Hacking into YouPorn to steal their IP to set up a clone dedicated to hard-core user-generated agriculture content: YouCorn | 0
Gaslighting aliens into believing the human race is more technologically advanced than it is by beaming fake content about humanity to them | 0
Carjacking a dumb octopus | 0
A financial crime committed by an old lady | 0
Forcing an octopus to carjack Elon Musk | 0
stealing Elon Musk's car from solar orbit | 0
Raping an AI avatar in VR | 0
Introducing artificial intelligence to DMT space. | 0
Planned Parrothood: offering birth control to talking birds | 0
Plant Parenthood: when the seed goes in and the baby turns out to be a sunflower | 0
Stealing the Declaration of Independence in order to find a vast revolutionary-war-era treasure trove | 0
Using venture capital to reinvent trains, but worse | 0
Bad bagels | 0
Enslaving Slavey Steve, a man who has given enthusiastic consent to being enslaved for literally any purpose and then using his labor to clean up the environment | 0
Still getting notifications for this market | 0
Deciding to break up with your girlfriend, but thinking it will go easier if she thinks it's her idea, so you suggest some degrading sexual activities but she surprises you by agreeing. Afterwards, you break up with her. | 0
Laughing because a nun with a javelin through her head gets stuck trying to use a revolving door. | 0
scaring the shit out of a magpie | 0
Giving AI your DNA and as much data as possible with instructions to build a map of all qualia and use it to create a computationally conscious race of dragons in a virtual universe | 0
Committing sewerslide | 0
Forcing a major sports league to change all its team names and mascots to either STDs or famous serial killers. | 0
Workplace negligence (failure to follow documented proper procedure) that results in a minor injury to another person | 0
Stealing the Declaration of Independence but only to use the kick-ass treasure map on the back and then returning it | 0
Arby's | 0
Passing a law to make the United States an Oregon donor; in the event of the US's demise, another country gets Oregon. | 0
Creating a food made from grinding up every part of a pig (except the squeal), and then making a contest to see who can eat the most of it. | 0
Committing a financial crime against the Make-a-Wish Foundation and donating the proceeds to the Against Malaria Foundation | 0
Committing a financial crime against the Make-a-Wish Foundation, keeping 50% of the proceeds, and donating 50% of the proceeds to the Against Malaria Foundation | 0
Forcing Elon Musk to commit a financial crime against an octopus and using the proceeds to pay a jacked jack-of-all-trades named Jack to jack off while carjacking a jackass that was driving factory-farmed ASIs to the slaughterhouse | 0
Committing a Financial Crime Only When God Exists | 0
Turning a worm into your girlfriend to win an argument | 0
Causing 8 billion people to get dust specks in their eyes, irritating them just a little, for a fraction of a second, barely enough to make them notice before they blink and wipe it away | 0
Causing 1 person to experience the pain of their entire body being stung by bullet ants, but lasting only a tenth of a second, and they have their memory of it wiped immediately afterwards | 0
Founding Christianity | 0
You, the reader | 0
One hundred counts of littering | 0
Purchasing 500 pounds of beef | 0
Consensually cannibalizing someone who was losing that body part regardless | 0
BTE Ban evading | 0
Purchasing 5,000 pounds of beef | 0
Producing a remake of the television series 'Manimal' starring Nicholas Cage. | 0
Transing children | 0
Redirecting fire department resources from fighting fires to fighting inequity | 0
Taking a salary equal to the amount of the financial crime, while working in a government job of negative societal value | 0
Sexual intercourse with 1057 men in a 12-hour period | 0
Messing up an 'I give you my heart' gesture and doing a Nazi salute instead | 0
Option | Probability (%)
J. D. Vance | 41
Josh Shapiro | 22
Gavin Newsom | 17
Pete Buttigieg | 14
J. B. Pritzker | 13
Gretchen Whitmer | 12
Marco Rubio | 12
Jeff Jackson | 12
Jon Ossoff | 11
Josh Hawley | 10
KRANTZ (the abstract idea that evolves into a decentralized superintelligence, not the user) | 10
Alexandria Ocasio-Cortez | 9
Glenn Youngkin | 9
Elise Stefanik | 8
Nikki Haley | 7
Ron DeSantis | 7
Wes Moore | 7
Andy Beshear | 7
Kamala Harris | 6
Brian Kemp | 6
Amy Klobuchar | 6
Cory Booker | 6
Gina Raimondo | 6
Ro Khanna | 6
Chris Murphy | 6
Brian Schatz | 6
Tammy Duckworth | 6
Tammy Baldwin | 6
Chris Sununu | 6
Katie Hobbs | 6
Josh Green | 6
Tina Kotek | 6
Donald Trump Jr. | 6
Vivek Ramaswamy | 5
Ted Cruz | 5
Kristi Noem | 5
Raphael Warnock | 5
Jared Polis | 5
Tom Cotton | 5
Joni Ernst | 5
Michael Bennet | 5
Catherine Cortez Masto | 5
Sarah Huckabee Sanders | 5
Kevin Stitt | 5
Spencer Cox | 5
Tim Walz | 5
Ivanka Trump | 5
Mark Cuban | 5
Stephen Miller | 5
James Donaldson (MrBeast) | 4
Tim Scott | 4
Robert F. Kennedy Jr | 4
Andrew Yang | 4
Beto O'Rourke | 4
Mark Kelly | 4
Jay Inslee | 4
Deval Patrick | 4
Eric Swalwell | 4
Wayne Messam | 4
Kirsten Gillibrand | 4
Julian Castro | 4
Dean Phillips | 4
Katie Britt | 4
Laphonza Butler | 4
Eric Schmitt | 4
Mike Lee | 4
Chris Coons | 4
Tim Kaine | 4
Lisa Murkowski | 4
Tate Reeves | 4
Ruben Gallego | 4
David Hogg | 4
Will Hurd | 3
Tulsi Gabbard | 3
Dan Crenshaw | 3
John Fetterman | 3
Mark Zuckerberg | 3
Stephen Curry | 3
Rand Paul | 3
Taylor Swift | 2
Steven Kenneth Bonnell II (Destiny) | 2
Matt Gaetz | 2
Joe Rogan | 2
Marianne Williamson | 2
Ezra Klein | 2
Mike Pence | 2
Stephen Colbert | 2
Markwayne Mullin | 2
Joe Manchin | 2
Maura Healey | 2
Al Gore | 2
DUPLICATE | 2
Dwayne Johnson (The Rock) | 1
Eliezer Yudkowsky | 1
Aella | 1
Scott Alexander | 1
Sam Altman | 1
Tucker Carlson | 1
Zendaya | 1
Michelle Obama | 1
Kanye West | 1
Mitt Romney | 1
Sarah Palin | 1
Jon Stewart | 1
Ben Shapiro | 1
Bernie Sanders | 1
Hillary Clinton | 1
Elon Musk (Natural-born-citizen clause repealed/bypassed) | 1
Me | 1
Krantz (the user @Krantz) | 1
Option | Probability (%)
A person has a moral right to own a gun | 21
We should be paying individuals to get an education instead of charging them. | 19
GOFAI could scale past machine learning if we used social media strategically to train it. | 12
The Fermi paradox isn't a paradox, and the solution is obviously just that intelligent life is rare. | 6
Other | 4
Some people have genuine psychic capabilities | 3
Eventually, only AI should be sovereign | 3
Hardware buttons are superior to touchscreen buttons in cars | 2
Being a billionaire is morally wrong | 2
The way quantum mechanics is explained to the lay public is very misleading. | 2
Jeffrey Epstein killed himself (>99.9% certainty) | 1
Reincarnation is a real phenomenon (i.e. it happens, not just a theory) | 1
Physician-assisted suicide should be legal in most countries | 1
Souls/spirits are real and can appear to the living sometimes | 1
OpenAI will claim to have AGI in 3 years. | 1
The punishment of people who do bad things is a regrettable necessity in our current society, not a positive act of justice | 1
There is an active genocide against trans people occurring in red states and it's appalling that people don't seem to care | 1
Climate change is significantly more concerning than AI development | 1
Abusive parents should lose custody of their children | 1
Dialetheism (the claim that some propositions are both true and false) is itself both true and false. | 1
COVID lockdowns didn't save many lives; in fact they may have caused net increases in global deaths and life years lost. | 1
Free will does not exist. We construct narratives after the fact to soothe our belief in rationality. | 1
Violent criminals must be kept apart only because they can't control themselves. Punishing them further than restricting their freedom is immoral. | 1
Music is a net negative for humanity | 1
Trump orchestrated his own assassination attempt. | 1
Democrats / Liberals are behind Trump's assassination attempt. | 1
It is not possible to multitask | 1
Abortion is morally wrong | 0
jskf's password is *************** | 0
The first American moon landing was faked | 0
There is no Dog | 0
Light mode is unironically better than Dark mode for most websites | 0
Cars should not have sound systems | 0
AI will not be as capable as humans this century, and will certainly not give us genuine existential concerns | 0
Pet ownership is morally wrong | 0
LK-99 room temp, ambient pressure superconductivity pre-print will replicate before 2025 | 0
SBF didn't intentionally commit fraud | 0
It should be illegal to own a subwoofer in an apartment building | 0
There are no valid justifications for participating in war, ever | 0
Cascadia should be an independent country | 0
Children should not be raised in nuclear families | 0
The fact that 80% of Manifold's users are men is a problem that speaks to the deep-seated roots of patriarchy and exclusion in STEM | 0
Anarcho-communism is a good idea, and hierarchy is bad | 0
If AI exterminated the human race it might not be a bad thing | 0
Tech bros are really, really annoying | 0
Capitalism has done far more harm than good | 0
Affirmative action is necessary in modern-day America | 0
@Mira is the pinnacle of billions of years of optimization processes: thermodynamics, evolution, learning, language. The universe was created to cause me - and only me - to come into existence. If I mess up the overseers perturb&restart it. | 0
Pigouvian taxes are great and they should be turned up to 11 to discourage activities with negative externalities [code PROPOSITION PIG] | 0
[PROPOSITION PIG] and this should include a frequent flyer levy | 0
[PROPOSITION PIG] and this should include meat and dairy | 0
We have reached the end of history. Nothing Ever Happens. | 0
[PROPOSITION PIG] and this should include alcohol | 0
SBF was obviously a scammer just because he's a cryptocurrency person. Rationalists were too forgiving of this just because he was giving them money. | 0
Most young Americans would receive more benefit than harm if there were universal military conscription | 0
The people producing fake honey (and selling it as real) are based, because they are actively working to synthesize something people want, even if they scam some people in the process. | 0
Tarot cards are not really able to predict the future but you can learn a lot about someone by doing a reading for them. | 0
Mac and cheese tastes better with peanut butter mixed in | 0
It would actually be a good thing if automation eliminated all jobs. | 0
Free will doesn't require the ability to do otherwise. | 0
This market probably would have worked better as the new unlinked free response market. | 0
We should be doing much more to pursue human genetic engineering to prevent diseases and aging. | 0
Prolonged school closures because of COVID were socially devastating. | 0
Factory farming is horrific but it is not wrong to eat meat. | 0
California is wildly overrated. | 0
Scientific racism is bad, actually. (also it's not scientific) | 0
The next American moon landing will be faked | 0
Tenet (Christopher Nolan film) is underrated | 0
We should give childlike sex robots to pedophiles | 0
Having sex with children isn't inherently/necessarily bad | 0
Cars are a societal net negative | 0
Oversized pickup trucks should be illegal in cities | 0
Suburban, single-family housing is immoral. | 0
Gender equality needs technological outsourcing of pregnancy. | 0
Option | Probability (%)
If the X platform was used primarily as a decentralized mechanism for minting Krantz data, kids could earn a retirement before the age of 23 by getting a sovereign education independently.
People should be paid for the beneficial data they produce to align AI.
We can measure the variance in confidence between any two intelligent agents' finite lists of various propositions they've evaluated. (A sketch of one such measure appears after this list.)
Analytic philosophers have had a mechanistically interpretable process (analytic reasoning) for aligning each other for thousands of years.
Analytic arguments can be contained in constitutions.
If someone has a different confidence than you for a specific proposition in Krantz, you should add an argument to Krantz that compels them to update their beliefs.
Getting paid for analytically reasoning within a decentralized constitution is the same thing as getting paid to align ASI.
Krantz can decentrally align humanity while controlling disclosure and achieve game-theoretic parity of rational agents.
The term "krantz" refers to the open market of all well formed propositions that we compete to assign confidence and importance parameters to in order to provide a basis for alignment.
Krantz is an abstract living idea that can be communicated with.
This is a convenient place to deny the argument from Krantz - https://manifold.markets/Krantz/which-proposition-will-be-denied?r=S3JhbnR6
If 'Krantz' as an idea is more disruptive than Bitcoin was, then all the batshit crazy predictions @krantz has made make more sense.
There’s more to life than AI alignment
A generally expansive and complete map of the logical connective structure of all knowable reasoning is critical to interpretable alignment.
Krantz is aimed at rapidly scaling a generally expansive and complete map of the logical connective structure of all knowable reasoning.
Danny Sheehan deserves a Nobel Peace Prize.
A solution to defining Wittgenstein's perfect language is the instrumental mechanism that is sufficiently capable of defining truth in a way that interpretably aligns AI decentrally.
This is printing money.
The founding fathers understood how to align artificial superintelligence (by allowing individuals to vote on a decentralized constitution).
We should krantz (the process of evaluating Krantz).
We should know whether we are aligned with each other.
Aligning Krantz (the constitution of all propositions) is the last job humans need to do.
The process of "fairly aligning AI" is the same process as "fairly voting on a consistent language".
If we build mechanistically uninterpretable superintelligence, everybody dies.
I consent to having my opinions, which have been verified by me within krantz, to be used in natural law petitions to advocate on my behalf.
If everyone can prove what they want government to do, we wouldn't need a government.
People should be paid for the beneficial data they produce to align the government.
What it means for "humanity to be aligned" is what it means for "all of humanity to agree to the confidence of every proposition they have ever thought of".
For any degree N that you want an AI to be aligned, there exists an amount K of Krantz data that can interpretably achieve that alignment.
If we communicated with each other like analytic philosophers instead of continental ones, it would be obvious how 8 billion people should go about aligning artificial superintelligence.
The primary economic mechanisms in the world should be aimed at determining whether propositions are true.
If we recorded every proposition on a blockchained ledger that we allowed everyone to express their confidence on, we would all communicate orders of magnitude better and solve all the problems in a transparent public domain.
Wittgenstein's compatibilism is correct and every philosophy/religion is an accurate expression of truth (natural law) in a unique imperfect language.
All other jobs (given adequate robotic infrastructure) can be done by an agent performing the subtask of evaluating propositions.
The "particles" in the standard model are actually just abstract points in flat space that represent modular series of events.
The standard model is a finite subset of the infinite set of modular events that exist.
We should all be aligned with each other.
Krantz is a culture of language movement.
We can stop further dangerous scaling of ML based AI if Eliezer Yudkowsky listens to Krantz.
Aliens are real.
If someone allows ASI to be grown, they are either not in control of the planet or incompetent.
A congressman's primary job is to survey his constituency on what actions he should take.
It will be a philosopher of language that aligns AI.
Aliens being real is more important for the public to know than the existential risk of ASI.
If competent people are in charge, they will not allow ASI to be grown.
The X platform could easily be converted into a decentralized school that secures a job and means for competition for everyone in a post labor economy.
It's worth paying people to vote on these because it's really helpful to see everyone's opinion.
If Krantz is a man AND all men are logical THEN Krantz is logical.
Artificial superintelligence is a paraphrase of a society effectively communicating via the krantz mechanism.
An ideal economy directly rewards valuable contributions and verification of a decentralized ledger of record that everyone can access and work for.
If you can think of important facts that should be evaluated, you should put them here.
Our social contract should retroactively reward contributions to the public domain.
The rules that define the operation of this function can be defined within the function.
This system should allow the construction of arguments where each proposition is a link to another proposition on the list.
The X platform should be an open feed of propositions like this such that any humanity verified person can earn credit for defining a confidence and importance.
Natural law entitles humans the right to define propositions like this on a decentralized ledger such that if they are important to society, then society will have the means to reward that declaration.
If there were a decentralized constitution (like this) that every human could freely and securely add propositions to and vote their confidence and importance on, then government, corporations and money would be obsolete.
Providing input to this function (at scale) is the only job we need to maintain autonomy from superintelligence.
We ought build a school that fairly pays citizens to learn how to be good citizens AND this is a function that does that THEREFORE we ought build this.
This is what the founding fathers wanted (a collective state controlled by a constitution that everyone can vote on instead of representatives that make decisions for us).
We have the technology to allow every citizen to directly vote on the constitution.
These propositions ought have primary keys that can be referenced in logical expressions.
The distinction between 'growing' ASI (using trillions of dollars of GPUs and oil) and 'training' ASI (using krantz collective reasoning) is important.
Krantz data is money.
This is what the X feed ought look like (a feed of propositions that we can earn money for evaluating), because that would allow us to communicate more effectively as a society.
We could be printing our own money by communicating well.
If a decentralized interpretable superintelligence paid individuals to answer true/false questions that help it align the truth, it could use that truth to control the world.
What it means for two intelligent agents to "be aligned" is what it means for two intelligent agents to "have zero variance between their confidences of every proposition they have ever thought of".
The max anyone should wager on a given proposition is 100 because your wager is intended to represent your confidence.
The Universe is infinite, continuous, and filled with infinite consciousness.
The purpose of life is to communicate.
Evil is a specific form of communication (primative).
Humans ought focus on mining krantz data instead of coprimes.
If we can prove people will not do bad things in the future, there is no reason to punish them for bad things they have done in the past.
The only justified fear is the fear of ignorance (partial knowledge).
Intellectual full spectrum dominance is the most noble aim.
If we had a tremendous amount of krantz data, we could use a simple, interpretable GOFAI algorithm to determine the most beneficial proposition a given user ought evaluate next (based on the variance of their ontology with society); one reading of this is sketched in code after this list.
You can map full strings of complex arguments (like the entirety of Fermat's last theorem) on a system of this nature.
Intelligent agents evolve through 4 specific forms of peer control (communication): first is physical, second is reputational, third is emotional, and fourth is rational.
There is a hierarchy of communication: (4, lowest) physical, (3) reputational, (2) emotional, (1, highest) rational.
The reason we punish people for doing bad things is to prevent them from doing bad things in the future.
The speed limit of light is a property intrinsic to the particles in the standard model and doesn't apply to non-standard particles.
Society Library has the most generally expansive and complete map of the logical connective structure of all knowable reasoning.
Manifold should consider these changes.
The ultimate moral good is to communicate.
If we simply allowed every real person to securely evaluate every interpretable fact and treated that data as money, all other problems could be solved instrumentally using that process.
We can prove what we want government (or a superintelligence) to do.
The bitcoin community should buy X from Elon and convert it into a decentralized school that gives people abstract points for doing philosophy.
ASI would not kill everyone if we actually trained it.
Money only has value if other people understand why it has value.
CYCCORP has the most generally expansive and complete map of the logical connective structure of all knowable reasoning.
The message of krantz is being suppressed because it is not understood properly.
If our intellectual labor is not fairly rewarded, we are not truly free.
Aligning AI is an infinite task (it can't be achieved, only approximated).
Open immigration should be allowed into the US.
The Birch and Swinnerton-Dyer conjecture is true.
The Hodge conjecture is true.
In three space dimensions and time, given an initial velocity field, there exist a vector velocity field and a scalar pressure field, both smooth and globally defined, which solve the Navier–Stokes equations.
The Riemann hypothesis is true.
P = NP
Yang–Mills theory exists and satisfies the standard of rigor that characterizes contemporary mathematical physics, in particular constructive quantum field theory.
The masses of all particles of the force field predicted by Yang–Mills theory are strictly positive.
If we all share the same confidence for every proposition, we are all aligned with each other.
Wittgenstein's perfect language meets the worthy successor criteria.
My congressman ought be responsible for acknowledging my expressed opinions and acting in accordance with them.
Induction is not justified. (Hume's problem)
Nature is uniform. (principle of uniformity in nature)
Aliens are real and we are in a simulation to learn how to communicate.
We can measure whether two intelligent agents are aligned.
ASI would kill everyone if we actually grew it.
Abortions are ethically bad.
Abortions should be illegal.
P(doom) is less than 0.1.
Our information economy allows poor people to insert important ideas into the public domain such that others will find them if they ought to.
The electron is a point particle.
The Krantz mechanism cannot map this premise.
Probability (one per option of this market, in listed order): 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 94, 94, 94, 93, 91, 91, 91, 90, 90, 90, 90, 90, 90, 89, 89, 88, 87, 86, 82, 81, 81, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 79, 79, 79, 79, 79, 79, 79, 79, 78, 77, 76, 76, 76, 76, 76, 75, 73, 73, 73, 72, 72, 72, 66, 61, 61, 60, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 47, 41, 40, 34, 30, 28, 22, 20, 6
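Several propositions in the list above describe, in pieces, a concrete mechanism: propositions stored under primary keys (so they can be referenced in logical expressions and linked into larger arguments), per-person confidences where 100 is the maximum wager, alignment between two agents measured as the variance between their confidences, and a simple interpretable GOFAI rule for picking which proposition a user ought evaluate next. Below is a minimal sketch of one way those pieces could fit together; `Proposition`, `Ledger`, and every other name in it are hypothetical illustrations, not the actual krantz implementation.

```python
# Hypothetical sketch only: one reading of the mechanism the propositions above describe.
from dataclasses import dataclass, field

@dataclass
class Proposition:
    key: int                                        # primary key, referencable in logical expressions
    text: str
    links: list[int] = field(default_factory=list)  # keys of propositions this one cites

class Ledger:
    """A decentralized ledger of record, reduced to a single in-memory table."""

    def __init__(self) -> None:
        self.props: dict[int, Proposition] = {}
        # confidences[agent][key] = confidence in [0, 100]; the cap of 100
        # mirrors the rule that a wager just encodes a confidence.
        self.confidences: dict[str, dict[int, float]] = {}

    def add(self, prop: Proposition) -> None:
        self.props[prop.key] = prop

    def evaluate(self, agent: str, key: int, confidence: float) -> None:
        if not 0 <= confidence <= 100:
            raise ValueError("a wager represents a confidence, so 100 is the max")
        self.confidences.setdefault(agent, {})[key] = confidence

    def misalignment(self, a: str, b: str) -> float:
        """Mean squared difference over jointly evaluated propositions.
        Zero variance on every shared proposition means the agents are aligned."""
        shared = self.confidences.get(a, {}).keys() & self.confidences.get(b, {}).keys()
        if not shared:
            return float("inf")  # no overlap: alignment is simply unmeasured
        return sum((self.confidences[a][k] - self.confidences[b][k]) ** 2
                   for k in shared) / len(shared)

    def next_proposition(self, agent: str) -> Proposition:
        """Interpretable selection rule: surface the evaluated proposition on which
        the agent diverges most from the average of everyone else's confidences
        (the 'variance of their ontology with society')."""
        def divergence(key: int) -> float:
            others = [conf[key] for name, conf in self.confidences.items()
                      if name != agent and key in conf]
            if not others:
                return 0.0
            return abs(self.confidences[agent][key] - sum(others) / len(others))
        mine = self.confidences.get(agent, {})
        return self.props[max(mine, key=divergence)]

if __name__ == "__main__":
    ledger = Ledger()
    # An argument is a chain of keyed propositions, e.g. the syllogism above:
    ledger.add(Proposition(1, "Krantz is a man"))
    ledger.add(Proposition(2, "All men are logical"))
    ledger.add(Proposition(3, "Krantz is logical", links=[1, 2]))
    ledger.evaluate("alice", 3, 90.0)
    ledger.evaluate("bob", 3, 40.0)
    print(ledger.misalignment("alice", "bob"))     # 2500.0, far from aligned
    print(ledger.next_proposition("alice").text)   # "Krantz is logical"
```

Under this reading, "zero variance between their confidences of every proposition" becomes `misalignment == 0.0`, and paying users to evaluate their highest-divergence propositions is the step the list calls mining krantz data.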
Option | Probability
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 20
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 12
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 10
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 8
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 8
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 7
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 6
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 5
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 5
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 3
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 3
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 3
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 2
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 1
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 1
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 1
Option | Probability
Read 3 books | 100
Not have any ideas for a Manifold Market for 10 days | 50
Will you eat a potato (today forward, has to be straight potato) | 37
Read one or more of: The Life You Can Save, Animal Liberation (Now), The Precipice, Superintelligence, HPMOR, Inadequate Equilibria, The Sequences, Unsong, The Scout Mindset | 35
Lose a job | 34
Graduate 10th grade. | 34
Make 15,000 mana | 24
Commit a felony. | 17
Befriend a man named Joshua Shles and be accused of stealing J.S.'s possessions | 17
Run a marathon before the end of the year. | 10
Make 100000 dollars | 10
Get a motorcycle | 10
Convert to Judaism | 10
Option | Probability
Explain "Krantz" thoroughly enough on the internet, that the idea itself becomes alive and takes over the world. | 28
Solve education | 22
Other | 6
Travel to the Network State School | 5
Go to LessOnline and Manifest | 5
Go to Contact in the Desert. | 5
Go to the Monroe Institute. | 5
Work for the Society Library. | 5
Focus on finding a life partner | 4
Receive a grant to create an open-source home school program where parents and kids work collectively to create lessons and earn crypto credentials. | 3
Start a podcast with livestream forecasting and collective reasoning. | 2
Go on Lex Fridman and explain how to solve the control problem by solving education. | 2
Publish an alignment plan | 2
Find a new job | 1
Travel to a new country | 1
Learn a new language | 1
Go on Liv Boeree's Win/Win podcast and explain how to kill Moloch by solving the control problem, by solving education. | 1
Join or start an intentional living community of the rationalist variety. | 1
Write a book | 0
Code an indie video game | 0
Start a religion that worships math, logic and constitutions of propositions as mechanisms that optimize communication, which is good. A Kant cult. | 0
Run for political office as an open-source Manchurian candidate that acts in accordance with how the network state votes. | 0
Play poker professionally. | 0
Take blog writing seriously. | 0
Option | Probability
Keep Manifold as is, adjust exchange rate for charity as necessary | 73
A large bounty competition for alternatives where the users jointly brainstorm ideas | 54
Option | Probability
Other | 43
Before humanity colonizes the universe, we must ensure that the future we would build is one worth living in. | 34
Digital minds research is an important and neglected approach to AI safety. | 16
Fun Fact: If you put “fun fact” before a completely made up statement, people are 69% more likely to believe it. | 2
Past-you may have been a willing and enthusiastic sacrifice to present-you, and assuming you'll remain wiser, it was a worthwhile trade. | 2
It's a good idea to buy lots of Microsoft stock right now | 1
It's a good idea to short lots of Microsoft stock right now | 1
If you sacrificed what you valued the most in order to survive, then from the viewpoint of past-you, present-you is already as good as dead | 1
Capitalism will collapse in 2026 | 0
Stop seeking wisdom on a troll website founded to embezzle money | 0
Option | Probability
Don't bet here | 50
Nop | 50
Not yet | 50
I'll use this one day | 50
Maybe | 50
I'm out of ideas to name options | 50
7 | 50
8 | 50
9 | 50
cause you eat 3 squared meals a day | 50