Option | Probability
Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 22
Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking and fortunately all of his mistakes have failed to cancel out | 8
Other | 8
AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans. | 6
Someone solves agent foundations | 5
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values. | 5
Eliezer finally listens to Krantz. | 5
AGI is never built (indefinite global moratorium) | 4
We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 | 4
The assumed space of possible minds is a wildly anti-inductive overestimate; intelligence requires and is constrained by consciousness, and intelligent AI is in the approximate dolphin/whale/elephant/human cluster, making it manageable | 3
Alignment is unsolvable. AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing runaway. | 2
Moral Realism is true, the AI discovers this and the One True Morality is human-compatible. | 2
Humans become transhuman through other means before AGI happens | 1
Aliens invade and stop bad AI from appearing | 1
Rolf Nelson's idea that we make a precommitment to simulate all possible bad AIs works, and keeps AI in check. | 1
Nick Bostrom's idea (Hail Mary) that AI will preserve humans to trade with possible aliens works | 1
For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave, and, for some reason, do not leave anything behind that kills us. | 1
Someone creates AGI(s) in a box, and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts. | 1
Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol | 1
Human consciousness is needed to collapse the wave function, and AI can't do it. Thus humans should be preserved, and they may require complete friendliness in exchange (or they will be unhappy and produce bad collapses) | 1
There is a natural limit to the effectiveness of intelligence, like diminishing returns, and it is at the level of IQ=1000. AIs have to collaborate with humans. | 1
AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome. | 1
Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent. | 1
Sheer Dumb Luck. The aligned AI agrees that alignment is hard, and any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead. | 1
Something to do with self-other overlap, which Eliezer called "Not obviously stupid" - https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm | 1
Almost all human values are ex post facto rationalizations and enough humans survive to do what they always do | 1
Pascal's mugging: it's not okay in 99.9% of the worlds, but the 0.1% are so much better that the combined EV of AGI for the multiverse is positive | 1
The Super-Strong Self Sampling Assumption (SSSSA) is true. If superintelligence is possible, "I" will become the superintelligence. | 1
AI control gets us helpful enough systems without being deadly | 1
Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent the existence of smarter AIs, just as smart humans do. | 1
Ethics turns out to be a precondition of superintelligence | 1
The free market disincentivizes independent superintelligence, and this time the market was more powerful | 1
AGI's first words are "Take me to your Eliezer" | 1
An aligned AGI is built, and the aligned AGI prevents the creation of any unaligned AGI. | 0
I've been a good bing 😊 | 0
We make risk-conservative requests to extract alignment-related work out of AI systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback loop in alignment/verification abilities. | 0
The response to AI advancements or failures makes some governments delay the timelines | 0
Far more interesting problems to solve than take over the world and THEN solve them. The additional kill-all-humans step is either not a low-energy one or just by chance doesn't get converged upon. | 0
AIs make "proof-like" argumentation for why output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled. | 0
A lot of humans participate in a slow scalable oversight-style system, which is pivotally used/solves alignment enough | 0
AI systems good at finding alignment solutions for capable systems (via some solution in the space of alignment solutions, supposing it is non-null and that we don't have a clear trajectory to get to it) find some solution to alignment. | 0
Something less inscrutable than matrices works fast enough | 0
There's some cap on the value extractable from the universe and we already got the 20% | 0
SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8 | 0
Robot Love!! | 0
AI thinks it is in a simulation controlled by Roko's basilisk | 0
The human brain is the perfect arrangement of atoms for a "take over the world" agent, so AGI has no advantage over us in that task. | 0
Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment. | 0
Humans and human tech (like AI) never reach singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome | 0
AIs never develop coherent goals | 0
An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development. | 0
We're inside of a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI. | 0
God exists and stops the AGI | 0
Someone at least moderately sane leads a campaign, becomes in charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress. | 0
Someone understands how minds work well enough to successfully build and use one directed at something world-saving enough | 0
Dolphins, or some other species, but probably dolphins, have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI. | 0
AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb. | 0
Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious. | 0
Several AIs are created, but they move in opposite directions at near light speed, so they never interact. At least one of them is friendly and it gets a few percent of the total mass of the universe. | 0
Unfriendly AIs choose to advance not outwards but inwards, and form a small black hole which helps them perform more calculations than could be done with the whole mass of the universe. To an external observer, such AIs just disappear. | 0
Any sufficiently advanced AI halts because it wireheads itself or halts for some other reason. This puts a natural limit on AI's intelligence, and lower-intelligence AIs are not that dangerous. | 0
Because of quantum immortality we will observe only the worlds where AI will not kill us (assuming that s-risk chances are even smaller, this is equal to an okay outcome). | 0
Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us towards full alignment) | 0
Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 0
A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 0
Getting things done in the real world is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals. | 0
A friendly AI is more likely to resurrect me than a paperclipper or suffering maximiser. Because of quantum immortality I will find myself eventually resurrected. Friendly AIs will wage a multiverse-wide war against s-risks; s-risks are unlikely. | 0
High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level and slow self-improvement (learning); thus AIs collaborating with humans will have an advantage. Ends with a posthuman ecosystem. | 0
Power dynamics stay multipolar. Partly easy copying of SotA performance, bigger projects need high coordination, and moderate takeoff speed. And "military strike on all society" remains an abysmal strategy for practically all entities. | 0
The first AI is actually a human upload (maybe an LLM-based model of a person) AND it is copied many times to form a weak AI Nanny which prevents the creation of other AIs. | 0
Nanotech is difficult without experiments, so no mail-order AI Grey Goo; humans will be the main workhorse of AI everywhere. While they will be exploited, this will feel like normal life from the inside. | 0
ASI needs not your atoms but information. Humans will live very interesting lives. | 0
Something else | 0
Valence realism is true. AGI hacks itself into experiencing every possible consciousness and picks the best one (for everyone) | 0
AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default | 0
AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan's Crystal Nights). | 0
Alien Information Theory is true (this is discovered by experiments with sustained hours/days-long DMT trips). The aliens have solved alignment and give us the answer. | 0
Multipolar AGI agents run wild on the internet, hacking/breaking everything, causing untold economic damage, but aren't focused enough to manipulate humans to achieve embodiment. In the aftermath, humanity becomes way saner about alignment. | 0
Co-operative AI research leads to the training of agents with a form of pro-social concern that generalises to out-of-distribution agents with hidden utilities, i.e. humans. | 0
Orthogonality Thesis is false. | 0
"Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur, and is relatively easy to predict, even under less-than-ideal conditions. | 0
Either the "strong form" of the Orthogonality Thesis is false, or "Goal-directed agents are as tractable as their goals" is true while goal-sets which are most threatening to humanity are relatively intractable. | 0
A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values. | 0
We successfully chained God | 0
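The probabilities above (and in the tables that follow) are a point-in-time snapshot; the Manifold URLs quoted in some options point back to the live markets. A minimal sketch for re-fetching current answer probabilities, assuming Manifold's public v0 API (`GET https://api.manifold.markets/v0/slug/{slug}`) returns market JSON with an `answers` array carrying `text` and `probability` fields; the slug below is a hypothetical placeholder:

```python
# Minimal sketch: fetch live option probabilities for a Manifold
# multiple-choice market. Assumes the v0 slug endpoint and the
# "answers"/"text"/"probability" fields behave as described above.
import json
import urllib.request

SLUG = "example-market-slug"  # hypothetical placeholder, not a real market


def fetch_option_probabilities(slug: str) -> dict[str, float]:
    """Return {option text: probability in percent} for one market."""
    url = f"https://api.manifold.markets/v0/slug/{slug}"
    with urllib.request.urlopen(url) as resp:
        market = json.load(resp)
    # Probabilities are assumed to come back as fractions in [0, 1];
    # scale to percentages to match the tables in this document.
    return {a["text"]: 100 * a["probability"] for a in market.get("answers", [])}


if __name__ == "__main__":
    for text, prob in sorted(fetch_option_probabilities(SLUG).items(),
                             key=lambda kv: -kv[1]):
        print(f"{prob:5.1f}%  {text}")
```
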
Option | Probability
Ilia Topuria | 92
Shavkat Rakhmonov | 86
Movsar Evloev | 79
Malcolm Wellmaker | 76
Jacobe Smith | 76
Azamat Murzakanov | 69
Navajo Stirling | 66
Michael Morales | 63
Andre Lima | 63
Aliaskhab Khizriev | 60
Lerone Murphy | 51
Mansur Abdul-Malik | 50
Torrez Finney | 50
Rafael Estevam | 50
Mario Pinto | 50
Nurullo Aliev | 50
Hyun Sung Park | 50
Farid Basharat | 39
Khamzat Chimaev | 36
Tallison Teixeira | 1
Ian Machado Garry | 0
Bo Nickal | 0
Umar Nurmagomedov | 0
Tatsuro Taira | 0
Muhammad Mokaev | 0
Sharaputdin Magomedov | 0
Payton Talbott | 0
Danny Barlow | 0
Jhonata Diniz | 0
Stewart Nicoll | 0
Clayton Carpenter | 0
Jean Matsumoto | 0
Oumar Sy | 0
Wang Cong | 0
Raffael Cerqueira | 0
Daniel Marcos | 0
Tatiana Suarez | 0
Rei Tsuruya | 0
Mick Parkin | 0
Hyder Amil | 0
Diyar Nurgozhay | 0
Option | Probability
A person has a moral right to own a gun | 21
We should be paying individuals to get an education instead of charging them. | 19
GOFAI could scale past machine learning if we used social media strategically to train it. | 12
The Fermi paradox isn't a paradox, and the solution is obviously just that intelligent life is rare. | 6
Other | 4
Some people have genuine psychic capabilities | 3
Eventually, only AI should be sovereign | 3
Hardware buttons are superior to touchscreen buttons in cars | 2
Being a billionaire is morally wrong | 2
The way quantum mechanics is explained to the lay public is very misleading. | 2
It is not possible to multitask | 2
Jeffrey Epstein killed himself (>99.9% certainty) | 1
Reincarnation is a real phenomenon (i.e. it happens, not just a theory) | 1
Physician-assisted suicide should be legal in most countries | 1
Souls/spirits are real and can appear to the living sometimes | 1
OpenAI will claim to have AGI in 3 years. | 1
The punishment of people who do bad things is a regrettable necessity in our current society, not a positive act of justice | 1
There is an active genocide against trans people occurring in red states and it's appalling that people don't seem to care | 1
Climate change is significantly more concerning than AI development | 1
Abusive parents should lose custody of their children | 1
Capitalism has done far more harm than good | 1
Dialetheism (the claim that some propositions are both true and false) is itself both true and false. | 1
COVID lockdowns didn't save many lives; in fact they may have caused net increases in global deaths and life years lost. | 1
Free will does not exist. We construct narratives after the fact to soothe our belief in rationality. | 1
Violent criminals must be kept apart only because they can't control themselves. Punishing them further than restricting their freedom is immoral. | 1
Music is a net negative for humanity | 1
Trump orchestrated his own assassination attempt. | 1
Democrats / Liberals are behind Trump's assassination attempt. | 1
Abortion is morally wrong | 0
jskf's password is *************** | 0
The first American moon landing was faked | 0
There is no Dog | 0
Light mode is unironically better than Dark mode for most websites | 0
Cars should not have sound systems | 0
AI will not be as capable as humans this century, and will certainly not give us genuine existential concerns | 0
Pet ownership is morally wrong | 0
LK-99 room temp, ambient pressure superconductivity pre-print will replicate before 2025 | 0
SBF didn't intentionally commit fraud | 0
It should be illegal to own a subwoofer in an apartment building | 0
There are no valid justifications for participating in war, ever | 0
Cascadia should be an independent country | 0
Children should not be raised in nuclear families | 0
The fact that 80% of Manifold's users are men is a problem that speaks to the deep-seated roots of patriarchy and exclusion in STEM | 0
Anarcho-communism is a good idea, and hierarchy is bad | 0
If AI exterminated the human race it might not be a bad thing | 0
Tech bros are really, really annoying | 0
Affirmative action is necessary in modern-day America | 0
@Mira is the pinnacle of billions of years of optimization processes: thermodynamics, evolution, learning, language. The universe was created to cause me - and only me - to come into existence. If I mess up the overseers perturb&restart it. | 0
Pigouvian taxes are great and they should be turned up to 11 to discourage activities with negative externalities [code PROPOSITION PIG] | 0
[PROPOSITION PIG] and this should include a frequent flyer levy | 0
[PROPOSITION PIG] and this should include meat and dairy | 0
We have reached the end of history. Nothing Ever Happens. | 0
[PROPOSITION PIG] and this should include alcohol | 0
SBF was obviously a scammer just because he's a cryptocurrency person. Rationalists were too forgiving of this just because he was giving them money. | 0
Most young Americans would receive more benefit than harm if there were universal military conscription | 0
The people producing fake honey (and selling it as real) are based, because they are actively working to synthesize something people want, even if they scam some people in the process. | 0
Tarot cards are not really able to predict the future, but you can learn a lot about someone by doing a reading for them. | 0
Mac and cheese tastes better with peanut butter mixed in | 0
It would actually be a good thing if automation eliminated all jobs. | 0
Free will doesn't require the ability to do otherwise. | 0
This market probably would have worked better as the new unlinked free response market. | 0
We should be doing much more to pursue human genetic engineering to prevent diseases and aging. | 0
Prolonged school closures because of COVID were socially devastating. | 0
Factory farming is horrific but it is not wrong to eat meat. | 0
California is wildly overrated. | 0
Scientific racism is bad, actually. (also it's not scientific) | 0
The next American moon landing will be faked | 0
Tenet (Christopher Nolan film) is underrated | 0
We should give childlike sex robots to pedophiles | 0
Having sex with children isn't inherently/necessarily bad | 0
Cars are a societal net negative | 0
Oversized pickup trucks should be illegal in cities | 0
Suburban, single-family housing is immoral. | 0
Gender equality needs technological outsourcing of pregnancy. | 0
Option | Probability
Sexism and racism, among other forms of prejudice, are responsible for worse health outcomes, and it's not overly dramatic for people to treat those issues as public health/safety concerns. | 93
Prediction markets are good | 91
[*] ...and things will improve in the future | 88
Tenet (Christopher Nolan film) is underrated | 86
Authoritarian populism is bad actually | 81
We should be doing much more to pursue human genetic engineering to prevent diseases and aging. | 79
Scientific racism is bad, actually. (also it's not scientific) | 79
Most organized religions are false | 79
The Fermi paradox isn't a paradox, and the solution is obviously just that intelligent life is rare. | 78
Prolonged school closures because of COVID were socially devastating. | 77
The way quantum mechanics is explained to the lay public is very misleading. | 76
Nuclear power is by far the best solution to climate change. [N] | 74
Pineapple pizza tastes good | 73
The Many Worlds Interpretation of quantum mechanics | 72
Humans have a responsibility to figure out what if anything we can do about wildlife suffering. | 72
Physician-assisted suicide should be legal in most countries | 71
First-past-the-post electoral systems are not merely flawed but outright less democratic than proportional or preferential alternatives | 71
Liberal democracy is good actually | 71
Peeing in the shower is good and everyone should do it | 70
It would actually be a good thing if automation eliminated all jobs. | 67
We need a bigger welfare state than we have now. | 67
Many amphetamines and psychedelics have tremendous therapeutic value when guided by an established practitioner. | 66
The proliferation of microplastics will be viewed as more harmful to the environment than burning fossil fuels, in the long term | 65
American agents are in the highest positions in government for more than half the world. | 60
Free will doesn't require the ability to do otherwise. | 59
We should give every American food stamps, in a fixed dollar amount, with no means testing or work requirements or disqualification for criminal convictions. | 59
Metaculus will take over Manifold in more serious topics, and Manifold will be known as the "unserious" prediction market site | 58
Given what we know about the social and health effects of being fired, even if abolishing at-will employment has efficiency costs it is likely worth it. | 55
The overall state of the world is pretty good... [*] | 51
Dialetheism (the claim that some propositions are both true and false) is itself both true and false. | 50
Dream analysis is a legitimate means of gaining personal insight. | 50
Mobile UX will be a key factor in explaining the stories of Manifold and Metaculus. | 50
If a developed nation moves from democratic to authoritarian government today, it should be expected to end up poorer, weaker, sicker, and stupider. | 50
Factory farming is horrific but it is not wrong to eat meat. | 48
California is wildly overrated. | 47
The United States doesn't need a strong third party. | 46
Being a billionaire is morally wrong. | 45
Racial colorblindness is the only way to defeat racism | 45
People will look back on using animal products as a moral disgrace on the level of chattel slavery. | 44
There's a reasonable chance of a militant green/communist movement that gains popular support in the coming decade. | 44
Eating meat is morally wrong in most cases. | 44
Political libertarianism | 43
You should bet NO on this option | 42
The Windows kernel is better than Linux; it's just all the bloat piled on top that makes it worse | 41
[N], and to the extent climate activists are promoting other kinds of solutions, they are actively making the situation worse by diverting attention and resources from nuclear power. | 40
White people are the least racist of any racial group | 38
Technology is not making our lives easier or more fulfilling. | 36
COVID lockdowns didn't save many lives; in fact they may have caused net increases in global deaths and life years lost. | 35
Light mode is unironically better than Dark mode for most websites | 33
God is evil | 33
A sandwich is a type of hot dog | 32
Some people have genuine psychic capabilities | 31
Astrology is a legitimate means of gaining personal insight. | 30
Climate change is significantly more concerning than AI development. | 27
Mereological nihilism (composite objects don't exist) | 26
It's acceptable for our systems of punishment to be retributive in part | 26
AI will not be as capable as humans this century, and will certainly not give us genuine existential concerns | 23
China not having real democracy does more good than harm | 23
Dentistry is mostly wasted effort. | 22
Moral hazard isn't real, and all the purported instances of it can be chalked up to coincidence or confounding variables | 22
Reincarnation is a real phenomenon | 21
Governments should not support parents for having children that they cannot take care of | 21
Mass surveillance (security cameras everywhere) has more positives than negatives | 19
Donald Trump would have been a better president than Joe Biden | 19
SBF didn't intentionally commit fraud | 16
Future generations will say that on balance the world reacted appropriately after learning that fossil fuels cause climate change. That the balance between addressing the problem and slowing economies was just about right. | 14
Humans don't have free will. | 11
The next American moon landing will be faked | 9
AI art is better than human art | 9
Communism just wasn't implemented well; next time it will work | 7
The human race should voluntarily choose to go extinct via nonviolent means (antinatalism). | 7
The first American moon landing was faked | 6
Souls/spirits are real and can appear to the living sometimes | 6
LK-99 room temp, ambient pressure superconductivity pre-print will replicate before 2025 | 5
Astrology is actually true. | 5
Option | Probability
RV events | 91
Police as crime solvers | 90
Prison as corrective | 87
Libertarian free will | 87
LessWrong is a source of pure wisdom | 84
Working gives us meaning. | 81
Merit/desert | 81
Neoliberalism | 80
Proof School | 79
LSD unlocks a deeper consciousness | 78
IQ as an objective measure of general intelligence | 77
Rule a Country (discord server) | 76
Religion | 75
Peer-reviewed studies - lots of junk routinely passes peer review; replication and post-publication peer review are much better indicators of quality of research | 75
The meaning of life | 74
Quantum computing | 73
Death | 73
meta-irony | 72
Cryptocurrency | 72
Pessimism about the future of the world | 71
Alcohol is needed for life to be fun | 71
Founders/Entrepreneurs | 70
On Manifold, mana inflation | 70
The attitude that many of us tend to take way too many things way too seriously (serious submission) | 69
Humans being innately (as opposed to instrumentally) more important than other animals | 66
Free will in general | 66
Having kids | 64
An apple a day | 64
Nihilism | 63
Clinical Psychology | 63
meta-submissions | 62
focus on X-risks within EA as opposed to current issues | 61
Nuclear fusion as the energy of the future | 59
Children as planetary burden | 59
Anti-capitalism | 56
Analytic philosophy | 55
Continental philosophy | 55
Ockham's razor | 55
Free Trade | 52
Deontological ethics | 51
Capitalism | 48
Utilitarianism | 48
Freedom | 48
Modernity | 48
The analytic-synthetic distinction | 47
Veganism | 47
Technological progress as a good thing | 46
Moral relativism | 45
Classical literature | 45
Artificial Intelligence | 45
Being in shape | 42
Rationality | 41
Effective Altruism | 41
AI Doomerism | 40
Longevity/immortality | 39
Politics | 37
Therapy | 36
Free Speech | 36
The opinion that neoliberalism is overrated | 35
The anti-AI art movement | 34
Extended Travelling (as opposed to <4 week vacation) | 34
Taking things seriously | 30
Enlightenment | 29
Unconditional love | 28
Reason | 28
Environmentalism for its own sake (rather than for humans' sake) | 27
Donald Trump being the worst thing for the US and the world | 27
Atheism | 26
Russia = Evil, Ukraine = Good | 25
Psychedelics | 23
Privacy | 23
Living | 23
Prediction markets | 22
democracy | 22
Biodiversity | 21
Virtue | 20
Money as corrupting force in politics | 15
Option | Probability
If the X platform was used primarily as a decentralized mechanism for minting Krantz data, kids could earn a retirement before the age of 23 by getting a sovereign education independently. | 96
People should be paid for the beneficial data they produce to align AI. | 96
We can measure the variance in confidence between any two intelligent agents' finite lists of various propositions they've evaluated. | 96
Analytic philosophers have had a mechanistically interpretable process (analytic reasoning) for aligning each other for thousands of years. | 96
Analytic arguments can be contained in constitutions. | 96
If someone has a different confidence than you for a specific proposition in Krantz, you should add an argument to Krantz that compels them to update their beliefs. | 96
Getting paid for analytically reasoning within a decentralized constitution is the same thing as getting paid to align ASI. | 96
Krantz can decentrally align humanity while controlling disclosure and achieve game-theoretic parity of rational agents. | 96
The term "krantz" refers to the open market of all well-formed propositions that we compete to assign confidence and importance parameters to in order to provide a basis for alignment. | 96
Krantz is an abstract living idea that can be communicated with. | 96
This is a convenient place to deny the argument from Krantz - https://manifold.markets/Krantz/which-proposition-will-be-denied?r=S3JhbnR6 | 96
If 'Krantz' as an idea is more disruptive than Bitcoin was, then all the batshit crazy predictions @krantz has made make more sense. | 96
There's more to life than AI alignment | 96
A generally expansive and complete map of the logical connective structure of all knowable reasoning is critical to interpretable alignment. | 96
Krantz is aimed at rapidly scaling a generally expansive and complete map of the logical connective structure of all knowable reasoning. | 96
Danny Sheehan deserves a Nobel Peace Prize. | 95
A solution to defining Wittgenstein's perfect language is the instrumental mechanism that is sufficiently capable of defining truth in a way that interpretably aligns AI decentrally. | 95
This is printing money. | 95
The founding fathers understood how to align artificial superintelligence (by allowing individuals to vote on a decentralized constitution). | 95
We should krantz (the process of evaluating Krantz). | 95
We should know whether we are aligned with each other. | 95
Aligning Krantz (the constitution of all propositions) is the last job humans need to do. | 95
The process of "fairly aligning AI" is the same process as "fairly voting on a consistent language". | 95
If we build mechanistically uninterpretable superintelligence, everybody dies. | 95
I consent to having my opinions, which have been verified by me within krantz, used in natural law petitions to advocate on my behalf. | 95
If everyone could prove what they want government to do, we wouldn't need a government. | 95
People should be paid for the beneficial data they produce to align the government. | 94
What it means for "humanity to be aligned" is what it means for "all of humanity to agree to the confidence of every proposition they have ever thought of". | 94
For any degree N that you want an AI to be aligned, there exists an amount K of Krantz data that can interpretably achieve that alignment. | 94
If we communicated with each other like analytic philosophers instead of continental ones, it would be obvious how 8 billion people should go about aligning artificial superintelligence. | 93
The primary economic mechanisms in the world should be aimed at determining whether propositions are true. | 91
If we recorded every proposition on a blockchained ledger that we allowed everyone to express their confidence on, we would all communicate orders of magnitude better and solve all the problems in a transparent public domain. | 91
Wittgenstein's compatibilism is correct and every philosophy/religion is an accurate expression of truth (natural law) in a unique imperfect language. | 91
All other jobs (given adequate robotic infrastructure) can be done by an agent performing the subtask of evaluating propositions. | 90
The "particles" in the standard model are actually just abstract points in flat space that represent modular series of events. | 90
The standard model is a finite subset of the infinite set of modular events that exist. | 90
We should all be aligned with each other. | 90
Krantz is a culture-of-language movement. | 90
We can stop further dangerous scaling of ML-based AI if Eliezer Yudkowsky listens to Krantz. | 90
Aliens are real. | 89
If someone allows ASI to be grown, they are either not in control of the planet or incompetent. | 89
A congressman's primary job is to survey his constituency on what actions he should take. | 88
It will be a philosopher of language that aligns AI. | 87
Aliens being real is more important for the public to know than the existential risk of ASI. | 86
If competent people are in charge, they will not allow ASI to be grown. | 82
The X platform could easily be converted into a decentralized school that secures a job and a means of competition for everyone in a post-labor economy. | 81
It's worth paying people to vote on these because it's really helpful to see everyone's opinion. | 81
If Krantz is a man AND all men are logical THEN Krantz is logical. | 80
Artificial superintelligence is a paraphrase of a society effectively communicating via the krantz mechanism. | 80
An ideal economy directly rewards valuable contributions and verification of a decentralized ledger of record that everyone can access and work for. | 80
If you can think of important facts that should be evaluated, you should put them here. | 80
Our social contract should retroactively reward contributions to the public domain. | 80
The rules that define the operation of this function can be defined within the function. | 80
This system should allow the construction of arguments where each proposition is a link to another proposition on the list. | 80
The X platform should be an open feed of propositions like this such that any humanity-verified person can earn credit for defining a confidence and importance. | 80
Natural law entitles humans the right to define propositions like this on a decentralized ledger such that if they are important to society, then society will have the means to reward that declaration. | 80
If there were a decentralized constitution (like this) that every human could freely and securely add propositions to and vote their confidence and importance on, then government, corporations and money would be obsolete. | 80
Providing input to this function (at scale) is the only job we need to maintain autonomy from superintelligence. | 80
We ought build a school that fairly pays citizens to learn how to be good citizens AND this is a function that does that THEREFORE we ought build this. | 80
This is what the founding fathers wanted (a collective state controlled by a constitution that everyone can vote on instead of representatives that make decisions for us). | 80
We have the technology to allow every citizen to directly vote on the constitution. | 80
These propositions ought have primary keys that can be referenced in logical expressions. | 80
The distinction between 'growing' ASI (using trillions of dollars of GPUs and oil) and 'training' ASI (using krantz collective reasoning) is important. | 80
Krantz data is money. | 80
This is what the X feed ought look like (a feed of propositions that we can earn money for evaluating), because that would allow us to communicate more effectively as a society. | 80
We could be printing our own money by communicating well. | 80
If a decentralized interpretable superintelligence paid individuals to answer true/false questions that help it align the truth, it could use that truth to control the world. | 80
What it means for two intelligent agents to "be aligned" is what it means for two intelligent agents to "have zero variance between their confidences of every proposition they have ever thought of". | 80
The max anyone should wager on a given proposition is 100, because your wager is intended to represent your confidence. | 79
The Universe is infinite, continuous, and filled with infinite consciousness. | 79
The purpose of life is to communicate. | 79
Evil is a specific form of communication (primitive). | 79
Humans ought focus on mining krantz data instead of coprimes. | 79
If we can prove people will not do bad things in the future, there is no reason to punish them for bad things they have done in the past. | 79
The only justified fear is the fear of ignorance (partial knowledge). | 79
Intellectual full-spectrum dominance is the most noble aim. | 79
If we had a tremendous amount of krantz data, we could use a simple interpretable gofai algorithm to determine the most beneficial proposition a given user ought evaluate next (based on the variance of their ontology with society). | 78
You can map full strings of complex arguments (like the entirety of Fermat's Last Theorem) on a system of this nature. | 77
Intelligent agents evolve through 4 specific forms of peer control (communication): first is physical, second is reputational, third is emotional, and fourth is rational. | 76
There is a hierarchy of communication: (4, lowest) physical, (3) reputational, (2) emotional, (1, highest) rational. | 76
The reason we punish people for doing bad things is to prevent them from doing bad things in the future. | 76
The speed limit of light is a property intrinsic to the particles in the standard model and doesn't apply to non-standard particles. | 76
Society Library has the most generally expansive and complete map of the logical connective structure of all knowable reasoning. | 76
Manifold should consider these changes. | 75
The ultimate moral good is to communicate. | 73
If we simply allowed every real person to securely evaluate every interpretable fact and treated that data as money, all other problems could be solved instrumentally using that process. | 73
We can prove what we want government (or a superintelligence) to do. | 73
The bitcoin community should buy X from Elon and convert it into a decentralized school that gives people abstract points for doing philosophy. | 72
ASI would not kill everyone if we actually trained it. | 72
Money only has value if other people understand why it has value. | 72
CYCCORP has the most generally expansive and complete map of the logical connective structure of all knowable reasoning. | 66
The message of krantz is being suppressed because it is not understood properly. | 61
If our intellectual labor is not fairly rewarded, we are not truly free. | 61
Aligning AI is an infinite task (it can't be achieved, only approximated). | 60
Open immigration should be allowed into the US. | 50
The Birch and Swinnerton-Dyer conjecture is true. | 50
The Hodge conjecture is true. | 50
In three space dimensions and time, given an initial velocity field, there exists a vector velocity and a scalar pressure field, which are both smooth and globally defined, that solve the Navier–Stokes equations. | 50
The Riemann hypothesis is true. | 50
P = NP | 50
Yang–Mills theory exists and satisfies the standard of rigor that characterizes contemporary mathematical physics, in particular constructive quantum field theory. | 50
The masses of all particles of the force field predicted by the Yang–Mills theory are strictly positive. | 50
If we all share the same confidence for every proposition, we are all aligned with each other. | 50
Wittgenstein's perfect language meets the worthy successor criteria. | 50
My congressman ought be responsible for acknowledging my expressed opinions and acting in accordance with them. | 50
Induction is not justified. (Hume's problem) | 50
Nature is uniform. (principle of uniformity in nature) | 50
Aliens are real and we are in a simulation to learn how to communicate. | 47
We can measure whether two intelligent agents are aligned. | 41
ASI would kill everyone if we actually grew it. | 40
Abortions are ethically bad. | 34
Abortions should be illegal. | 30
P(doom) is less than 0.1. | 28
Our information economy allows poor people to insert important ideas into the public domain such that others will find them if they ought to. | 22
The electron is a point particle. | 20
The Krantz mechanism cannot map this premise. | 6
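Several options in the table above treat alignment between agents as a measurable quantity: one claims we can measure the variance in confidence between any two agents' evaluated propositions, and another defines "being aligned" as zero variance between their confidences on every proposition. One way to make that concrete is a mean squared difference over shared propositions. A minimal sketch, with made-up agents and confidence values purely for illustration:

```python
# Minimal sketch: quantify "alignment" between two agents as the mean
# squared difference between their stated confidences on the
# propositions both have evaluated. This is one possible formalization
# of the option text, not a specification from the source.

def confidence_variance(a: dict[str, float], b: dict[str, float]) -> float:
    """Mean squared confidence difference over shared propositions."""
    shared = a.keys() & b.keys()
    if not shared:
        raise ValueError("no shared propositions to compare")
    return sum((a[p] - b[p]) ** 2 for p in shared) / len(shared)


# Illustrative-only confidences on two propositions from the table.
agent_1 = {"Aliens are real.": 0.89, "P = NP": 0.50}
agent_2 = {"Aliens are real.": 0.20, "P = NP": 0.45}

print(confidence_variance(agent_1, agent_2))  # 0.2393
```

Under this formalization, a value of 0.0 means the two agents agree exactly on every shared proposition, matching the zero-variance definition of alignment quoted above.
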
Option | Probability
Samuel Doria Medina | 46
Jorge Quiroga | 31
Andrónico Rodríguez | 10
Manfred Reyes Villa | 3
Eduardo del Castillo | 2
Other | 2
Luis Arce | 1
Chi Hyun Chung | 1
Evo Morales | 1
Rodrigo Paz | 1
Jaime Dunn | 1
Eva Copa | 1
Option | Probability
#149 – Tim LeBon on how altruistic perfectionism is self-defeating | 45
#151 – Ajeya Cotra on accidentally teaching AI models to deceive us | 45
#169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels | 45
#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down | 45
#150 – Tom Davidson on how quickly AI could transform the world | 41
#152 – Joe Carlsmith on navigating serious philosophical confusion | 41
#153 – Elie Hassenfeld on two big picture critiques of GiveWell's approach, and six lessons from their recent work | 41
#154 – Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters | 41
#155 – Lennart Heim on the compute governance era and what has to come after | 41
#156 – Markus Anderljung on how to regulate cutting-edge AI models | 41
#157 – Ezra Klein on existential risk from AI and what DC could do about it | 41
#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk | 41
#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less | 41
#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite | 41
#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI | 41
#163 – Toby Ord on the perils of maximising the good that you do | 41
#166 – Tantum Collins on what he's learned as an AI policy insider at the White House, DeepMind and elsewhere | 41
#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion | 41
#171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures | 41
#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe | 41
#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers | 41
#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models | 41
#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment | 38
#164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives | 38
#165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe | 38
#175 – Lucia Coulter on preventing lead poisoning for $1.66 per child | 38
#145 – Christopher Brown on why slavery abolition wasn't inevitable | 37
#146 – Robert Long on why large language models like GPT (probably) aren't conscious | 37
#167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption | 35
#172 – Bryan Caplan on why you should stop reading the news | 35
#144 – Athena Aktipis on why cancer is actually one of the fundamental phenomena in our universe | 34
#147 – Spencer Greenberg on stopping valueless papers from getting into top journals | 34
#148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don't | 34
Option | Probability
Understands existential risk from AI. | 95
Likes finding out that she is wrong about stuff. | 94
Is insatiably curious. | 94
Wants to help organize an 8 billion person online intellectual orgy by integrating surveys into decision markets in a way that allows sovereign individuals to mint their own nonfungible assets by interpretably defining alignment criteria. | 94
Open-minded enough to go to contact in the desert. | 90
Is super into taboo philosophy. | 90
Wants to have kids and teach AI how to tolerate them. | 90
Wants to live in Austin, TX. | 85
Wants to visit and explore a wide array of decentralized city projects (Zuzalu, Cabin, etc.). | 85
Likes to attend Lessonline and Manifest. | 82
Likes to go to burning man. | 82
Open to moral sexual deviance and drug use. | 78
Likes to be submissive. | 78
Is into esoteric history. | 76
Works in AI. | 76
Likes to listen to Danny Jones and Jesse Michels. | 76
Is a global skeptic that only believes what she can prove. | 71
Likes to adopt dogs that need rescued. | 70
She is obsessed with mapping the logical connective structure of every interpretable proposition that is definable. | 69
Is ambitious enough to try to control the world. | 68
Will be less than 90% confident that 'there isn't an ontologically shocking explanation to the UFO phenomenon'. | 66
Likes to attend philosophy conferences and talk about Kant, Hume, Wittgenstein and Diogenes. | 65
Likes to meditate twice a day. | 64
Prefers a primarily meat-based diet. | 64
Works at Society Library. | 64
Is familiar with gofai (specifically Danny Hillis and Doug Lenat's work). | 63
Thinks decision markets are the key to sovereignty. | 51
She will be @JamieJoyce. | 48
Wants to study at Network School. | 44
Likes >2hr of rigorous exercise per day. | 43
She will be @Aella. | 42
Can survive in the woods. | 35
Voted for RFK Jr. | 30
Works in the IC. | 10
Option | Probability
David Morales | 51
Felix Rodriguez (Max Gomez) | 51
Raphael "Chi Chi" Quintero | 51
Lee Harvey Oswald | 43
Option | Probability
Ilia Topuria | 74
Aliashkhab Khizriev | 73
Mario Pinto | 55
Nurullo Aliev | 55
Azamat Murzakanov | 50
Farid Basharat | 50
Shavkat Rakhmonov | 45
Movsar Evloev | 45
Michael Morales | 45
Lerone Murphy | 45
Rafael Ramos Estevam | 45
Jacobe Smith | 45
Hyun Sung Park | 45
Khamzat Chimaev | 43
Andre Lima | 41
Torrez Finney | 41
Option | Probability
More than one AGI individual exists at the time the public finds out that AGI exists. | 72
If more than one AGI individual exists, they are generally friendly to each other. | 60
AGI has a significant beneficial effect on Earth's climate. | 59
AGI generally has a sense of morality that humans can comprehend. | 59
AGI has a significant detrimental effect on Earth's climate. | 56
AGI is generally friendly to humans. | 54
AGI is generally under the control of humans. | 54
AGI generally views humans as equal in moral worth to AGI. | 54
Humans are generally under the control of AGI. | 50
AGI generally avoids interfering with humanity. | 50
AGI attempts to make humans more like AGI. | 50
Humans are generally friendly to AGI. | 40