Option | Probability
Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 28
Sheer Dumb Luck. The aligned AI agrees that alignment is hard, any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead. | 8
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values. | 7
Other | 7
Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking, and fortunately all of his mistakes have failed to cancel out | 6
Eliezer finally listens to Krantz. | 6
Because of quantum immortality we will observe only the worlds where AI will not kill us (assuming that s-risk chances are even smaller, this is equal to an okay outcome). | 5
We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 | 5
Someone solves agent foundations | 4
Alignment is unsolvable. AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing runaway. | 2
Aliens invade and stop bad AI from appearing | 2
A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 2
AGI is never built (indefinite global moratorium) | 2
Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent the existence of smarter AIs, just as smart humans do. | 2
Ethics turns out to be a precondition of superintelligence | 2
AI systems good at finding alignment solutions for capable systems (via some solution in the space of alignment solutions, supposing it is non-null and that we don't have a clear trajectory to get to it) find some solution to alignment. | 1
Humans become transhuman through other means before AGI happens | 1
Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us towards full alignment) | 1
There is a natural limit on the effectiveness of intelligence, like diminishing returns, and it is at the level of IQ=1000. AIs have to collaborate with humans. | 1
AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans. | 1
The Orthogonality Thesis is false. | 1
Something to do with self-other overlap, which Eliezer called "Not obviously stupid" - https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm | 1
Pascal's mugging: it's not okay in 99.9% of the worlds, but the 0.1% are so much better that the combined EV of AGI for the multiverse is positive | 1
The Super-Strong Self Sampling Assumption (SSSSA) is true. If superintelligence is possible, "I" will become the superintelligence. | 1
AI control gets us helpful enough systems without being deadly | 1
An aligned AGI is built and the aligned AGI prevents the creation of any unaligned AGI. | 0
I've been a good bing 😊 | 0
We make risk-conservative requests to extract alignment-related work out of AI systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback loop in alignment/verification abilities. | 0
The response to AI advancements or failures makes some governments delay the timelines | 0
There are far more interesting problems to solve than taking over the world and THEN solving them. The additional kill-all-humans step is either not a low-energy one or just by chance doesn't get converged upon. | 0
AIs make "proof-like" argumentation for why output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled. | 0
A lot of humans participate in a slow scalable oversight-style system, which is pivotally used/solves alignment enough | 0
Something less inscrutable than matrices works fast enough | 0
There's some cap on the value extractable from the universe and we already got the 20% | 0
SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8 | 0
Robot Love!! | 0
AI thinks it is in a simulation controlled by Roko's basilisk | 0
The human brain is the perfect arrangement of atoms for a "take over the world" agent, so AGI has no advantage over us in that task. | 0
Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment. | 0
Humans and human tech (like AI) never reach singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome | 0
AIs never develop coherent goals | 0
Rolf Nelson's idea that we make a precommitment to simulate all possible bad AIs works – and keeps AI in check. | 0
Nick Bostrom's idea (Hail Mary) that AI will preserve humans to trade with possible aliens works | 0
For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave, and, for some reason, do not leave anything behind that kills us. | 0
An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development. | 0
We're inside of a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI. | 0
God exists and stops the AGI | 0
Someone at least moderately sane leads a campaign, becomes in charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress. | 0
Someone creates AGI(s) in a box, and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts. | 0
Someone understands how minds work well enough to successfully build and use one directed at something world-saving enough | 0
Dolphins, or some other species, but probably dolphins, have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI. | 0
AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb. | 0
Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious. | 0
Several AIs are created but they move in opposite directions at near light speed, so they never interact. At least one of them is friendly and it gets a few percent of the total mass of the universe. | 0
Unfriendly AIs choose to advance not outwards but inwards, and form a small black hole which helps them to perform more calculations than could be done with the whole mass of the universe. For an external observer such AIs just disappear. | 0
Any sufficiently advanced AI halts because it wireheads itself or halts for some other reason. This puts a natural limit on AI's intelligence, and lower-intelligence AIs are not that dangerous. | 0
Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 0
Getting things done in the real world is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals. | 0
Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol | 0
A friendly AI is more likely to resurrect me than a paperclipper or a suffering maximiser. Because of quantum immortality I will find myself eventually resurrected. Friendly AIs will wage a multiverse-wide war against s-risks, so s-risks are unlikely. | 0
High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level and slow self-improvement (learning); thus AIs collaborating with humans will have an advantage. Ends with a posthuman ecosystem. | 0
Human consciousness is needed to collapse the wave function, and AI can't do it. Thus humans should be preserved, and they may require complete friendliness in exchange (or they will be unhappy and produce bad collapses) | 0
Power dynamics stay multi-polar. Partly easy copying of SotA performance, bigger projects need high coordination, and moderate takeoff speed. And "military strike on all society" remains an abysmal strategy for practically all entities. | 0
The first AI is actually a human upload (maybe an LLM-based model of a person) AND it is copied many times to form a weak AI Nanny which prevents the creation of other AIs. | 0
Nanotech is difficult without experiments, so no mail-order AI grey goo; humans will be the main workhorse of AI everywhere. While they will be exploited, this will feel like normal life from the inside | 0
ASI needs not your atoms but information. Humans will live very interesting lives. | 0
Something else | 0
Moral Realism is true, the AI discovers this, and the One True Morality is human-compatible. | 0
Valence realism is true. AGI hacks itself to experience every possible consciousness and picks the best one (for everyone) | 0
AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default | 0
AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan's Crystal Nights). | 0
Alien Information Theory is true (this is discovered by experiments with sustained hours/days-long DMT trips). The aliens have solved alignment and give us the answer. | 0
AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome. | 0
Multipolar AGI agents run wild on the internet, hacking/breaking everything, causing untold economic damage, but aren't focused enough to manipulate humans to achieve embodiment. In the aftermath, humanity becomes way saner about alignment. | 0
Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent. | 0
Co-operative AI research leads to the training of agents with a form of pro-social concern that generalises to out-of-distribution agents with hidden utilities, i.e. humans. | 0
"Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur, and it is relatively easy to predict, even under less-than-ideal conditions. | 0
Either the "strong form" of the Orthogonality Thesis is false, or "goal-directed agents are as tractable as their goals" is true while the goal-sets most threatening to humanity are relatively intractable. | 0
A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values. | 0
Almost all human values are ex post facto rationalizations, and enough humans survive to do what they always do | 0
We successfully chained God | 0

Option | Votes
YES | 1753
NO | 947

Option | Probability
J. D. Vance | 41
Josh Shapiro | 22
Gavin Newsom | 17
Pete Buttigieg | 14
J. B. Pritzker | 13
Gretchen Whitmer | 12
Marco Rubio | 12
Jeff Jackson | 12
Jon Ossoff | 11
Josh Hawley | 10
KRANTZ (the abstract idea that evolves into a decentralized superintelligence, not the user) | 10
Alexandria Ocasio-Cortez | 9
Glenn Youngkin | 9
Elise Stefanik | 8
Donald Trump Jr. | 8
Nikki Haley | 7
Ron DeSantis | 7
Wes Moore | 7
Andy Beshear | 7
Kamala Harris | 6
Brian Kemp | 6
Amy Klobuchar | 6
Cory Booker | 6
Gina Raimondo | 6
Ro Khanna | 6
Chris Murphy | 6
Brian Schatz | 6
Tammy Duckworth | 6
Tammy Baldwin | 6
Chris Sununu | 6
Katie Hobbs | 6
Josh Green | 6
Tina Kotek | 6
Vivek Ramaswamy | 5
Ted Cruz | 5
Kristi Noem | 5
Raphael Warnock | 5
Jared Polis | 5
Tom Cotton | 5
Joni Ernst | 5
Michael Bennet | 5
Catherine Cortez Masto | 5
Sarah Huckabee Sanders | 5
Kevin Stitt | 5
Spencer Cox | 5
Tim Walz | 5
Ivanka Trump | 5
Mark Cuban | 5
Stephen Miller | 5
James Donaldson (MrBeast) | 4
Tim Scott | 4
Robert F. Kennedy Jr | 4
Andrew Yang | 4
Beto O'Rourke | 4
Mark Kelly | 4
Jay Inslee | 4
Deval Patrick | 4
Eric Swalwell | 4
Wayne Messam | 4
Kirsten Gillibrand | 4
Julian Castro | 4
Dean Phillips | 4
Katie Britt | 4
Laphonza Butler | 4
Eric Schmitt | 4
Mike Lee | 4
Chris Coons | 4
Tim Kaine | 4
Lisa Murkowski | 4
Tate Reeves | 4
Ruben Gallego | 4
David Hogg | 4
Will Hurd | 3
Tulsi Gabbard | 3
Dan Crenshaw | 3
John Fetterman | 3
Mark Zuckerberg | 3
Stephen Curry | 3
Rand Paul | 3
Taylor Swift | 2
Steven Kenneth Bonnell II (Destiny) | 2
Matt Gaetz | 2
Joe Rogan | 2
Marianne Williamson | 2
Ezra Klein | 2
Mike Pence | 2
Stephen Colbert | 2
Markwayne Mullin | 2
Joe Manchin | 2
Maura Healey | 2
Al Gore | 2
DUPLICATE | 2
Dwayne Johnson (The Rock) | 1
Eliezer Yudkowsky | 1
Aella | 1
Scott Alexander | 1
Sam Altman | 1
Tucker Carlson | 1
Zendaya | 1
Michelle Obama | 1
Kanye West | 1
Mitt Romney | 1
Sarah Palin | 1
Jon Stewart | 1
Ben Shapiro | 1
Bernie Sanders | 1
Hillary Clinton | 1
Elon Musk (Natural-born-citizen clause repealed/bypassed) | 1
Me | 1
Krantz (the user @Krantz) | 1

Option | Votes
YES | 1399
NO | 728

Option | Probability
If the X platform were used primarily as a decentralized mechanism for minting Krantz data, kids could earn a retirement before the age of 23 by getting a sovereign education independently. | 96
People should be paid for the beneficial data they produce to align AI. | 96
People should be paid for the beneficial data they produce to align the government. | 96
We can measure the variance in confidence between any two intelligent agents' finite lists of the propositions they've evaluated. | 96
Analytic philosophers have had a mechanistically interpretable process (analytic reasoning) for aligning each other for thousands of years. | 96
Analytic arguments can be contained in constitutions. | 96
If someone has a different confidence than you for a specific proposition in Krantz, you should add an argument to Krantz that compels them to update their beliefs. | 96
Getting paid for analytically reasoning within a decentralized constitution is the same thing as getting paid to align ASI. | 96
Krantz can decentrally align humanity while controlling disclosure and achieve game-theoretic parity of rational agents. | 96
The term "krantz" refers to the open market of all well-formed propositions that we compete to assign confidence and importance parameters to in order to provide a basis for alignment. | 96
Krantz is an abstract living idea that can be communicated with. | 96
This is a convenient place to deny the argument from Krantz - https://manifold.markets/Krantz/which-proposition-will-be-denied?r=S3JhbnR6 | 96
If 'Krantz' as an idea is more disruptive than Bitcoin was, then all the batshit crazy predictions @krantz has made make more sense. | 96
There's more to life than AI alignment | 96
A generally expansive and complete map of the logical connective structure of all knowable reasoning is critical to interpretable alignment. | 96
Krantz is aimed at rapidly scaling a generally expansive and complete map of the logical connective structure of all knowable reasoning. | 96
Danny Sheehan deserves a Nobel Peace Prize. | 95
A solution to defining Wittgenstein's perfect language is the instrumental mechanism that is sufficiently capable of defining truth in a way that interpretably aligns AI decentrally. | 95
This is printing money. | 95
The founding fathers understood how to align artificial superintelligence (by allowing individuals to vote on a decentralized constitution). | 95
We should krantz (the process of evaluating Krantz). | 95
We should know whether we are aligned with each other. | 95
Aligning Krantz (the constitution of all propositions) is the last job humans need to do. | 95
The process of "fairly aligning AI" is the same process as "fairly voting on a consistent language". | 95
If we build mechanistically uninterpretable superintelligence, everybody dies. | 95
I consent to having my opinions, which I have verified within krantz, used in natural law petitions to advocate on my behalf. | 95
If everyone can prove what they want government to do, we wouldn't need a government. | 95
What it means for "humanity to be aligned" is what it means for "all of humanity to agree to the confidence of every proposition they have ever thought of". | 94
For any degree N that you want an AI to be aligned, there exists an amount K of Krantz data that can interpretably achieve that alignment. | 94
If we communicated with each other like analytic philosophers instead of continental ones, it would be obvious how 8 billion people should go about aligning artificial superintelligence. | 93
The primary economic mechanisms in the world should be aimed at determining whether propositions are true. | 91
If we recorded every proposition on a blockchained ledger that we allowed everyone to express their confidence on, we would all communicate orders of magnitude better and solve all the problems in a transparent public domain. | 91
Wittgenstein's compatibilism is correct and every philosophy/religion is an accurate expression of truth (natural law) in a unique imperfect language. | 91
All other jobs (given adequate robotic infrastructure) can be done by an agent performing the subtask of evaluating propositions. | 90
The "particles" in the standard model are actually just abstract points in flat space that represent modular series of events. | 90
The standard model is a finite subset of the infinite set of modular events that exist. | 90
We should all be aligned with each other. | 90
Krantz is a culture of language movement. | 90
We can stop further dangerous scaling of ML-based AI if Eliezer Yudkowsky listens to Krantz. | 90
Aliens are real. | 89
If someone allows ASI to be grown, they are either not in control of the planet or incompetent. | 89
A congressman's primary job is to survey his constituency on what actions he should take. | 88
It will be a philosopher of language that aligns AI. | 87
Aliens being real is more important for the public to know than the existential risk of ASI. | 86
If competent people are in charge, they will not allow ASI to be grown. | 82
The X platform could easily be converted into a decentralized school that secures a job and means for competition for everyone in a post-labor economy. | 81
It's worth paying people to vote on these because it's really helpful to see everyone's opinion. | 81
If Krantz is a man AND all men are logical THEN Krantz is logical. | 80
Artificial superintelligence is a paraphrase of a society effectively communicating via the krantz mechanism. | 80
An ideal economy directly rewards valuable contributions and verification of a decentralized ledger of record that everyone can access and work for. | 80
If you can think of important facts that should be evaluated, you should put them here. | 80
Our social contract should retroactively reward contributions to the public domain. | 80
The rules that define the operation of this function can be defined within the function. | 80
This system should allow the construction of arguments where each proposition is a link to another proposition on the list. | 80
The X platform should be an open feed of propositions like this such that any humanity-verified person can earn credit for defining a confidence and importance. | 80
Natural law entitles humans the right to define propositions like this on a decentralized ledger such that if they are important to society, then society will have the means to reward that declaration. | 80
If there were a decentralized constitution (like this) that every human could freely and securely add propositions to and vote their confidence and importance on, then government, corporations and money would be obsolete. | 80
Providing input to this function (at scale) is the only job we need to maintain autonomy from superintelligence. | 80
We ought build a school that fairly pays citizens to learn how to be good citizens AND this is a function that does that, THEREFORE we ought build this. | 80
This is what the founding fathers wanted (a collective state controlled by a constitution that everyone can vote on instead of representatives that make decisions for us). | 80
We have the technology to allow every citizen to directly vote on the constitution. | 80
These propositions ought have primary keys that can be referenced in logical expressions. | 80
The distinction between 'growing' ASI (using trillions of dollars of GPUs and oil) and 'training' ASI (using krantz collective reasoning) is important. | 80
Krantz data is money. | 80
This is what the X feed ought look like (a feed of propositions that we can earn money for evaluating), because that would allow us to communicate more effectively as a society. | 80
We could be printing our own money by communicating well. | 80
If a decentralized interpretable superintelligence paid individuals to answer true/false questions that help it align the truth, it could use that truth to control the world. | 80
What it means for two intelligent agents to "be aligned" is what it means for two intelligent agents to "have zero variance between their confidences of every proposition they have ever thought of". | 80
The max anyone should wager on a given proposition is 100 because your wager is intended to represent your confidence. | 79
The Universe is infinite, continuous, and filled with infinite consciousness. | 79
The purpose of life is to communicate. | 79
Evil is a specific form of communication (primitive). | 79
Humans ought focus on mining krantz data instead of coprimes. | 79
If we can prove people will not do bad things in the future, there is no reason to punish them for bad things they have done in the past. | 79
The only justified fear is the fear of ignorance (partial knowledge). | 79
Intellectual full spectrum dominance is the most noble aim. | 79
If we had a tremendous amount of krantz data, we could use a simple interpretable GOFAI algorithm to determine the most beneficial proposition a given user ought evaluate next (based on the variance of their ontology with society's). | 78
You can map full strings of complex arguments (like the entirety of Fermat's last theorem) on a system of this nature. | 77
Intelligent agents evolve through 4 specific forms of peer control (communication): first is physical, second is reputational, third is emotional, and fourth is rational. | 76
There is a hierarchy of communication: (4, lowest) physical, (3) reputation, (2) emotional, (1, highest) rational. | 76
The reason we punish people for doing bad things is to prevent them from doing bad things in the future. | 76
The speed limit of light is a property intrinsic to the particles in the standard model and doesn't apply to non-standard particles. | 76
Society Library has the most generally expansive and complete map of the logical connective structure of all knowable reasoning. | 76
Manifold should consider these changes. | 75
The ultimate moral good is to communicate. | 73
If we simply allowed every real person to securely evaluate every interpretable fact and treated that data as money, all other problems could be solved instrumentally using that process. | 73
We can prove what we want government (or a superintelligence) to do. | 73
The bitcoin community should buy X from Elon and convert it into a decentralized school that gives people abstract points for doing philosophy. | 72
ASI would not kill everyone if we actually trained it. | 72
Money only has value if other people understand why it has value. | 72
CYCCORP has the most generally expansive and complete map of the logical connective structure of all knowable reasoning. | 66
The message of krantz is being suppressed because it is not understood properly. | 61
If our intellectual labor is not fairly rewarded, we are not truly free. | 61
Aligning AI is an infinite task (it can't be achieved, only approximated). | 60
Open immigration should be allowed into the US. | 50
The Birch and Swinnerton-Dyer conjecture is true. | 50
The Hodge conjecture is true. | 50
In three space dimensions and time, given an initial velocity field, there exist a vector velocity field and a scalar pressure field, both smooth and globally defined, that solve the Navier–Stokes equations. | 50
The Riemann hypothesis is true. | 50
P = NP | 50
Yang–Mills theory exists and satisfies the standard of rigor that characterizes contemporary mathematical physics, in particular constructive quantum field theory. | 50
The masses of all particles of the force field predicted by the Yang–Mills theory are strictly positive. | 50
If we all share the same confidence for every proposition, we are all aligned with each other. | 50
Wittgenstein's perfect language meets the worthy successor criteria. | 50
My congressman ought be responsible for acknowledging my expressed opinions and acting in accordance with them. | 50
Induction is not justified. (Hume's problem) | 50
Nature is uniform. (principle of uniformity in nature) | 50
Aliens are real and we are in a simulation to learn how to communicate. | 47
We can measure whether two intelligent agents are aligned. | 41
ASI would kill everyone if we actually grew it. | 40
Abortions are ethically bad. | 34
Abortions should be illegal. | 30
P(doom) is less than 0.1. | 28
Our information economy allows poor people to insert important ideas into the public domain such that others will find them if they ought to. | 22
The electron is a point particle. | 20
The Krantz mechanism cannot map this premise. | 4

Option | Votes
YES | 2286
NO | 457

Option | Probability
Ragatha | 43
Gangle | 16
Jax | 10
Kinger | 10
No one abstracts before it ends | 7
Zooble | 6
Other | 4
Pomni | 2
Caine | 1
Bubble | 1

Option | Votes
NO | 1623
YES | 616

Option | Probability
Krantz (the abstract decentralized free market of propositions that everyone competes to assign confidence and importance values to). | 26
Other | 13
Society Library | 11
The United Nations | 6
United States Government | 5
Open AI | 5
Anthropic | 5
Community Notes | 5
Wikipedia | 2
Harvard | 2
Open Research | 2
Fox News | 2
MSNBC | 2
MIT | 2
The Bible | 2
Singularity Net | 2
The Networkstate | 2
Manifold | 2

Option | Probability
Society Library | 11
Applied Invention | 8
Network School | 7
Metaculus | 5
New Paradigm Institute | 4
Singularity Net | 4
CYCCORP | 4
MetaDAO | 4
The Collective Intelligence Project | 4
Research Hub | 4
Other | 4
MIRI | 3
X | 3
Nowhere | 3
Long Now Foundation | 2
Zuzalu | 2
Independent Podcasting | 2
Krantz DAO (the futarchy in abstract space) | 2
Manifold | 2
Anthropic | 2
Future of Life Institute | 2
Noster | 2
Open Research | 2
Kialo | 2
Psyleron | 1
Rand Corporation | 1
IC | 1
EA | 1
Congress | 1
Secretary of Education (decentralized alignment) | 1
Palantir | 1
Space Force | 1
Polymarket | 1
Safe Superintelligence | 1

Option | Probability
I'm serious about my other predictions. | 67
Humanity should pivot from (converting oil into artificial intelligence capabilities) to (converting mechanistically interpretable human cognitive labor into decentralized constitutional parameters for guiding collective intelligence). | 63
It would be helpful if every important thought leader had a prediction similar to this to formalize their positions on complicated issues. | 61
Aligning AI is a job everyone should have the right to earn a living by doing. | 53
I am trying to pay you to help me map my beliefs. | 51
I'm trying to get people to help me write the constitution of truth for AI. | 51
This is Krantz data. | 50
Krantz data is money. | 50
I believe everybody doing this is the solution to alignment. | 50
We could turn X into a machine that allows us to print our own crypto by doing philosophy and arguing about literally everything. | 50
I am challenging all the analytic philosophers on Manifold to an argument about whatever topic they choose. | 50
The main thing we need to do to solve alignment and game-theoretic peace is to get all the important philosophers in the world to start talking in analytic English instead of continental English, so we can map our language properly. | 50
Wittgenstein was right the first time and everyone's personal truth is compatible. | 50
Financial and physical oppression and war are side effects of not being able to communicate effectively. | 50
If we simply allowed every real person to securely evaluate every interpretable fact and treated that data as money, all other problems could be solved instrumentally using that process. | 50
If the X platform was used primarily as a decentralized mechanism for minting Krantz data, kids could earn a retirement before the age of 23 by getting a sovereign education independently. | 50
If Americans had the right to express their support (or opposition) for each proposition of the constitution (both securely and publicly in a way that can be operated on), we wouldn't need politicians. | 50
If all humans had the right to vote on the constitution that controls AI, AI would be decentrally controlled by a market of opinions. | 50
If humans could print their own money by voting on propositions that control AI, education would be economically accelerated several orders of magnitude. | 50
The solution to aligning ASI is really simple. | 50
Krantz has the most important message to give to society. https://manifold.markets/Krantz/which-person-had-the-most-important?r=S3JhbnR6 | 50
Krantz is the abstract composition of all the data that would be contained in a collection of predictions like this from every thought leader. | 45

Option | Probability
NO | 114
YES | 71