We could map the entirety of analytic philosophy using this question.
The list of answers to this question forms a constitution of truth that a free market can align in a decentralized way.
A network state could use a predictive market constitution to define its smart social contracts.
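The two claims above can be made concrete with a minimal sketch, assuming one simple reading of "prediction market constitution": the constitution is a list of propositions, each carrying a market price interpreted as the community's credence that the proposition is true, and an AI treats a proposition as ratified only when its price clears a threshold. All names and the threshold here are hypothetical illustrations, not a description of any existing system.

```python
# Toy sketch of a prediction-market constitution (all names hypothetical).
# Each proposition carries a market price in [0, 1], read as credence.
from dataclasses import dataclass


@dataclass
class Proposition:
    text: str    # the claim being traded on
    price: float # current market price, interpreted as credence


# A toy constitution drawn from claims in this post.
constitution = [
    Proposition("A good AI should not kill people.", 0.99),
    Proposition("Induction is not justified.", 0.40),
]


def ratified(props, threshold=0.95):
    """Return the propositions the market currently treats as settled."""
    return [p.text for p in props if p.price >= threshold]


print(ratified(constitution))
```

Under this reading, "aligning the constitution" is just trading: as prices move, the set of ratified propositions the AI is bound by moves with them.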
Large scale AI systems should have intrinsic guardrail behaviors that no one actor can override.
This prediction is effectively the same as this one, and nobody has explained why this mechanism could not align AI at scale. https://manifold.markets/Krantz/krantz-mechanism-demonstration
Betting on philosophy seems like a fun way to (1) learn philosophy and (2) contribute to a transhumanist utopia in which our net incomes are highly correlated with how much beneficial knowledge we have taught the public-domain AI.
Constitutions play a critical role in the frontier methods for aligning AI.
A good AI should not kill people.
The world would change dramatically for the better if teenagers and homeless people could earn crypto on a free app by arguing philosophy with an AI until they either prove the AI right or prove it wrong.
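The incentive loop in that app can be sketched under one assumed payout rule (entirely hypothetical, with made-up odds): a user stakes on a claim, argues with the AI, and is paid only when the exchange actually resolves, i.e. when the user proves the AI right or proves it wrong.

```python
# Hedged sketch of the argue-to-earn loop described above (toy odds, not a
# real payout schedule).
from typing import Optional


def resolve_argument(stake: float, user_proved_ai_wrong: Optional[bool]) -> float:
    """Return the user's payout.

    user_proved_ai_wrong: True  -> user demonstrated the AI was wrong
                          False -> user conceded the AI was right
                          None  -> unresolved, no payout
    """
    if user_proved_ai_wrong is None:
        return 0.0            # no resolution, no reward
    if user_proved_ai_wrong:
        return stake * 2.0    # toy odds: correcting the AI pays double
    return stake * 1.1        # smaller reward for verifying the AI


print(resolve_argument(10.0, True))
print(resolve_argument(10.0, None))
```

The key design point is that an unresolved argument pays nothing, so the reward only flows for work that actually settles a proposition one way or the other.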
Philosophy is primarily the pursuit of defining language.
A duty to reason is the foundation for goodness.
I have free will.
A good AI should not infringe on the rights, autonomy or property of humans.
We should stop doing massive training runs.
The evolutionary environment contains perverse incentives that have led to substantial false consciousness in humans.
Induction is not justified.
A good AI, by design, requires large scale human participation to grow.
A good AI requires large scale humanity verification before it accepts new data as true.
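The two claims above, that a good AI grows only through large-scale human participation and accepts new data only after humanity verification, can be illustrated by a minimal quorum gate. The function name and quorum size are illustrative assumptions, not a specification.

```python
# Minimal sketch (parameters hypothetical): a claim enters the AI's accepted
# corpus only after enough distinct humans have verified it.


def accept_claim(verifier_ids, quorum=3):
    """Accept a claim only when at least `quorum` distinct humans verified it."""
    return len(set(verifier_ids)) >= quorum


print(accept_claim(["alice", "bob", "carol"]))   # three distinct verifiers
print(accept_claim(["alice", "alice", "bob"]))   # duplicates don't count
```

Deduplicating verifier identities is the point of the sketch: without it, one actor could meet the quorum alone, defeating the "no single actor can override" property claimed earlier in the post.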
The principle of the uniformity of nature is self-evident or justified by self-evident facts.