
Thing 13 News

    Prediction markets for Thing 13

Option (probability):

None. This argument is sound and Eliezer will be compelled to look at Krantz's work. (78%)
1. If AI develops the capability to control the environment better than humans, then humanity is doomed. (0%)
2. If we continue to scale AI capabilities, then it will eventually be able to control the environment better than humans. (0%)
3. 1 and 2 imply that if we continue to scale AI capabilities, then humanity is doomed. (0%)
4. We should not be doomed. (0%)
5. 3 and 4 imply that we should stop scaling AI. (0%)
6. If every person on the planet understood the alignment problem as well as Eliezer Yudkowsky, then we would not scale AI to the point where it can control the environment better than humans. (0%)
7. People only understand the things they have learned. (0%)
8. People learn the things that they have obvious incentives to learn. (2%)
9. 6, 7, and 8 imply that if people had sufficient and obvious incentives to understand the alignment problem, then we would not scale AI to the point where it can control the environment better than humans. (0%)
10. It is possible to build a machine that pays individuals for demonstrating they have understood something. (5%)
11. If individuals can see that they will earn a substantial cash reward for demonstrating they understand something, they will be incentivized to demonstrate they understand it. (3%)
12. 10 and 11 imply that it is possible to incentivize people to understand the alignment problem. (7%)
13. If a majority of people understood the actual risks posed by scaling AI, then they would vote for representatives who support legislation that prevents the scaling of AI. (0%)
14. 9 and 13 imply that if we sufficiently incentivize the understanding of the alignment problem, then people would take action to prevent dangerous AI scaling. (0%)
15. If your goal is to prevent the scaling of dangerous AI, then you should be working on building mechanisms that incentivize awareness of the issue. (from 14) (0%)
16. Krantz's work is aimed at building a mechanism that incentivizes the demonstration of knowledge. (0%)
17. 5, 12, 14, 15, and 16 imply that if your goal is to prevent the scaling of dangerous AI, then you should review the work of Krantz. (2%)
18. If AI safety orgs understood that there was an effective function that converts capital into public awareness of existential risk from AI, then they would supply that function with capital. (2%)
19. 17 and 18 imply that Eliezer Yudkowsky and other safety organizations should review the Krantz system to help prevent doom. (0%)
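Most of the argument is a chain of conditionals, and its purely propositional links can be checked mechanically. Below is a minimal Lean 4 sketch; the atom names (Scales, Controls, Doom, and so on) are my own, not from the market. It derives steps 3, 5, and 9, reads the normative premise 4 ("we should not be doomed") descriptively as ¬Doom, and makes explicit a bridge premise that step 9 needs but the list leaves implicit, since premise 7 states only the converse direction.

```lean
-- Hypothetical atoms (names are mine, not the market's):
--   Scales    : we continue to scale AI capabilities
--   Controls  : AI can control the environment better than humans
--   Doom      : humanity is doomed
--   Overscale : we scale AI to the point where it out-controls humans
--   Incent    : people have obvious incentives to understand alignment
--   Learned   : people have learned the alignment problem
--   Underst   : people understand the alignment problem

-- Step 3 from premises 1 and 2: scaling implies doom.
theorem step3 (Scales Controls Doom : Prop)
    (p1 : Controls → Doom)       -- premise 1
    (p2 : Scales → Controls) :   -- premise 2
    Scales → Doom :=
  fun h => p1 (p2 h)

-- Step 5, with the normative premise 4 read descriptively as ¬Doom:
-- plain modus tollens.
theorem step5 (Scales Doom : Prop)
    (s3 : Scales → Doom)         -- step 3
    (p4 : ¬Doom) :               -- premise 4 (descriptive reading)
    ¬Scales :=
  fun h => p4 (s3 h)

-- Step 9 from premises 6 and 8. Premise 7 (Underst → Learned) points the
-- wrong way for this chain; the derivation needs the converse bridge
-- (Learned → Underst), which the list leaves implicit.
theorem step9 (Incent Learned Underst Overscale : Prop)
    (p6 : Underst → ¬Overscale)      -- premise 6 (paraphrased)
    (p8 : Incent → Learned)          -- premise 8
    (bridge : Learned → Underst) :   -- implicit bridge premise
    Incent → ¬Overscale :=
  fun h => p6 (bridge (p8 h))
```

The later steps (5 onward as action guidance, 12's possibility claim, 15's "should") mix normative and modal language, so they do not reduce to propositional logic in the same way.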

Option (probability):

2026 (33%)
2025 (23%)
2027 (17%)
2028 (14%)
2029 (14%)

Will California repeal Prop. 13 by 2030? (2% adjustment cap)
Jul 31, 5:39 AM – Dec 31, 11:59 PM · 25.18% chance · 9164
Votes: NO 337, YES 182

Will California repeal Prop. 13 by 2030? (1% limit)
Jul 31, 5:40 AM – Dec 31, 11:59 PM · 25% chance · 6151
Votes: YES 291, NO 93

Will California repeal Prop. 13 by 2030? (two-thirds majority requirement)
Jul 31, 5:42 AM – Dec 31, 11:59 PM · 31.52% chance · 9110
Votes: YES 233, NO 59
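Note that the quoted "chance" on each card is the market's trading price, not the raw share of YES votes. As a rough illustration (a minimal sketch, assuming the YES/NO figures above are simple per-side vote counts), the two numbers can diverge widely:

```python
# YES/NO counts and quoted chances for the three Prop. 13 markets above.
# Assumption: the listed figures are simple per-side vote counts.
markets = [
    ("2% adjustment cap",        182, 337, 25.18),
    ("1% limit",                 291,  93, 25.00),
    ("two-thirds majority req.", 233,  59, 31.52),
]

for name, yes, no, quoted in markets:
    yes_share = 100 * yes / (yes + no)  # YES as a share of all votes
    print(f"{name}: YES vote share {yes_share:.1f}% "
          f"vs quoted chance {quoted:.2f}%")
```

On the 1% limit market, for instance, YES holds roughly 76% of the listed votes while the price sits at 25%.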

🦝RISK Payment Portal
Apr 23, 9:38 PM · 00
Votes: YES 222, NO 100
