
OP News

    Prediction markets for OP

Option | Probability (%)
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 17
I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 13
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 13
M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 12
O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 11
C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 10
B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 8
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 8
A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 3
D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 1
E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 1
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1
L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 1
N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 1
F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 0
G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 0
If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 0
You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 0
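The eighteen probabilities above sum to exactly 100, so the table reads as a complete distribution over mutually exclusive outcomes. As a quick sanity check, here is a minimal Python sketch of that tally; the dictionary keys are shorthand for the option texts (the market's letter labels where available, plus hypothetical names for the two unlettered entries):

```python
# Option probabilities from the table above, in percent.
# Keys are shorthand: letter labels from the market where available,
# hypothetical names ("wonderful", "disjunctive", "fooled") otherwise.
probs = {
    "J": 17, "I": 13, "wonderful": 13, "M": 12, "O": 11, "C": 10,
    "B": 8, "K": 8, "A": 3, "D": 1, "E": 1, "H": 1, "L": 1, "N": 1,
    "F": 0, "G": 0, "disjunctive": 0, "fooled": 0,
}

# A complete distribution over mutually exclusive options should
# total 100 (as the displayed values do here).
assert sum(probs.values()) == 100

# Top three outcomes by market-assigned probability.
for name, p in sorted(probs.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{name}: {p}%")
```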

Will there be a unicorn founded and operated by just one person by 2030?

Feb 2, 2:06 PM | Jan 2, 4:59 AM
49.28% chance
215110789

Option | Votes
NO | 13048
YES | 3140
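Note that the reader poll and the market price are two different signals: the vote counts above imply a YES share of roughly 19.4%, well below the displayed 49.28% market chance. A small sketch of that arithmetic, using only the counts listed above:

```python
# Poll counts for the one-person-unicorn question, from the table above.
yes_votes, no_votes = 3140, 13048

# Share of poll respondents voting YES.
yes_share = yes_votes / (yes_votes + no_votes)
print(f"poll YES share: {yes_share:.1%}")    # 19.4%

# The market's displayed probability, for comparison.
market_chance = 0.4928
print(f"market chance:  {market_chance:.2%}")  # 49.28%
```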

Will Anthropic surpass OpenAI valuation in 2026?

Jan 4, 5:08 AM | Dec 31, 11:59 PM
42.19% chance
18094791

Option | Votes
NO | 3014
YES | 2926

Anthropic flips OpenAI before 2028?

Sep 5, 1:30 AM | Dec 31, 11:59 PM
50.1% chance
10658046

Option | Votes
NO | 10027
YES | 9987


Will Anthropic have a higher market cap than OpenAI after both IPO?

Jan 31, 9:12 PM | Dec 31, 11:59 PM
35.69% chance
8616597

Option | Votes
YES | 12850
NO | 7975


Option | Probability (%)
Helped with >80% confidence | 66
Helped with >60% confidence | 22
Uncertain | 7
Harmed with >60% confidence | 5
Harmed with >80% confidence | 1
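The five confidence bands form a rough distribution: they total 101 points, i.e. the displayed values carry about a point of rounding error, and they collapse naturally into a single helped-versus-harmed margin. A minimal sketch, in the same shorthand-key style as above:

```python
# Confidence bands from the table above, in percent.
bands = {
    "helped_gt80": 66,
    "helped_gt60": 22,
    "uncertain":   7,
    "harmed_gt60": 5,
    "harmed_gt80": 1,
}

print(sum(bands.values()))  # 101 -- displayed values include rounding error

# Collapse the bands into a net helped-vs-harmed margin.
helped = bands["helped_gt80"] + bands["helped_gt60"]  # 88
harmed = bands["harmed_gt60"] + bands["harmed_gt80"]  # 6
print(f"net helped margin: {helped - harmed} points")  # 82
```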
