
Paradigm News

    Prediction markets for Paradigm

    Option | Probability (%)
    A set of radical beliefs that do not fit on a strict left/right paradigm | 85
    Wrongful delay or deny insurance claims | 78
    An attempt to terrorize people into bringing down the system of private insurance in the United States | 60
    An attempt to change the incentive structure present at the CEO level of UHC and similar companies | 38
    Chronic Back Pain | 36
    Mental illness | 31
    Fame | 18
    Radical Leftism | 15
    Assassin suffered a personal tragedy at the hands of UnitedHealthcare | 14
    Zizian Murder Cult member | 4
    Business | 3
    Manipulation of the share price of UnitedHealth Inc for profit. | 3
    Genetic predisposition (Italian) | 3
    Targeted hit/contract killing | 2
    brainwashed right wing anti-vax lunatic | 2
    Personal conflict with CEO | 1
    Random act of violence | 1

    Option | Probability (%)
    J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place. | 16
    K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers. | 14
    C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time. | 12
    G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. | 10
    M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.) | 8
    Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.) | 8
    A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility. | 7
    I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation. | 6
    B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first. | 5
    D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions. | 5
    O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.) | 5
    E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans. | 2
    H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie. | 1
    L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.) | 1
    F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable. | 0
    N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much. | 0
    If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability. | 0
    You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking. | 0

    Option | Votes
    NO | 1368
    YES | 389
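
    For orientation, a vote table like the one above can be read as a simple yes/no tally (an assumption; the page does not state how votes relate to the market). The implied YES share for this tally is then

    \[
    \frac{389}{1368 + 389} \approx 0.221,
    \]

    i.e. roughly 22% of voters answered YES. Note that this raw vote split need not equal the traded probability of the corresponding market.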

    Option | Probability (%)
    Coding agents will close the loop | 79
    Coding agents will flip the initiative | 72
    "Live learning" will be standard | 67
    The multi-agent paradigm will win | 63
    The specific model will not matter as much as today; the network of agents will be important | 56
    Recursively improving coding agents will succeed in the market | 49
    xAI will gain a sizable lead in model quality | 30

    Option | Probability (%)
    Brakes hit | 56
    Capability limit in AI | 50
    Huge alignment effort | 49
    New AI paradigm | 49
    Slow and gradual capability gains in AI | 48
    Enhancing human minds and/or society | 44
    Major capability limit in AI | 42
    Non-AI tech | 38
    Alignment relatively easy | 35
    Brakes not hit | 28
    Alignment unnecessary | 25
    Well-behaved AI with bad ends | 19
    Alignment extra hard | 1

    Will the first AGI be built mostly within the deep learning paradigm?

    May 21, 6:31 PM – May 28, 9:59 PM
    80.31% chance
    253656

    Option | Votes
    NO | 2020
    YES | 495

    Will we get a new LLM paradigm by EOY?

    Jan 26, 2:30 AM – Dec 31, 10:59 AM
    35.26% chance
    243145

    Option | Votes
    YES | 1355
    NO | 738

    Are autoregressive LLMs a doomed paradigm? (Inspired by LeCun's tweet)

    Apr 9, 6:05 PM – Dec 31, 10:59 PM
    42.96% chance
    291950

    Option | Votes
    YES | 1239
    NO | 762

    Option | Votes
    NO | 1711
    YES | 901

    Option | Probability (%)
    Q1/Q2 '26 | 24
    Q3/Q4 '25 | 23
    Q3/Q4 '26 | 18
    Q1/Q2 '27 | 14
    Q1/Q2 '25 | 10
    Q3/Q4 '27 | 7
    2028 or later | 5

    Option | Votes
    YES | 208
    NO | 144
