
Cleverly News

    Prediction markets for James Cleverly

    Probability – Option
    18% – K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers.
    16% – J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place.
    12% – C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time.
    8% – M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.)
    8% – Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.)
    6% – A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility.
    5% – B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first.
    5% – E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans.
    5% – G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us.
    5% – I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation.
    5% – O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.)
    4% – D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions.
    1% – H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie.
    1% – L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.)
    0% – F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable.
    0% – N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much.
    0% – If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability.
    0% – You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking.

    Probability – Option
    34% – Keir Starmer
    26% – Nigel Farage
    19% – Other
    6% – Robert Jenrick
    5% – Kemi Badenoch
    4% – Angela Rayner
    3% – James Cleverly
    1% – Jeremy Clarkson
    1% – Jeremy Corbyn
    1% – Boris Johnson
    0% – Richard Tice
    0% – Rupert Lowe
    0% – Ed Davey
    0% – Idris Elba

    Who will be the next Conservative Party Leader?

    Nov 2, 11:29 AM – Dec 31, 11:59 PM

    Probability – Option
    30% – Robert Jenrick
    25% – Tom Tugendhat
    19% – James Cleverly
    10% – Other
    4% – Boris Johnson
    3% – Mel Stride
    2% – Priti Patel
    2% – Chris Philp
    1% – Jeremy Hunt
    1% – Suella Braverman
    1% – Grant Shapps
    1% – Andy Street
    1% – Rishi Sunak
    0% – Liz Truss

    Probability – Option
    100% – Keir Starmer
    30% – Angela Rayner
    30% – Nigel Farage
    23% – Wes Streeting
    21% – Kemi Badenoch
    21% – Rachel Reeves
    19% – Oliver Dowden
    19% – Yvette Cooper
    19% – Bridget Phillipson
    16% – Jonathan Reynolds
    12% – John Healey
    11% – Jeremy Hunt
    11% – Tom Tugendhat
    11% – Johnny Mercer
    11% – Jess Phillips
    11% – Daisy Cooper
    10% – Mark Harper
    10% – Victoria Prentis
    10% – Shabana Mahmood
    10% – Louise Haigh
    10% – Dan Jarvis
    10% – Angela Eagle
    9% – Suella Braverman
    9% – Mel Stride
    9% – David Lammy
    9% – Pat McFadden
    9% – Peter Kyle
    9% – Ian Murray
    9% – Lisa Nandy
    9% – Emily Thornberry
    9% – Priti Patel
    8% – James Cleverly
    8% – Liz Kendall
    8% – Steve Reed
    8% – Hilary Benn
    8% – Rebecca Long-Bailey
    7% – Ian Lavery
    6% – Michael Gove
    6% – Penny Mordaunt
    6% – Gillian Keegan
    6% – Ed Miliband
    6% – Jo Stevens
    6% – Clive Lewis
    6% – Barry Gardiner
    6% – David Miliband
    5% – John McDonnell
    5% – Stephen Kinnock
    5% – Ed Davey
    5% – Jacob Rees-Mogg
    4% – Thangam Debbonaire
    4% – Diane Abbott
    4% – Kwasi Kwarteng
    4% – Jude Bellingham
    3% – JK Rowling
    1% – David Tennant
    1% – Dominic Cummings
    1% – William, Prince of Wales
    1% – Piers Morgan
    0% – Jimmy Carter

    Probability – Option
    28% – Nigel Farage
    19% – Angela Rayner
    11% – Robert Jenrick
    11% – Other
    8% – Kemi Badenoch
    6% – James Cleverly
    3% – Wes Streeting
    2% – Yvette Cooper
    2% – Andy Burnham
    2% – Jeremy Corbyn
    1% – Rachel Reeves
    1% – David Lammy
    1% – Ed Davey
    1% – Tom Tugendhat
    1% – Priti Patel
    1% – Mel Stride
    0% – Jeremy Hunt
    0% – Rishi Sunak
    0% – Suella Braverman
    0% – Lisa Nandy

    Probability – Option
    24% – Nigel Farage
    20% – Angela Rayner
    19% – Other
    10% – Robert Jenrick
    8% – Ed Davey
    7% – Kemi Badenoch
    5% – James Cleverly
    2% – Yvette Cooper
    2% – Wes Streeting
    1% – Keir Starmer
    1% – Rachel Reeves
    0% – Jeremy Hunt
    0% – Rishi Sunak
    0% – Ben Wallace
    0% – Rosena Allin-Khan
    0% – Penny Mordaunt
    0% – Suella Braverman
    0% – Rebecca Long-Bailey
    0% – Lisa Nandy
    0% – Someone Else
    0% – Emily Thornberry
    0% – Jess Phillips
    0% – Clive Lewis
    0% – Diane Abbott
    0% – John McDonnell
    0% – David Lammy
    0% – Dan Jarvis
    0% – Barry Gardiner
    0% – Ian Lavery
    0% – Michael Gove
    0% – Sajid Javid
    0% – Brandon Lewis
    0% – Grant Shapps
    0% – Tom Tugendhat
    0% – Boris Johnson
    0% – Theresa May
    0% – Andrea Leadsom
    0% – Stephen Crabb
    0% – Liam Fox
    0% – Nadhim Zahawi
    0% – Priti Patel
    0% – Angela Eagle
    0% – Hilary Benn
    0% – Jacob Rees-Mogg
    0% – Stephen Kinnock
    0% – Michelle Donelan
    0% – Johnny Mercer
    0% – Niko Omilana
    0% – Andy Burnham

    Probability – Option
    39% – Robert Jenrick
    26% – Other
    16% – James Cleverly
    8% – Tom Tugendhat
    6% – Boris Johnson
    3% – Rishi Sunak
    2% – Suella Braverman

    Who will be the next Conservative Prime Minister?

    Feb 24, 3:18 AM – Dec 30, 11:59 PM

    Probability – Option
    42% – Other
    29% – Robert Jenrick
    11% – Kemi Badenoch
    7% – James Cleverly
    6% – Nigel Farage
    2% – Suella Braverman
    1% – Penny Mordaunt
    1% – David Cameron
    1% – Priti Patel
    0% – Rory Stewart

    Probability – Option
    20% – K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers.
    12% – I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation.
    10% – C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time.
    8% – B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first.
    8% – Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.)
    7% – M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.)
    6% – A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility.
    5% – E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans.
    5% – J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place.
    5% – O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.)
    3% – D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions.
    3% – F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable.
    3% – L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.)
    2% – G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us.
    1% – H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie.
    1% – N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much.
    1% – You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking.
    1% – If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability.

    Probability – Option
    39% – Robert Jenrick
    24% – Kemi Badenoch
    18% – Other
    17% – James Cleverly
    3% – Nigel Farage

    Probability – Option
    57% – Robert Jenrick
    35% – Nigel Farage
    35% – Boris Johnson
    30% – Tom Tugendhat
    22% – Kemi Badenoch
    22% – James Cleverly
    12% – Priti Patel
    10% – Claire Coutinho
    10% – Chris Philp
    10% – Laura Trott
