DeepNewz
People-sourced. AI-powered. Unbiased news.

Part One News

    Prediction markets for Part One

    Probability | Option

    18% | J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place.
    16% | M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.)
    15% | K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers.
    11% | C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time.
    8% | Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.)
    5% | A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility.
    5% | B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first.
    5% | I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation.
    5% | O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.)
    3% | E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans.
    3% | G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us.
    2% | D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions.
    1% | H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie.
    1% | L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.)
    0% | F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable.
    0% | N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much.
    0% | If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability.
    0% | You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking.
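    The percentages above come from a linked multiple-choice market: the options are treated as mutually exclusive, so their probabilities should sum to roughly 100%, with per-option rounding to whole percents explaining any small shortfall. A minimal sketch checking the listed values:

    ```python
    # Listed probabilities (in %) of the 18 options above, in order.
    probs = [18, 16, 15, 11, 8, 5, 5, 5, 5, 3, 3, 2, 1, 1, 0, 0, 0, 0]

    total = sum(probs)
    print(total)  # 98: within rounding distance of 100%
    ```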

    If China invades Taiwan, will they succeed?

    Nov 12, 2:34 AM to Jan 1, 5:00 AM
    58.2% chance
    26785991

    Votes | Option

    2561 | NO
    1198 | YES

    Probability | Option

    98% | trump is impeached by either house OR senate
    94% | Bitcoin reaches 200K usd or more
    86% | a second cybertruck explodes (intended or unintended) that makes the news
    86% | Tom Scott's 'this video' reaches 80M views on Youtube
    86% | this market reaches 100 traders
    86% | undersea cables reported cut around taiwan
    81% | EOD Boxing Day - Dec 26
    76% | Ark Survival Evolved 2 releases
    74% | English Wikipedia reaches 70M PAGES or more
    72% | israel opens an embassy in syria, OR announces it will
    72% | Saw XI Releases in USA
    72% | alan greenspan passes away
    72% | coup in an african country
    71% | noam chomsky passes away
    69% | chinese spy balloon incident reported on news
    69% | openai loses another board member, or sam altman no longer ceo
    69% | another trump assassination attempt
    69% | this market reaches 5k individual TRADES
    69% | discord IPO happens
    67% | 2025 nobel peace prize winner announced
    66% | Hades 3 announced (game)
    65% | EOD Thanksgiving - Nov 28
    65% | Last game of the MLB World Series ends
    63% | the "500 poll" reaches its target goal of 500 responses
    63% | Taylor Swift announce engagement or marriage
    63% | zootopia 2 releases
    63% | Spacex launches 150th rocket of the year
    61% | EOD Halloween - Oct 31
    61% | First game of the MLB World Series starts
    60% | manifold raises more money
    59% | EOD Lief Erikson Day - Oct 9
    59% | MLB rookie of the year announced
    58% | Tom Scott's 'this video' reaches 75M views on Youtube
    57% | Bitcoin reaches 150K usd or more
    56% | stripe ipo happens
    55% | Legally Blonde 3 release date announced
    55% | GenoSamuel releases Chris Chan History #86
    55% | Twitter releases a Peer to Peer payment system to free or premium users
    54% | Prong.Studio releases a 3rd product (not an accessory or part for an existing one)
    54% | Cy Young award winner announced
    54% | Third dune movie officially announced
    53% | trump removes a cabinet member
    53% | windows 12 announcement is made
    52% | Skibidi Toilet ends their original series
    52% | Imu face reveal in One Piece manga
    52% | the third Atlantic hurricane of the season
    51% | someone reaches 100k traders on creator leaderboard
    51% | onepieceexplained reaches 15k subs on youtube
    50% | trump starts mass deportations
    50% | Spacex launches 100th rocket in one year
    49% | Skate 4 releases
    49% | the second Atlantic hurricane of the season
    48% | Sailing releases as a skill in Old School Runescape
    48% | Bitcoin reaches 125K usd or more
    47% | the first Atlantic hurricane of the season
    46% | Chat GPT 5 releases to the general userbase
    45% | Spider-Man: Beyond the Spider-Verse release date announced
    45% | Earthquake magnitude 8.0 or higher somewhere in the world
    44% | Earthquake magnitude 7.8 or higher somewhere in the world
    43% | Killing Floor 3 releases on Steam
    42% | grok4 release date
    41% | the start of Amazon Prime Day(s) 2025
    40% | EOD Fourth of July - Jul 4
    39% | the third Pacific hurricane of the season
    38% | 28 Years Later releases in USA
    37% | the second Pacific hurricane of the season
    36% | chime IPO happens
    35% | Spacex launches 75th rocket of the year
    34% | First Apple Event of the year
    33% | the first Pacific hurricane of the season
    32% | manifest 2025 ends
    31% | Manifest 2025 starts
    30% | Mr Beast hits 400M Youtube Subscribers
    29% | Bitcoin reaches 110K usd or more
    28% | English Wikipedia reaches 7M ARTICLES or more
    27% | claude 4 sonnet releases (or later version)
    26% | EOD Cinco De Mayo - May 5
    25% | Spacex launches 50th rocket of the year
    24% | Last day of the NFL draft
    23% | Llama 4 released to the general userbase
    22% | Joseph Anderson releases long awaited Witcher 3 video
    21% | south korean president removed from power
    20% | the first Solar eclipse of the year
    19% | First Nintendo direct of the year
    18% | trump declares war or orders military actions on another country
    17% | Ukraine and Russia announce any ceasefire
    16% | EOD Ides of March - Mar 15
    15% | the first Lunar eclipse of the year
    14% | EOD Fat Tuesday/Mardi Gras
    13% | trump enacts new or changed tariffs on mexico
    12% | new iphone releases in the USA (official date)
    11% | Spacex launches 25th rocket of the year
    10% | new iphone release date announced (in the USA)
    9% | grok 3 release date
    8% | nintendo switch successor announced officially
    7% | trump enacts new or changed tariffs on china
    6% | CGP Grey releases a new video (not a reupload)
    5% | doomsday clock announcement
    4% | USA President issues 10th executive order
    3% | USA President issues 1st executive order
    2% | Israel and Hamas announce another temporary ceasefire OR permanent ceasefire OR conflict otherwise ends
    1% | this market reaches 1k individual TRADES

    Votes | Option

    19709 | YES
    5074 | NO

    Probability | Option

    20% | K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers.
    12% | I. The tech path to AGI superintelligence is naturally slow enough and gradual enough, that world-destroyingly-critical alignment problems never appear faster than previous discoveries generalize to allow safe further experimentation.
    10% | C. Solving prosaic alignment on the first critical try is not as difficult, nor as dangerous, nor taking as much extra time, as Yudkowsky predicts; whatever effort is put forth by the leading coalition works inside of their lead time.
    8% | B. Humanity puts forth a tremendous effort, and delays AI for long enough, and puts enough desperate work into alignment, that alignment gets solved first.
    8% | Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.)
    7% | M. "We'll make the AI do our AI alignment homework" just works as a plan. (Eg the helping AI doesn't need to be smart enough to be deadly; the alignment proposals that most impress human judges are honest and truthful and successful.)
    6% | A. Humanity successfully coordinates worldwide to prevent the creation of powerful AGIs for long enough to develop human intelligence augmentation, uploading, or some other pathway into transcending humanity's window of fragility.
    5% | E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans.
    5% | J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place.
    5% | O. Early applications of AI/AGI drastically increase human civilization's sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.)
    3% | D. Early powerful AGIs realize that they wouldn't be able to align their own future selves/successors if their intelligence got raised further, and work honestly with humans on solving the problem in a way acceptable to both factions.
    3% | F. Somebody pulls off a hat trick involving blah blah acausal blah blah simulations blah blah, or other amazingly clever idea, which leads an AGI to put the reachable galaxies to good use despite that AGI not being otherwise alignable.
    3% | L. Earth's present civilization crashes before powerful AGI, and the next civilization that rises is wiser and better at ops. (Exception to 'okay' as defined originally, will be said to count as 'okay' even if many current humans die.)
    2% | G. It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us.
    1% | H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie.
    1% | N. A crash project at augmenting human intelligence via neurotech, training mentats via neurofeedback, etc, produces people who can solve alignment before it's too late, despite Earth civ not slowing AI down much.
    1% | You are fooled by at least one option on this list, which out of many tries, ends up sufficiently well-aimed at your personal ideals / prejudices / the parts you understand less well / your own personal indulgences in wishful thinking.
    1% | If you write an argument that breaks down the 'okay outcomes' into lots of distinct categories, without breaking down internal conjuncts and so on, Reality is very impressed with how disjunctive this sounds and allocates more probability.

    Probability | Option

    95% | Has eye tracking
    87% | Has external battery
    84% | Has Mac Virtual Display functionality
    72% | Comes with only one band, that has a part that goes over the top of your head.
    72% | Has built in speakers for audio
    66% | Will have a glass front
    50% | Sells more than 1 million units in its first year
    50% | Priced under $2,000
    50% | Has on-device LLM (e.g. through Siri or standalone) integrated
    49% | Has "Air" in its name
    48% | Primarily made of aluminum
    38% | Has an external display (like eyesight)
    19% | Is released under another CEO than Tim Cook
    18% | Priced under $1000
    13% | Will require an iPhone to operate (relies on that for power, compute or storage).
    7% | Priced lower than $199
    5% | Priced lower than $299
    4% | In consumer hands (anywhere) before April 1st 2025

    What is One Piece? (One Piece Manga)

    Jan 10, 5:59 PM to Jan 1, 4:59 AM
    20749

    Probability | Option

    83% | Something that's physical and tangible
    72% | Joyboy's treasure
    56% | Some sort of drink or cup
    54% | A way to connect the four seas, putting the ocean in one piece
    50% | Ancient Weapons are part of unlocking one piece
    50% | Something that would help with dethroning Imu
    50% | One piece is at the same location with the Devil Tree
    41% | Binks' Sake
    28% | something that fulfills the wish of all of Straw Hat Pirates
    28% | Something directly related to devil fruit

    Votes | Option

    4007 | YES
    3989 | NO
