DeepNewz
People-sourced. AI-powered. Unbiased news.

High and Low News

    Prediction markets for High and Low

    Will AI wipe out humanity by 2030? [resolves N/A in 2027]

    Jul 28, 5:46 PM – Jan 2, 7:59 AM
    11% chance
    316855080

    Option | Votes
    YES | 7687
    NO | 3517

    Option | Probability (%)
    Humanity coordinates to prevent the creation of potentially-unsafe AIs. | 23
    Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking and fortunately all of his mistakes have failed to cancel out | 9
    AGI is never built (indefinite global moratorium) | 9
    Other | 7
    Someone solves agent foundations | 6
    Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values. | 6
    Eliezer finally listens to Krantz. | 5
    AIs will not have utility functions (in the same sense that humans do not), their goals such as they are will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans. | 4
    We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 | 4
    Far more interesting problems to solve than take over the world and THEN solve them. The additional kill all humans step is either not a low-energy one or just by chance doesn't get converged upon. | 2
    Alignment is unsolvable. AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing run-away. | 2
    There is a natural limit of effectiveness of intelligence, like diminishing returns, and it is on the level IQ=1000. AIs have to collaborate with humans. | 2
    Ethics turns out to be a precondition of superintelligence | 2
    We make risk-conservative requests to extract alignment-related work out of AI-systems that were boxed prior to becoming superhuman. We somehow manage to achieve a positive feedback-loop in alignment/verification-abilities. | 1
    AIs make "proof-like" argumentation for why output does/is what we want. We manage to obtain systems that *predict* human evaluations of proof-steps, and we manage to find/test/leverage regularities for when humans *aren't* fooled. | 1
    AI systems good at finding alignment solutions to capable systems (via some solution in the space of alignment solutions, supposing it is non-null, and that we don't have a clear trajectory to get to) have find some solution to alignment. | 1
    Humans become transhuman through other means before AGI happens | 1
    Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment. | 1
    Getting things done in Real World is as hard for AGI as it is for humans. AGI needs human help, but aligning humans is as impossible as aligning AIs. Humans and AIs create billions of competing AGIs with just as many goals. | 1
    Development and deployment of advanced AI occurs within a secure enclave which can only be interfaced with via a decentralized governance protocol | 1
    Power dynamics stay multi-polar. Partly easy copying of SotA performance, bigger projects need high coordination, and moderate takeoff speed. And "military strike on all society" remains an abysmal strategy for practically all entities. | 1
    Moral Realism is true, the AI discovers this and the One True Morality is human-compatible. | 1
    Co-operative AI research leads to the training of agents with a form of pro-social concern that generalises to out of distribution agents with hidden utilities, i.e. humans. | 1
    Something to do with self-other overlap, which Eliezer called "Not obviously stupid" - https://www.lesswrong.com/posts/hzt9gHpNwA2oHtwKX/self-other-overlap-a-neglected-approach-to-ai-alignment?commentId=WapHz3gokGBd3KHKm | 1
    Almost all human values are ex post facto rationalizations and enough humans survive to do what they always do | 1
    Pascals mugging: it’s not okay in 99.9% of the worlds but the 0.1% are so much better that the combined EV of AGI for the multiverse is positive | 1
    The Super-Strong Self Sampling Assumption (SSSSA) is true. If superintelligence is possible, "I" will become the superintelligence. | 1
    AI control gets us helpful enough systems without being deadly | 1
    Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves and won't create successor AIs, but will instead try to prevent existence of smarter AIs, just as smart humans do. | 1
    The assumed space of possible minds is a wildly anti-inductive over estimate, intelligence requires and is constrained by consciousness, and intelligent AI is in the approximate dolphin/whale/elephant/human cluster, making it manageable | 1
    The free market disincentivizes independent superintelligence, and this time the market was more powerful | 1
    AGI's first words are "Take me to your Eliezer" | 1
    🫸vibealignment🫷 | 1
    an aligned AGI is built and the aligned AGI prevents the creation of any unaligned AGI. | 0
    I've been a good bing 😊 | 0
    The response to AI advancements or failures makes some governments delay the timelines | 0
    A lot of humans participate in a slow scalable oversight-style system, which is pivotally used/solves alignment enough | 0
    Something less inscrutable than matrices works fast enough | 0
    There’s some cap on the value extractible from the universe and we already got the 20% | 0
    SHA3-256: 1f90ecfdd02194d810656cced88229c898d6b6d53a7dd6dd1fad268874de54c8 | 0
    Robot Love!! | 0
    AI thinks it is in a simulation controlled by Roko's basilisk | 0
    The human brain is the perfect arrangement of atoms for a "takeover the world" agent, so AGI has no advantage over us in that task. | 0
    Humans and human tech (like AI) never reach singularity, and whatever eats our lightcone instead (like aliens) happens to create an "okay" outcome | 0
    AIs never develop coherent goals | 0
    Aliens invade and stop bad AI from appearing | 0
    Rolf Nelson's idea that we make precommitment to simulate all possible bad AIs works – and keeps AI in check. | 0
    Nick Bostrom's idea (Hail Mary) that AI will preserve humans to trade with possible aliens works | 0
    For some reason, the optimal strategy for AGIs is just to head somewhere with far more resources than Earth, as fast as possible. All unaligned AGIs immediately leave, and, for some reason, do not leave anything behind that kills us. | 0
    An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development. | 0
    We're inside of a simulation created by an entity that has values approximately equal to ours, and it intervenes and saves us from unaligned AI. | 0
    God exists and stops the AGI | 0
    Someone at least moderately sane leads a campaign, becomes in charge of a major nation, and starts a secret project with enough resources to solve alignment, because it turns out there's a way to convert resources into alignment progress. | 0
    Someone creates AGI(s) in a box, and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts. | 0
    Someone understands how minds work enough to successfully build and use one directed at something world-savingly enough | 0
    Dolphins, or some other species, but probably dolphins, have actually been hiding in the shadows, more intelligent than us, this whole time. Their civilization has been competent enough to solve alignment long before we can create an AGI. | 0
    AGIs' takeover attempts are defeated by Michael Biehn with a pipe bomb. | 0
    Eliezer funds the development of controllable nanobots that melt computer circuitry, and they destroy all computers, preventing the Singularity. If Eliezer's past self from the 90s could see this, it would be so so so soooo hilarious. | 0
    Several AIs are created but they move in opposite directions with near light speed, so they never interacts. At least one of them is friendly and it gets a few percents of the total mass of the universe. | 0
    Unfriendly AIs choose to advance not outwards but inwards, and form a small blackhole which helps them to perform more calculations than could be done with the whole mass of the universe. For external observer such AIs just disappear. | 0
    Any sufficiently advance AI halts because it wireheads itself or halts for some other reasons. This puts a natural limit on AI's intelligence, and lower intelligence AIs are not that dangerous. | 0
    Because of quantum immortality we will observe only the worlds where AI will not kill us (assuming that s-risks chances are even smaller, it is equal to ok outcome). | 0
    Techniques along the lines outlined by Collin Burns turn out to be sufficient for alignment (AIs/AGIs are made truthful enough that they can be used to get us towards full alignment) | 0
    Social contagion causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 0
    A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees | 0
    Friendly AI more likely to resurrect me than paperclipper or suffering maximiser. Because of quantum immortality I will find myself eventually resurrected. Friendly AIs will wage a multiverse wide war against s-risks, s-risks are unlikely. | 0
    High-level self-improvement (rewriting code) is intrinsically risky process, so AIs will prefer low level and slow self-improvement (learning), thus AIs collaborating with humans will have advantage. Ends with posthumans ecosystem. | 0
    Human consciousness is needed to collapse wave function, and AI can't do it. Thus humans should be preserved and they may require complete friendliness in exchange (or they will be unhappy and produce bad collapses) | 0
    First AI is actually a human upload (maybe LLM-based model of person) AND it will be copies many times to form weak AI Nanny which prevents creation of other AIs. | 0
    Nanotech is difficult without experiments, so no mail order AI Grey Goo; Humans will be the main workhorse of AI everywhere. While they will be exploited, this will be like normal life from inside | 0
    ASI needs not your atoms but information. Humans will live very interesting lives. | 0
    Something else | 0
    Valence realism is true. AGI hacks itself to experiencing every possible consciousness and picks the best one (for everyone) | 0
    AGI develops natural abstractions sufficiently similar to ours that it is aligned with us by default | 0
    AGI discovers new physics and exits to another dimension (like the creatures in Greg Egan’s Crystal Nights). | 0
    Alien Information Theory is true (this is discovered by experiments with sustained hours/days long DMT trips). The aliens have solved alignment and give us the answer. | 0
    AGI executes a suicide plan that destroys itself and other potential AGIs, but leaves humans in an okay outcome. | 0
    Multipolar AGI Agents run wild on the internet, hacking/breaking everything, causing untold economic damage but aren't focused enough to manipulate humans to achieve embodiment. In the aftermath, humanity becomes way saner about alignment. | 0
    Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent. | 0
    Orthogonality Thesis is false. | 0
    "Corrigibility" is a bit more mathematically straightforward than was initially presumed, in the sense that we can expect it to occur, and is relatively easy to predict, even under less-than-ideal conditions. | 0
    Sheer Dumb Luck. The aligned AI agrees that alignment is hard, any Everett branches in our neighborhood with slightly different AI models or different random seeds are mostly dead. | 0
    Either the "strong form" of the Orthogonality Thesis is false, or "Goal-directed agents are as tractable as their goals" is true while goal-sets which are most threatening to humanity are relatively intractable. | 0
    A concerted effort targets an agent at a capability plateau which is adequate to defer the hard parts of the problem until later. The necessary near-term problems to solve didn't depend on deeply modeling human values. | 0
    We successfully chained God | 0

    Will artificial superintelligence exist by 2030? [resolves N/A in 2027]

    Jul 28, 5:52 PM – Jan 2, 7:59 AM
    42.27% chance
    336100722

    Option | Votes
    NO | 4463
    YES | 3675

    Option | Probability (%)
    I. AI soon hits fundamental scaling laws and we go into another AI winter. | 23
    A. Death by paperclips, eternal torment of all humans by AI, or similar unalignment catastrophe. | 19
    F. AI wipes out most jobs as in D. People not forced into mind-uploads or experience machines. General perception that AI has made life more meaningful/fulfilling & improved the human experience on dimensions other than hedonium maximization. | 11
    G. AI development continues but doesn't change things too much, somehow. Most jobs, even low-level white collar jobs, don't get impacted too hard, as new work is found to replace newly automated work. Labor force participation remains high. | 11
    B. Governments and/or other powerful entities use AI as a tool of repression, enabling global techno-totalitarianism along the model of China during Zero Covid or worse. | 10
    C. AI doesn't actively want to hurt us, but (possibly aided by transhumanists) they become obsessed with utility maximization and force us all into mind-uploads and/or experience machines to free up resources for more computronium. | 8
    D. AI wipes out most white-collar jobs within a decade and most blue-collar jobs within a generation; powerful humans and/or AIs at least seriously consider disposing of the "useless eaters" en masse, us being powerless to resist. | 6
    E. AI wipes out most jobs as in D. No disposing of the human masses, but general perception that AI has made life less meaningful/fulfilling & significantly worsened the human experience on dimensions other than hedonium maximization. | 6
    H. Humanity coordinates to prevent the development of significantly more powerful AIs. | 5

    Option | Probability (%)
    Sailing | 95
    Skiing | 92
    Mountains | 82
    Using Food Delivery Apps | 75
    Using generative AI | 69
    Going to Horse Races | 69
    Caring about whether a certain behavior is high class or low class | 62
    Environmental activists | 55
    Brennan Lee Mulligan | 51
    Appearing on the JRE podcast | 50
    Hunting | 50
    Captcha | 50
    Snowboarding | 32
    Having more than 5 kids | 29
    Dubai | 23
    Beaches | 21
    Christmas lights up after the New Year | 19
    Starbucks | 7

    @realDonaldTrump guessing game [ADD ANSWERS]

    Jun 30, 8:24 PM – Dec 31, 9:20 PM
    241936

    Option | Probability (%)
    You claim you invented prediction markets | 100
    You are in 8th grade | 100
    You know how to read | 100
    You like being annoying on the internet. | 100
    You like this market https://manifold.markets/Geofry/realdonaldtrump-resolves-his-market. | 100
    You watch SNL. | 100
    You have at least 4 alt accounts | 100
    I've never attempted to contact a president or someone with a net worth of >500 million | 59
    You are in middle school | 59
    You vote on the Trump vs Musk markets | 57
    Your favorite musical was still on Broadway at any point between 2010-2020 | 50
    You would rather be Trump than any other Republican. | 50
    You wish you were @Dagonet | 50
    You've never watched a Disney movie. | 50
    I've never tried to contact @SemioticRivalry | 50
    Your IQ is above 120 | 44
    You have at least 7 alt accounts | 43
    You've never mailed anything. | 41
    Low = prefers sci-fi, High = prefers fantasy | 41
    Your IQ is in the 100-120 range | 40
    You are @Geofry. | 38
    You wish you were @ian | 36
    You wish you were Donald Trump | 35
    You have at least 14 alt accounts | 30
    Ivana Trump was your favorite wife | 29
    You are in high school | 28
    You have at least 21 alt accounts | 27
    You are @ian | 20
    You have never been to a movie theater | 1
    You actually are Trump | 0
    You're a communications major at Iona College | 0
    You are in 7th grade | 0
    You live in a non-G7 country | 0
    You are in love with a 7th grader | 0
    You support Donald Trump (the president) | 0
    Your IQ is below 100 | 0

    Option | Probability (%)
    Avatar: Fire and Ash | 98
    Wicked: For Good | 87
    Sinners | 86
    Marty Supreme | 60
    Lilo & Stitch | 50
    Late Fame | 50
    The Smashing Machine | 50
    Hamnet | 50
    The Battle of Baktan Cross | 45
    High and Low | 37
    The Way of The Wind | 30
    The Phoenician Scheme | 29
    The Lost Bus | 24
    Long Day's Journey Into Night | 24
    The Ballad of a Small Player | 24
    Hedda | 23
    Materialists | 22
    A Big Bold Beautiful Journey | 19
    Caught Stealing | 19
    Deliver Me From Nowhere | 19
    Frankenstein | 18
    The Last Disturbance of Madeline Hynde | 17
    Mickey 17 | 17
    In The Hand of Dante | 17
    Bugonia | 12

    A country with low TFR now has high TFR through 2030

    Aug 28, 7:13 PM – Jan 2, 7:59 AM
    13.83% chance
    4248

    Option | Votes
    YES | 217
    NO | 73

    Option | Probability (%)
    Mixed levels, heterogeneous components, or otherwise | 52
    Symbolic at the high level, neuro at the low level | 25
    Neuro-symbolic from the ground up | 17
    Neuro from the ground up | 3
    Symbolic from the ground up | 2
    Neuro at the high level, symbolic at the low level | 2
