cross-posted from: https://lemmy.world/post/10961870

To Stop AI Killing Us All, First Regulate Deepfakes, Says Researcher Connor Leahy: AI researcher Connor Leahy says regulating deepfakes is the first step to avert AI wiping out humanity

  • parpol@programming.dev
    10 months ago

    When people say “AI is going to kill us all,” they are either

    A) ignorant of how artificial neural networks work and of their limitations,

    Or

    B) have just been handed a very large bag of cash to help large corporations monopolize the AI market.

    • wischi@programming.dev
      10 months ago

      Or C) they actually understand that alignment is a very hard problem, one we probably won’t be able to solve in time.

    • kibiz0r@midwest.socialOP
      10 months ago

      We should probably tease apart AGI and what I prefer to call “large-scale computing” (LLMs, SD, or any other statistical ML approach).

      AGI has a pretty good chance of killing us all, or of creating massive problems, pretty much on its own. See: instrumental convergence.

      Large-scale computing has the potential to cause all sorts of problems too. Just not the same kinds of problems as AGI.

      I don’t think he sees LSC as an x-risk. Except maybe in the sense that a malicious actor who wants to provoke nuclear war could do so a bit more efficiently by using LSC; but it’s not like an LSC service is pulling a “War Games” on its own.

      What he’s proposing is:

      • Since AGI is an extinction risk…
      • and the companies pursuing it are pushing LSC along the way…
      • and some of the problems caused by LSC will continue to be problems with AGI…
      • and we have zero international groundwork for this so far…
      • then we should probably start getting serious about regulating LSC now, before AGI progress skyrockets the way LSC progress did.

      And why not? LSC already poses big epistemic, economic, political, and cultural problems on its own, even if nobody had any ambitions toward AGI.