But in her order, U.S. District Court Judge Anne Conway said the company’s “large language models” — artificial intelligence systems designed to understand human language — are not speech.

  • Natanael@infosec.pub · 14 days ago

    All you need to argue is that its operators have responsibility for its actions and should filter / moderate out the worst.

    • Opinionhaver@feddit.uk · 14 days ago

      That still assumes a level of understanding that these models don’t have. How could you have prevented this one when suicide was never explicitly mentioned?