The judge’s order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.
The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.
Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge’s order sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market.”
Absolutely not. As unsavory as that might be (and yes, I'm aware of Commonwealth v. Carter), it absolutely should be protected speech. This is a matter of personal responsibility and frankly a matter of personal autonomy. One's life is theirs to do with as they please, without fear of reprisal.
I’m a little confused by your comment. Do you think I’m blaming the kid? Or do you think it’s ok to talk someone into killing themselves, because the victim’s personal autonomy absolves the one doing the persuading of responsibility?
I do not think you’re blaming the kid. I’m saying that encouraging suicide should be protected speech, even though it isn’t always. Personal autonomy is one of the reasons I believe that, though not the only one.