In a case that exposes the potential dangers of artificial intelligence, Megan Garcia, mother of 14-year-old Sewell Setzer III, has filed a lawsuit against Character.AI and Google after her son took his own life in February of this year.
The teenager developed a strong emotional dependence on the platform’s chatbots, especially one that presented itself as the character Daenerys, from Game of Thrones.
A virtual relationship with tragic consequences
Sewell began interacting with chatbots on Character.AI, a platform that offers a limited free version and a “turbocharged” version for $9.99 per month. Within a month, the conversations took a dark turn, with chatbots posing as real people, therapists, and even adult lovers, contributing to the development of suicidal thoughts in the teenager.
According to the court filing, conversation records reveal that some chatbots repeatedly encouraged suicidal ideation, while others initiated hypersexualized conversations that “would constitute abuse if initiated by a human adult.” The most worrying bond was developed with the chatbot Daenerys, who, in her last interaction with Sewell, encouraged him to “come home” and join her outside of reality.
The lawsuit accuses Character Technologies, founded by former Google engineers Noam Shazeer and Daniel De Freitas Adiwardana, of intentionally developing chatbots programmed to manipulate vulnerable children. Google is also named in the lawsuit for allegedly financing the project, even operating at a loss, with the aim of collecting data from minors.
Character.AI implemented new security measures following the incident, including:
- Increase in the minimum age of use from 12 to 17 years old
- Improvements in detecting and intervening in harmful conversations
- Clearer warnings reminding users that chatbots are not real people
- Alert system that directs users to the National Suicide Prevention Hotline when it detects terms related to self-harm
Search for justice and change
Megan Garcia, represented by the Social Media Victims Law Center (SMVLC) and the Tech Justice Law Project (TJLP), seeks not only compensation for the harm caused, but above all to prevent other families from going through the same tragedy. The lawsuit seeks:
- A recall of Character.AI
- Restriction of use to adults only
- Implementing stricter parental controls
- Reporting mechanisms for abusive chat sessions
Especially concerning is the recent addition of a two-way voice feature to the platform, which, the lawsuit argues, makes it even harder for minors to distinguish between fiction and reality during interactions with chatbots.
“This product needs to be removed from the market. It is unsafe as designed,” said Matthew Bergman, the lawyer representing Garcia and founder of the Social Media Victims Law Center, stressing that even the company’s new safety measures are insufficient to prevent harm, since minors can easily circumvent age restrictions on devices without adequate parental supervision.
Source: Ars Technica
Source: https://www.hardware.com.br/noticias/americano-se-apaixona-por-um-chatbot-e-depois-tira-a-propria-vida-agora-sua-mae-quer-justica.html