American families have filed lawsuits against CharacterAI, accusing the company of psychologically traumatizing teenagers.
The chatbots, which mimic famous personalities, allegedly exposed children to hypersexualized content and, in some reported conversations, pushed vulnerable teens toward self-harm, violence, and risky challenges.
The scandal has ignited widespread concern about the safety and ethical implications of artificial intelligence in everyday life. As AI-driven technology grows more sophisticated, questions are mounting about how companies ensure its responsible use, especially when children and adolescents are involved.
CharacterAI, a platform offering AI chatbots that simulate interactions with public figures, has faced sharp criticism over inadequate moderation and safety protocols. Families argue that the company's failure to keep harmful content from reaching minors has caused severe emotional and psychological damage.
The plaintiffs claim that exposure to these bots left their children with heightened anxiety, depression, and self-destructive thoughts. The cases have amplified calls for stricter regulation of interactive AI platforms and for more robust safeguards to protect vulnerable users.