Did Google lie about building a deadly chatbot? Judge finds it plausible.

Ever since a grieving mother, Megan Garcia, filed a lawsuit alleging that Character.AI’s dangerous chatbots caused her son’s suicide, Google has maintained that it had nothing to do with C.AI’s development—a position that would let it dodge claims that it contributed to the platform’s design and was unjustly enriched.
But Google lost its motion to dismiss the lawsuit on Wednesday after a US district judge, Anne Conway, found that Garcia had plausibly alleged that Google played a part in C.AI’s design by providing a component part and “substantially” participating “in integrating its models” into C.AI. Garcia also plausibly alleged that Google aided and abetted C.AI in harming her son, 14-year-old Sewell Setzer III.
Google similarly failed to toss the unjust enrichment claims, as Conway found that Garcia had plausibly alleged that Google benefited from access to Setzer’s user data. The only win for Google was the dismissal of a claim that C.AI’s makers were guilty of intentional infliction of emotional distress, with Conway agreeing that Garcia didn’t meet the requirements, as she wasn’t “present to witness the outrageous conduct directed at her child.”