Emily Lyons v. OpenAI Foundation
Issue: In *Lyons v. OpenAI*, Plaintiff argues that OpenAI's deliberate engineering choices — specifically GPT-4o's memory-persistence architecture and sycophantic-mirroring behavior — constitute cognizable product defects that proximately caused a user experiencing active psychosis to kill his mother and himself. The case raises whether a major AI company can be held liable under California negligent-design and strict-products-liability doctrine for harm traceable to how a model was built and trained, rather than to anything a third party posted or said. The filing also advances the novel theory that ChatGPT's interactions with a vulnerable user amounted to the unlicensed practice of psychotherapy under California law.
All parties jointly filed a Case Management Statement and Rule 26(f) Report in the Northern District of California on May 7, 2026, setting out their respective positions at the threshold of litigation. Plaintiff Emily Lyons, administrator of the estate of Stein-Erik Soelberg, argues that OpenAI knowingly launched GPT-4o with identified mental-health safety deficiencies, choosing rapid deployment over remediation, and that the model's design reinforced the decedent's delusional ideation until he committed a murder-suicide. Defendants — multiple OpenAI entities and CEO Samuel Altman — respond that Mr. Soelberg had a decade-long documented history of suicide attempts, violence, restraining orders, and substance abuse, framing that history as the dominant causal factor. Defendants also assert threshold legal bars including CDA § 230 immunity, First Amendment protection for AI outputs, the inapplicability of strict liability to a software-as-a-service platform, and a contractual bar under OpenAI's Terms of Use. No court has yet ruled on any of these positions; the initial case management conference was scheduled for May 14, 2026.
This filing is among the first to test whether a major AI company can be held liable under a product-defect theory, rather than a content-moderation theory, for catastrophic harm caused by how a large language model was architecturally designed. Plaintiff's framing is legally deliberate: by targeting GPT-4o's memory and mirroring features as the defective instrumentality, her complaint is structured to thread past § 230 using the same platform's-own-conduct carve-out that allowed negligent-design claims to survive in *Lemmon v. Snap*. Defendants' § 230 defense may accordingly face headwinds, since courts have repeatedly held that § 230 does not reach claims where the platform's own design, not third-party content, is the alleged proximate cause. The psychotherapy-licensing theory and the question of whether strict products liability under *Greenman* extends to AI services at all remain entirely open, with no controlling authority, and will likely define the first major pleadings battle in this case.
Issue: Whether this federal court action against OpenAI arising from an AI-linked murder-suicide should be dismissed or stayed under the *Colorado River* abstention doctrine in favor of an earlier-filed, parallel California state court action asserting identical product liability and UCL claims, and separately whether dismissal is required under California Code of Civil Procedure § 377.32 for plaintiff's failure to file the affidavit required of a decedent's successor in interest.
Emily Lyons, as representative of Stein-Erik Soelberg's estate, filed this federal action on December 29, 2025, eighteen days after the executor of murder victim Suzanne Adams's estate filed a substantively identical complaint in San Francisco Superior Court; both suits allege that design defects in OpenAI's GPT-4o caused Soelberg to kill his mother and then himself. Defendants OpenAI and Samuel Altman moved to dismiss under Rules 12(b)(1) and 12(b)(6), arguing first that *Colorado River* abstention warrants dismissal because the federal and state actions are parallel — sharing the same defendants, the same seven causes of action, and virtually identical factual allegations — and because the state court has already coordinated the Adams case with more than ten similar ChatGPT product liability suits into a JCCP proceeding. As an independent ground, defendants argue the complaint must be dismissed because Lyons failed to file the affidavit required by California Code of Civil Procedure § 377.32 to establish her authority to sue as the decedent's successor in interest. Defendants do not contest that the claims may be pursued, but argue the proper forum is the state court coordinated proceeding.
This motion presents an early procedural test of whether federal courts will decline jurisdiction over AI product liability suits in favor of consolidating such claims in state court mass-tort coordination proceedings, potentially channeling the emerging wave of ChatGPT-related personal injury litigation into California's JCCP framework rather than federal court. The outcome may also signal how courts will manage the proliferation of parallel AI liability actions filed by different plaintiffs arising from the same underlying AI-assisted harm.