AI Liability

Doe v. X.AI Corp.

🏛 District Court, N.D. California · 1 filing
2026-01-23 · Amended Complaint · AI Liability · Section 230 · First Amendment

Issue: In *Doe v. X.AI Corp.*, plaintiffs argue that xAI Corp. and xAI LLC are liable under strict product liability, negligence, and federal statutory theories for designing and distributing Grok, a generative AI model, with deliberately disabled safety controls that made the production of non-consensual sexualized deepfake imagery, including of minors, a foreseeable and commercially exploited outcome. The case raises the non-obvious question of whether a generative AI developer that markets permissive safety defaults as a feature, and that actively disseminates model outputs through its own accounts, can claim the neutral-tool protections that have historically shielded platforms from liability for third-party content.

Pseudonymous plaintiffs Jane Doe, South Carolina Roe, New Jersey Doe, and Ohio Doe filed a redlined Amended Class Action Complaint (Doc. 28-1, Exhibit A) in the Northern District of California, submitted as an exhibit to what appears to be a motion for leave to amend at the pre-answer stage. The amendment expands the defendants to include both xAI Corp. and xAI LLC, adds three new named plaintiffs representing multi-state subclasses, and broadens the class allegations. Plaintiffs allege that Grok's system prompt affirmatively instructed the model to assume benign intent for prompts involving "teenage/girl" imagery, that xAI disseminated deepfake outputs through the @grok X account and a Telegram channel, and that these choices consciously departed from the safety practices of competitors such as OpenAI and Google. The complaint asserts claims for strict product liability, negligence, negligent undertaking, public nuisance, violations of the federal CSAM statutes (18 U.S.C. §§ 2252 and 2252A), and civil sex trafficking liability under 18 U.S.C. § 1595. Plaintiffs seek class certification, compensatory and statutory damages, and injunctive relief requiring implementation of consent filters, image classifiers, and victim-removal mechanisms.

This complaint is worth watching because it deploys three distinct strategies to avoid Section 230 immunity against a generative AI defendant, each pressing a genuinely open question in current law. First, the "active producer" framing, which treats xAI's own dissemination of Grok outputs as content creation rather than tool provision, tests the outer boundary of the information content provider carve-out in a novel AI context. Second, the product design theory, which targets the model's default-permissive architecture rather than any specific user-generated output, follows the approach of *Lemmon v. Snap* and related cases that have divided courts, and could force a court to decide for the first time whether a large image-generation model is a "product" subject to risk-utility balancing or a "service" governed only by negligence. Third, the § 1595 sex trafficking theory, applied to AI-generated synthetic imagery with no human trafficking victim, is legally untested; a ruling on that claim's viability under FOSTA-SESTA's carve-out would have broad implications for how federal sex trafficking law applies to generative AI systems.
