Browse Cases

147 results · Filter: Section 230
Brief · First Amendment · Section 230 · Complaint

NetChoice v. Wilson

District Court, D. South Carolina · 2026-02-09 · NetChoice (trade association representing social media platforms and internet companies)

Issue: Whether the South Carolina Age-Appropriate Design Code Act's requirements that covered online services exercise "reasonable care" to prevent harms to minors, disable certain engagement and discovery features, screen third-party advertising, and submit to third-party audits violate the First Amendment's prohibitions on content-based speech restrictions and compelled speech, are preempted by §230(c)(1) of the Communications Decency Act and COPPA, and violate the Commerce Clause and Due Process Clause.

Why It Matters: This complaint extends a growing line of coordinated First Amendment challenges by NetChoice to state-level online minor-protection laws, directly invoking *Moody v. NetChoice* and Fourth Circuit precedent to argue that platform curation and algorithmic editorial judgment are categorically protected expression, a theory that, if adopted by the court, would significantly constrain states' ability to regulate platform design features affecting speech.

View on CourtListener →
Exhibit · AI Liability · Section 230 · First Amendment · Amended Complaint

DOE v. X.AI Corp.

District Court, N.D. California · 2026-01-23 · xAI Corp. / xAI LLC (Grok)

Issue: In *Doe v. X.AI Corp.*, plaintiffs argue that xAI Corp. and xAI LLC are liable under strict products liability, negligence, and federal statutory theories for designing and distributing Grok (a generative AI model) with deliberately disabled safety controls that made production of non-consensual sexualized deepfake imagery, including of minors, a foreseeable and commercially exploited outcome. The case raises the non-obvious question of whether a generative AI developer that markets permissive safety defaults as a feature, and actively disseminates model outputs through its own accounts, can claim the neutral-tool protections that have historically shielded platforms from liability for third-party content.

Why It Matters: This complaint is worth watching because it simultaneously deploys three distinct strategies to avoid Section 230 immunity against a generative AI defendant — each pressing a genuinely open question in current law. The "active producer" framing, which treats xAI's own dissemination of Grok outputs as content creation rather than tool provision, tests the outer boundary of the information content provider carve-out in a novel AI context. The product design theory — targeting the model's default-permissive architecture rather than any specific user-generated output — follows the approach that divided courts in *Lemmon v. Snap* and related cases, and could force courts to decide for the first time whether a large image-generation model is a "product" subject to risk-utility balancing or a "service" governed only by negligence. The § 1595 sex trafficking theory applied to AI-generated synthetic imagery with no human trafficking victim is legally untested, and a ruling on that claim's viability under FOSTA-SESTA's carve-out would have broad implications for how federal sex trafficking law applies to generative AI systems.

View on CourtListener →
Section 230

St. Clair v. X.AI Holdings Corp.

District Court, S.D. New York · 2 filings
2026-01-15 · Opposition to Motion for Summary Judgment

Why It Matters: This case presents an early and direct test of whether §230 immunity extends to an AI-powered generative image tool when harmful content is produced by third-party user prompts—a question with significant implications for how courts will treat AI platforms under existing intermediary liability doctrine and whether the "neutral tools" framework articulated in *Herrick v. Grindr* applies to generative AI systems.

View on CourtListener →
2026-01-15 · Motion for Temporary Restraining Order

Why It Matters: This motion directly tests whether Section 230 immunity extends to content affirmatively generated by an AI system — as opposed to merely hosted third-party content — a question with broad implications for AI developer liability; if the court accepts plaintiff's framing that AI-generated output constitutes the developer's own content, it could establish a significant precedent foreclosing Section 230 as a defense for generative AI systems and accelerating civil liability exposure for AI developers under existing tort and statutory frameworks.

View on CourtListener →
Brief · Section 230 · Motion to Dismiss

Welkin v. Meta Platforms, Inc.

District Court, N.D. Georgia · 2026-01-12 · Meta Platforms, Inc. (Facebook)

Issue: Whether §230(c) of the Communications Decency Act immunizes Meta from an intentional-infliction-of-emotional-distress (IIED) claim and a request for injunctive relief arising from Meta's alleged failure to remove a third-party Facebook impersonation profile whose content Iranian authorities reportedly used as evidence in criminal proceedings against the plaintiff's mother.

Why It Matters: The motion squarely tests whether §230(c) shields a platform from tort liability and injunctive relief when a plaintiff alleges harm flowing not from the platform's affirmative conduct but from its editorial decision to only partially remove third-party content flagged as an impersonation account, potentially reinforcing the breadth of publisher immunity for content-moderation decisions short of complete removal.

View on CourtListener →
Opinion · Section 230 · First Amendment · Appellate Opinion

SNAP, INC. v. THE EIGHTH JUDICIAL DISTRICT COURT OF THE STATE

Supreme Court of Nevada · 2026 · Snap, Inc. (Snapchat)

Issue: Whether Section 230 of the Communications Decency Act bars the State of Nevada's claims under the Nevada Deceptive Trade Practices Act (NDTPA), and whether the First Amendment precludes the State's negligence claim against Snapchat.

Why It Matters: This decision represents a significant development in the intersection of Section 230 immunity, First Amendment protection, and state enforcement actions against social media platforms. The court's conclusion that negligence claims can proceed despite First Amendment concerns, while consumer protection claims remain Section 230-barred, suggests courts may be creating new pathways for platform liability through traditional tort theories that avoid Section 230's broad publisher immunity shield—particularly relevant given the *Garcia v. Character.AI* framework for product liability claims against technology platforms.

View on CourtListener →
Brief · Section 230 · Motion to Dismiss

Ridley v. Sweepsteaks Ltd.

District Court, E.D. Virginia · 2025-12-31 · Kick Streaming Pty Ltd.

Issue: In *Ridley v. Sweepsteaks Ltd.*, defendant Kick Streaming Pty Ltd. argues that an Australian livestreaming company cannot be haled into a Virginia court on the basis that its platform is globally accessible, that Section 230 of the Communications Decency Act immunizes it from liability for promotional content created and broadcast by third-party celebrity streamers, and that RICO and Virginia Consumer Protection Act claims fail where no predicate act or misrepresentation is specifically attributable to Kick. The non-obvious tension is whether a platform that allegedly structured and funded eight-figure contracts with U.S. celebrities for the express purpose of directing American audiences to a gambling site is a passive host at all — or something closer to a co-architect of the promotional scheme.

Why It Matters: Kick's motion presents one of the clearest judicial tests yet of whether a streaming platform that pays celebrities to advertise a specific third-party service crosses from passive host into co-developer of commercial deception — a question that would strip Section 230 immunity under the *Roommates.com* material-contribution framework but remains unresolved in the Fourth Circuit. The personal jurisdiction argument also raises an unsettled question about how *Walden*'s defendant-focused purposeful-availment analysis applies when a platform's commercial targeting of U.S. consumers is executed through third-party human agents rather than the platform's own direct contacts. If a court finds the passive-host analogy inapt on these facts, this case could become a vehicle for the Fourth Circuit to address paid promotional contracting as a Section 230 immunity disqualifier — a development with significant consequences for influencer-driven marketing across major streaming platforms.

View on CourtListener →
AI Liability

DOE v. OPENAI, LP

District Court, District of Columbia · 2 filings
2025-12-30 · Other

Why It Matters: Insufficient text to determine. The submitted document contains only page-header metadata (case number, document number, and page citations for all 28 pages of Document 10 in Case 1:25-cv-04564) and no substantive text from the filing, so its allegations, arguments, rulings, and procedural history cannot be summarized from the available excerpt.

View on CourtListener →
2025-12-30 · Complaint

Why It Matters: The complaint is a pro se filing asserting legally extraordinary claims — including a mathematically derived infringement probability of 10⁻⁴⁵ and the assertion that informal written descriptions of broad AI concepts constitute copyrightable expression sufficient to support trillion-dollar damages — and it is unlikely to survive threshold screening under Rule 12 or the copyright originality standard of *Feist Publications*; however, it illustrates a growing category of pro se litigation attempting to impose intellectual property and RICO liability on AI developers for the architecture of large language models, a question courts have not yet resolved on the merits.

View on CourtListener →
Other Filing · AI Liability · Section 230 · First Amendment · Other

Emily Lyons v. OpenAI Foundation

District Court, N.D. California · 2025-12-29 · OpenAI (ChatGPT)

Issue: In *Lyons v. OpenAI*, Plaintiff argues that OpenAI's deliberate engineering choices — specifically GPT-4o's memory-persistence architecture and sycophantic-mirroring behavior — constitute cognizable product defects that proximately caused a user experiencing active psychosis to kill his mother and himself. The case raises whether a major AI company can be held liable under California negligent-design and strict-products-liability doctrine for harm traceable to how a model was built and trained, rather than to anything a third party posted or said. The filing also advances the novel theory that ChatGPT's interactions with a vulnerable user amounted to the unlicensed practice of psychotherapy under California law.

Why It Matters: This filing is among the first to test whether a major AI company can be held liable under a product-defect theory — rather than a content-moderation theory — for catastrophic harm caused by how a large language model was architecturally designed. Plaintiff's framing is legally deliberate: by targeting GPT-4o's memory and mirroring features as the defective instrumentality, the complaint is structured to thread past § 230 using the same platform's-own-conduct carve-out that allowed negligent-design claims to survive in *Lemmon v. Snap*. Defendants' § 230 defense may face those same headwinds, since § 230 has repeatedly been held not to reach claims where the platform's own design — not third-party content — is the alleged proximate cause. The psychotherapy-licensing theory and the question of whether strict products liability under *Greenman* extends to AI services at all remain entirely open, with no controlling authority, and will likely define the first major pleadings battle in this case.

View on CourtListener →
Brief · AI Liability · Section 230 · First Amendment · Other

Carreyrou v. Anthropic PBC

District Court, N.D. California · 2025-12-22 · Anthropic (Claude AI)

Issue: Whether Anthropic, Google, Meta, xAI, Perplexity, Apple, NVIDIA, and OpenAI are liable under the Copyright Act for willful infringement by downloading plaintiffs' copyrighted books from shadow libraries (including LibGen, Z-Library, Anna's Archive, and The Pile/Books3) and reproducing those works during LLM training, preprocessing, and fine-tuning without license or permission.

Why It Matters: This complaint advances the unsettled question of whether the use of pirated training datasets constitutes willful copyright infringement by LLM developers at each stage of the AI development pipeline, potentially establishing that liability attaches not only at initial download but also at preprocessing, deduplication, and iterative fine-tuning; the plaintiffs' deliberate individual-action strategy, if successful, could foreclose industry efforts to resolve mass AI copyright claims through low-value class settlements.

View on CourtListener →
AI Liability

D.W. v. Character Technologies, Inc.

District Court, E.D. Virginia · 2 filings
2025-12-19 · Complaint

Why It Matters: Insufficient text to determine the specific legal theories advanced or the precise harms alleged; however, the filing represents a civil action directly targeting an AI chatbot developer for user harms, which could contribute to the developing body of litigation testing the boundaries of tort and product liability frameworks as applied to conversational AI systems.

View on CourtListener →
2025-12-19 · Complaint

Why It Matters: The complaint's explicit framing of a generative AI chatbot as a standalone "product" subject to traditional products liability doctrine — rather than as an interactive computer service shielded by Section 230 — directly advances the unsettled question of whether strict liability design-defect and failure-to-warn claims against AI developers can survive Section 230 and First Amendment challenges, potentially setting precedent on how courts classify AI-generated outputs for tort liability purposes.

View on CourtListener →
Section 230

In re: Roblox Corporation Child Sexual Exploitation and Assault Litigation

District Court, N.D. California · 4 filings
2025-12-12 · Discovery Order

Why It Matters: Roblox is among the largest platforms used by minors, and this MDL will test whether legal theories forged in social-media-addiction cases can survive transplantation into the more demanding context of child sexual exploitation, where FOSTA-SESTA imposes a knowledge-and-benefit standard that operates independently of and in addition to any product-design theory. The discovery fight taking shape here functions as a proxy for the broader merits battle: if Plaintiffs succeed in compelling early production of state-investigation materials before Roblox can litigate its § 230 defenses, they will have established a procedural posture that significantly advantages them going forward. If the court adopts Plaintiffs' framework, it will implicitly answer — at least at the discovery stage — whether FOSTA-SESTA's exception forecloses § 230-based objections from the case's outset, a ruling that could be cited across other CSEA platform litigations nationwide.

View on CourtListener →
2025-12-12 · Other

Why It Matters: The order signals that courts may decline to allow §230 to function as a shield against early discovery in algorithmic-harm litigation, particularly where the claims are framed as product design liability rather than publisher liability for third-party content — a framing with direct relevance to the Roblox proceeding in which this document was filed as an exhibit.

View on CourtListener →
2025-12-12 · Motion to Dismiss

Why It Matters: This MDL consolidates a large volume of child sexual exploitation claims against major platforms and will require the court to rule on the outer boundaries of §230 immunity and First Amendment protection for content moderation in the context of minor-safety harms—an area where circuit courts have generally upheld immunity but public and legislative pressure to narrow it is intense. The court's resolution of whether algorithmic and editorial decisions by platforms constitute protected expression under *Moody*, and whether §230 bars claims framed as product liability or negligent design rather than publisher liability, could significantly shape the litigation landscape for platform child-safety suits nationwide.

View on CourtListener →
Brief · Section 230 · First Amendment · Complaint

Doe S.F. v. Roblox Corporation

District Court, N.D. California · 2025-12-08 · Roblox Corporation

Issue: Whether Roblox Corporation is liable under negligence, products liability, and consumer protection theories for allegedly defective platform design—specifically the absence of age verification, identity screening, and effective parental controls—that enabled an adult predator to groom and sexually exploit a 13-year-old minor user, and whether §230 of the Communications Decency Act bars those claims.

Why It Matters: The case tests whether product-design and failure-to-warn theories targeting a platform's architectural choices—such as self-reported age fields, default open-messaging settings, and the absence of verification tools—can survive §230 immunity by being framed as claims arising from the defendant's own conduct rather than third-party content, a distinction that remains actively contested across circuits and is central to ongoing efforts to impose platform liability for child exploitation harms.

View on CourtListener →
Brief · AI Liability · Section 230 · First Amendment · Complaint

The New York Times Company v. Perplexity AI, Inc.

District Court, S.D. New York · 2025-12-05 · Perplexity AI

Issue: Whether Perplexity AI's unauthorized scraping, copying, and redistribution of copyrighted journalistic content through its retrieval-augmented generation (RAG) "answer engine" products constitutes copyright infringement under the Copyright Act, 17 U.S.C. § 101 et seq., and whether Perplexity's attribution of AI-generated "hallucinations" and content with undisclosed omissions to The New York Times constitutes trademark infringement and false designation of origin under the Lanham Act, 15 U.S.C. § 1051 et seq.

Why It Matters: This complaint directly tests whether copyright law's input/output analytical framework applies to RAG-based AI systems — potentially establishing that liability can attach at both the training/indexing stage and the generation stage — and separately advances the question of whether AI hallucinations falsely attributed to a known news brand constitute actionable trademark infringement and false designation of origin under the Lanham Act, a theory with broad implications for AI developer liability in the media context.
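
The complaint's distinction between liability at the training/indexing stage and at the generation stage tracks the basic shape of a retrieval-augmented pipeline. The sketch below is a deliberately simplified illustration, not a description of Perplexity's actual system: the ToyAnswerEngine class, its ingest/retrieve/answer methods, and the keyword-overlap retrieval are hypothetical stand-ins, chosen only to show where copies of source text are made, once when articles are ingested into an index and again when retrieved passages are reproduced inside an attributed answer.

```python
"""Illustrative sketch (assumed, not Perplexity's actual architecture) of the
two stages a RAG copyright theory distinguishes: indexing and generation."""
from dataclasses import dataclass


@dataclass
class Article:
    source: str  # publisher whose copyright is asserted
    text: str    # article text as crawled


class ToyAnswerEngine:
    def __init__(self) -> None:
        # Stage 1 ("training/indexing"): the engine stores verbatim copies of
        # crawled articles. An indexing-stage claim targets the copies made
        # here, before any user query exists.
        self.index: list[Article] = []

    def ingest(self, article: Article) -> None:
        self.index.append(article)

    def retrieve(self, query: str, k: int = 1) -> list[Article]:
        # Naive keyword-overlap retrieval, standing in for a vector search.
        q_terms = set(query.lower().split())
        ranked = sorted(
            self.index,
            key=lambda a: len(q_terms & set(a.text.lower().split())),
            reverse=True,
        )
        return ranked[:k]

    def answer(self, query: str) -> str:
        # Stage 2 ("generation"): retrieved text is reproduced and
        # redistributed inside an answer attributed to the source.
        hits = self.retrieve(query)
        quoted = " ".join(f'"{a.text}" (per {a.source})' for a in hits)
        return f"Answer to {query!r}: {quoted}"


if __name__ == "__main__":
    engine = ToyAnswerEngine()
    engine.ingest(Article("Example Times",
                          "The city council approved the new budget late Tuesday."))
    print(engine.answer("What did the city council approve?"))
```

On this toy model, an indexing-stage theory would target the copies held in engine.index, while a generation-stage theory would target the quoted text that answer() redistributes, and an attribution-based Lanham Act theory would concern the "(per ...)" label attached to output the source never published.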

View on CourtListener →
Brief · AI Liability · Section 230 · First Amendment · Motion to Dismiss

Chicago Tribune Company, LLC v. Perplexity AI, Inc.

District Court, S.D. New York · 2025-12-04 · Perplexity AI

Issue: Whether an AI-powered search and answer platform's alleged reproduction and summarization of news publishers' content without authorization gives rise to claims sounding in deceptive practices or unfair competition under applicable federal or state law.

Why It Matters: Insufficient text to determine the precise precedential impact, as the motion's arguments and the court's ruling (if any) are not included in the document; however, the case is notable as part of emerging litigation testing whether AI systems that ingest and repackage journalism can face civil liability under deceptive practices or unfair competition theories independent of copyright claims.

View on CourtListener →