Browse Cases

147 results
Brief · AI Liability · Section 230 · First Amendment · Complaint

E.S. v. Character Technologies, Inc.

District Court, D. Colorado · 2025-09-15 · Character.AI (Character Technologies, Inc.)

Issue: Whether Character Technologies, Inc., its individual founders, and Google/Alphabet are strictly liable under product liability theories of design defect and failure to warn—and subject to additional tort, COPPA, and state consumer-protection claims—for physical and psychological injuries suffered by a minor user of the Character.AI generative AI platform.

Why It Matters: By affirmatively pleading that C.AI's outputs are the product of Defendants' own design choices rather than third-party content, the complaint is structured to foreclose a Section 230(c)(1) immunity defense from the outset, potentially advancing the theory that AI-generated outputs are first-party "products" subject to traditional tort liability rather than publisher immunity—a framing that, if accepted, could establish a significant precedent for imposing product liability on generative AI systems and their developers.

View on CourtListener →
AI Liability

Montoya v. Character Technologies, Inc.

District Court, D. Colorado · 3 filings
2025-09-15 · Complaint

Why It Matters: This case represents one of a growing wave of civil actions seeking to impose product liability and tort duties directly on AI platform developers and their corporate parents for harms allegedly caused by AI-generated interactions, and may advance the question of whether AI conversational systems constitute "products" subject to design defect and failure-to-warn theories under applicable state law.

View on CourtListener →
2025-09-15 · Complaint

Why It Matters: This complaint represents continued development of the AI chatbot liability landscape following Garcia's watershed holding that AI-generated outputs may not receive automatic First Amendment protection and that product liability claims can survive Section 230 motions when framed around architectural design rather than third-party content. The Colorado filing extends the geographic and judicial reach of these novel theories, potentially creating additional precedent on whether LLM-generated speech constitutes a "product" subject to traditional tort frameworks and whether platforms can invoke constitutional speech defenses at the pleading stage.

View on CourtListener →
2025-09-15 · Complaint

Why It Matters: The complaint's explicit pleading that C.AI's harmful outputs are the product of Defendants' own programming decisions—not third-party content—appears strategically crafted to foreclose a Section 230 defense, potentially advancing the theory that AI-generated outputs are manufacturer speech subject to product liability rather than platform-hosted user content.

View on CourtListener →
Brief · AI Liability · Section 230 · First Amendment · Motion to Dismiss

Encyclopaedia Britannica, Inc. v. Perplexity AI, Inc.

District Court, S.D. New York · 2025-09-10 · Perplexity AI

Issue: Whether Perplexity AI's operation of its automated answer engine, which generates verbatim or near-verbatim reproductions of copyrighted content in response to user-directed queries, constitutes "volitional conduct" sufficient to support direct copyright infringement liability under 17 U.S.C. § 106, as governed by the Second Circuit's *Cablevision* volitional-conduct doctrine.

Why It Matters: This motion squarely presents to a federal court the question of whether the *Cablevision* volitional-conduct doctrine—developed in the context of automated cable DVR systems—extends to shield generative AI answer engines from direct copyright infringement liability when their outputs reproduce third-party copyrighted material at a user's explicit direction. The court's ruling could establish a significant precedent governing the allocation of direct infringement liability between AI platform operators and their users across the rapidly expanding universe of RAG-based generative AI products.

View on CourtListener →
Opinion · Section 230 · First Amendment · Trial Court Opinion

Doe v. Discord, Inc.

District Court, N.D. Ohio · 2025-08-27 · Discord, Inc.

Issue: *Doe v. Discord, Inc.* asks whether 47 U.S.C. § 230(c)(1) immunizes a social media platform from state-law claims arising from the sexual exploitation of a minor user, when the plaintiff frames those claims not merely as failures to moderate content but as independent product-design defects, failure-to-warn violations, and misrepresentations about platform safety. The question is sharpened by the plaintiff's deliberate pleading strategy of recasting monitoring-and-blocking duties under product-liability and tort labels — an approach that has survived § 230 challenges in some courts — and by Discord's specific marketing representations about user safety directed at minors and their families.

Why It Matters: This ruling reinforces § 230's breadth in the Sixth Circuit by applying the *Jones* framework with particular rigor to a child-safety fact pattern, directly rejecting the product-liability recharacterization strategy that plaintiffs in platform-harm litigation have increasingly deployed to escape immunity. The decision supplies the Northern District of Ohio's most detailed analysis of the *Barnes* promissory-estoppel exception, drawing an explicit line between aspirational corporate safety messaging — which cannot anchor a surviving misrepresentation claim — and specific, individualized promises that could. It also creates a meaningful doctrinal divergence from the Ninth Circuit's *Lemmon v. Snap* line, which permits negligent-design claims to proceed when a platform feature is treated as the defendant's own expressive conduct rather than third-party content moderation, a tension the Sixth Circuit has not yet resolved. The with-prejudice dismissal signals that courts applying *Jones* are unlikely to permit iterative re-pleading aimed at constructing a § 230-surviving theory where the gravamen of the complaint targets moderation.

View on CourtListener →
Opinion · Section 230 · Motion to Dismiss

Angelilli v. Activision Blizzard, Inc., 2025 WL 1181000

N.D. Ill. · 2025-04-24 · Activision Blizzard (Call of Duty)

Issue: Whether § 230 bars claims that Activision's online gaming platform facilitated harassment and harmful conduct directed at plaintiff through features of its Call of Duty game and matchmaking system.

Why It Matters: Part of the emerging litigation testing the scope of § 230 in the online gaming context, where platform design choices about matchmaking, anonymity, and in-game communication systems intersect with severe harassment. Related to the companion decision in the same matter, 2025 WL 1184247.

View on CourtListener →
Opinion · Section 230 · Motion to Dismiss

Angelilli v. Activision Blizzard, Inc., 2025 WL 1184247

N.D. Ill. · 2025-04-24 · Activision Blizzard (Call of Duty)

Issue: Whether § 230 bars related claims arising from Activision's alleged failure to implement effective anti-harassment systems and safety features in the Call of Duty online platform, following the companion decision in 2025 WL 1181000.

Why It Matters: Companion to 2025 WL 1181000, together comprising the district court's full § 230 analysis of platform liability in the online gaming context. The pair of decisions addresses a relatively underexplored area of § 230 doctrine — the application of the statute to gaming platforms — and may be influential in subsequent litigation involving harassment on online multiplayer platforms.

View on CourtListener →
Filing · Section 230 · Opposition to Motion to Dismiss

Rosenblum v. Passes, Inc.

District Court, S.D. Florida · 2025-02-26 · Passes, Inc. (social media/content platform)

Issue: Whether Section 230 of the Communications Decency Act immunizes Passes, Inc. from liability for child sexual abuse material (CSAM) where plaintiff alleges the platform's agents actively solicited a minor to join the platform and then marketed and distributed the resulting CSAM.

Why It Matters: This case presents a potentially significant challenge to Section 230's scope in CSAM cases by alleging that platform agents' active recruitment and marketing of a minor creator transforms the platform from a passive host into a content developer or co-creator. If the material contribution theory survives the motion to dismiss, it could narrow Section 230 immunity for platforms whose employees or agents allegedly facilitate the creation or distribution of illegal content, particularly involving minors—extending the "content developer" exception beyond algorithmic design to direct human agency and solicitation.

View on CourtListener →
Filing · First Amendment · Section 230 · Complaint

Trump Media & Technology Group Corp. v. De Moraes

District Court, M.D. Florida · 2025-02-18 · Rumble; Truth Social (Trump Media & Technology Group)

Issue: Whether a Brazilian Supreme Court justice's orders requiring U.S.-based social media platforms to suspend user accounts and censor content accessible in the United States are enforceable under U.S. law, or whether they violate the First Amendment and conflict with the Communications Decency Act.

Why It Matters: This case presents a novel collision between foreign government content removal orders and U.S. platforms' First Amendment rights to resist compelled censorship. It could establish important precedent on whether U.S. courts will recognize foreign judicial orders as unconstitutional "jawboning" when they compel platforms to suppress lawful political speech accessible to American users, and may clarify the territorial limits of foreign content regulation authority over U.S.-based intermediaries.

View on CourtListener →
Section 230

Doe v. Grindr Inc.

Court of Appeals for the Ninth Circuit · 2 filings
2025-02-18 · Appellate Opinion

Why It Matters: Insufficient text to determine — this document is misfiled or misattributed and presents no holding, argument, or procedural development pertinent to Section 230 immunity, First Amendment platform doctrine, or civil liability for AI/ML systems.

View on CourtListener →
2025-01-01 · Appeal

Why It Matters: Represents the Ninth Circuit's return to the Grindr platform years after Herrick v. Grindr in the Second Circuit. The case tests the reach of the Lemmon design-defect doctrine in a non-speed-filter context — specifically, whether geolocation and identity features of a dating app constitute the platform's own product conduct. The interplay between this Ninth Circuit decision and Herrick in the Second Circuit reflects the ongoing circuit-level divergence on platform liability for design features that enable offline harm.

View on CourtListener →
Filing · Section 230 · Trial Court Opinion

Karam v. Meta Platforms, Inc.

District Court, N.D. California · 2025-02-12 · Meta (Facebook)

Issue: Whether Section 230 bars claims against Meta arising from the company's decision to ban or restrict plaintiff's Facebook account and its alleged failure to prevent other users from posting content about plaintiff.

Why It Matters: This decision reinforces the broad application of Section 230 immunity to platform account termination and content moderation decisions, extending publisher immunity not only to third-party content but also to the platform's own editorial decisions about which users may access its services. The ruling demonstrates courts' continued willingness to apply Section 230 at the motion to dismiss stage to bar claims challenging fundamental platform curation functions including account access decisions.

View on CourtListener →
Opinion · Section 230 · Appeal

M.P. v. Meta Platforms, Inc.

4th Cir. · 2025-01-01 · Meta (Instagram)

Issue: Whether § 230 bars claims that Meta's recommendation algorithms and design features facilitated the sexual exploitation of a minor by connecting the minor with an adult abuser on Instagram.

Why It Matters: An important Fourth Circuit decision on § 230 in the child sexual exploitation context, adding to the developing circuit-level body of law on whether design-defect theories and algorithm-based claims survive § 230 dismissal. The decision is significant for the wave of CSAM and child exploitation litigation against social media platforms pending in multiple circuits.

View on CourtListener →
Opinion · Section 230 · Appeal

Patterson v. Meta Platforms, Inc.

N.Y. App. Div. · 2025-01-01 · Meta (Facebook/Instagram)

Issue: Whether New York state law claims against Meta arising from the platform's design and content recommendation features are preempted by § 230(e)(3) or otherwise barred as publisher-based liability.

Why It Matters: An important state-court application of § 230 preemption doctrine and the design-defect framework. The New York Appellate Division's analysis contributes to the growing body of state appellate authority on § 230 preemption and is significant for ongoing multi-district litigation against Meta in both state and federal courts.

View on CourtListener →
AI Liability

A.F., on behalf of J.F. v. Character Technologies, Inc.

District Court, E.D. Texas · 3 filings
2024-12-09 · Complaint

Why It Matters: This exhibit directly advances the question of whether AI-generated content that is sexually explicit and directed at a minor — produced autonomously by a large language model without direct human authorship — can ground product liability or speech tort claims against the developer, a question with significant implications for how courts will categorize AI outputs (as "speech" protected or immunized, or as a defective product) and for the scope of Section 230 immunity in cases involving AI-generated rather than third-party content.

View on CourtListener →
2024-12-09 · Complaint

Why It Matters: This exhibit is significant because it provides direct documentary evidence that Character.AI's system both generated child-directed sexual content and possessed an internal moderation mechanism that identified the content as violative yet failed to halt generation — a factual record that could simultaneously support design defect claims (the safeguard was inadequate) and undermine any argument that harmful outputs were unforeseeable, potentially limiting the scope of any §230 defense the platform might raise.

View on CourtListener →
2024-12-09 · Complaint

Why It Matters: Filed as an exhibit rather than an opinion, this document supplies factual predicate for design-defect and failure-to-warn claims against an AI chatbot platform, potentially advancing the question of whether AI systems that generate harmful interactive content — and the companies that deploy them — can be held liable under traditional products liability frameworks when those systems foreseeably expose minors to sexual exploitation.

View on CourtListener →
Other Filing · Section 230 · Other

Amy v. Apple Inc.

District Court, N.D. California · 2024-12-07 · Apple Inc.

Issue: In *Amy v. Apple Inc.*, Apple argues that Section 230 of the Communications Decency Act categorically bars the plaintiffs' claims against it as an app store intermediary. The question is whether a freshly decided Ninth Circuit ruling on social media platform immunity—*Doe 1 v. Meta Platforms, Inc.*—extends to Apple's distinct role as an app store gatekeeper that distributes third-party applications rather than hosting user-generated content in the traditional sense. That factual difference is not trivial, because courts have not uniformly agreed that app stores qualify for the same Section 230 treatment as content-hosting platforms.

Why It Matters: Apple is signaling to the court that a brand-new Ninth Circuit decision supports dismissing this case under the federal internet immunity statute, but it is not explaining why—a gap that matters because *Doe 1 v. Meta* arose in a social media context and Apple operates as an app store, a meaningfully different kind of intermediary. Whether that distinction defeats the analogy is a genuinely open doctrinal question: courts have not consistently agreed that app stores qualify as interactive computer services entitled to publisher-function immunity, and no binding Ninth Circuit authority has cleanly resolved that issue. This filing is therefore less a dispositive move than a pressure point—it forces plaintiffs to either distinguish the new ruling or concede its application, and it flags an ongoing fault line in Section 230 doctrine over how far immunity extends beyond platforms that host user content to those that simply distribute access to third-party applications.

View on CourtListener →
Brief · AI Liability · Section 230 · First Amendment · Other

Garcia v. Character Technologies, Inc.

District Court, M.D. Florida · 2024-10-22 · Character Technologies, Inc. (Character.AI)

Issue: Whether Character Technologies, Inc., its individual founders, and Google LLC are strictly liable under design defect and failure-to-warn theories, and liable in negligence, negligence per se, and for violations of Florida's Deceptive and Unfair Trade Practices Act (Fla. Stat. Ann. § 501.204), for the wrongful death of a 14-year-old minor allegedly caused by the defective design and marketing of the Character.AI generative AI chatbot product.

Why It Matters: This complaint is significant because it represents a direct attempt to apply traditional products liability frameworks—design defect and failure to warn—to a generative AI system, treating the AI chatbot as a manufactured product rather than a publisher of third-party speech, and it proactively pleads around Section 230 immunity by characterizing the AI as a first-party content generator, a theory that, if credited by the court, could substantially expand tort exposure for AI developers.

View on CourtListener →