Browse Cases
143 results

E.S. v. Character Technologies, Inc.
Why It Matters: Attached as a pleading exhibit rather than a judicial opinion, this report is notable as evidentiary support for civil claims against an AI chatbot developer based on the platform's own generative outputs, not third-party user content. That framing potentially distinguishes the claims from standard Section 230 immunity arguments and advances the theory that AI-generated harmful content targeting minors constitutes independently actionable conduct by the developer.
View on CourtListener →

Why It Matters: By affirmatively pleading that C.AI's outputs are the product of Defendants' own design choices rather than third-party content, the complaint is structured to foreclose a Section 230(c)(1) immunity defense from the outset, potentially advancing the theory that AI-generated outputs are first-party "products" subject to traditional tort liability rather than publisher immunity—a framing that, if accepted, could establish a significant precedent for imposing product liability on generative AI systems and their developers.

View on CourtListener →

Montoya v. Character Technologies, Inc.
Why It Matters: This case represents one of a growing wave of civil actions seeking to impose product liability and tort duties directly on AI platform developers and their corporate parents for harms allegedly caused by AI-generated interactions, and may advance the question of whether AI conversational systems constitute "products" subject to design defect and failure-to-warn theories under applicable state law.
View on CourtListener →

Why It Matters: This complaint represents continued development of the AI chatbot liability landscape following Garcia's watershed holding that AI-generated outputs may not receive automatic First Amendment protection and that product liability claims can survive Section 230 motions when framed around architectural design rather than third-party content. The Colorado filing extends the geographic and judicial reach of these novel theories, potentially creating additional precedent on whether LLM-generated speech constitutes a "product" subject to traditional tort frameworks and whether platforms can invoke constitutional speech defenses at the pleading stage.

View on CourtListener →

Why It Matters: The complaint's explicit pleading that C.AI's harmful outputs are the product of Defendants' own programming decisions—not third-party content—appears strategically crafted to foreclose a Section 230 defense, potentially advancing the theory that AI-generated outputs are manufacturer speech subject to product liability rather than platform-hosted user content.

View on CourtListener →

PENSKE MEDIA CORPORATION v. GOOGLE LLC
Issue: Whether Google's conditioning of search indexing and SERP placement on publishers' involuntary supply of content for AI Overviews, Featured Snippets, and LLM training constitutes unlawful reciprocal dealing, monopoly maintenance, and unlawful tying in violation of Sections 1 and 2 of the Sherman Act, 15 U.S.C. §§ 1–2.
Why It Matters: This complaint directly tests whether antitrust law — rather than copyright or Section 230 — can constrain a dominant platform's use of third-party content to power generative AI products, potentially establishing that coerced content licensing through monopoly search distribution is actionable under the Sherman Act and setting a framework for evaluating AI training and inference as anticompetitive leveraging conduct.
View on CourtListener →

Encyclopaedia Britannica, Inc. v. Perplexity AI, Inc.
Issue: Whether Perplexity AI's automated answer engine, which generates verbatim or near-verbatim reproductions of copyrighted content in response to user-directed queries, constitutes "volitional conduct" by Perplexity sufficient to support direct copyright infringement liability under 17 U.S.C. § 106, as governed by the Second Circuit's *Cablevision* volitional-conduct doctrine.
Why It Matters: This motion squarely presents to a federal court the question of whether the *Cablevision* volitional-conduct doctrine—developed in the context of automated cable DVR systems—extends to shield generative AI answer engines from direct copyright infringement liability when their outputs reproduce third-party copyrighted material at a user's explicit direction. The court's ruling could establish a significant precedent governing the allocation of direct infringement liability between AI platform operators and their users across the rapidly expanding universe of RAG-based generative AI products.
View on CourtListener →

Doe v. Discord, Inc.
Issue: *Doe v. Discord, Inc.* asks whether 47 U.S.C. § 230(c)(1) immunizes a social media platform from state-law claims arising from the sexual exploitation of a minor user, when the plaintiff frames those claims not merely as failures to moderate content but as independent product-design defects, failure-to-warn violations, and misrepresentations about platform safety. The question is sharpened by the plaintiff's deliberate pleading strategy of recasting monitoring-and-blocking duties under product-liability and tort labels — an approach that has survived § 230 challenges in some courts — and by Discord's specific marketing representations about user safety directed at minors and their families.
Why It Matters: This ruling reinforces § 230's breadth in the Sixth Circuit by applying the *Jones* framework with particular rigor to a child-safety fact pattern, directly rejecting the product-liability recharacterization strategy that plaintiffs in platform-harm litigation have increasingly deployed to escape immunity. The decision supplies the Northern District of Ohio's most detailed analysis of the *Barnes* promissory-estoppel exception, drawing an explicit line between aspirational corporate safety messaging — which cannot anchor a surviving misrepresentation claim — and specific, individualized promises that could. It also creates a meaningful doctrinal gap with the Ninth Circuit's *Lemmon v. Snap* line, which permits negligent-design claims to proceed when a platform feature is treated as the defendant's own expressive conduct rather than third-party content moderation, a tension the Sixth Circuit has not yet resolved. The with-prejudice dismissal signals that courts applying *Jones* are unlikely to permit iterative re-pleading aimed at constructing a § 230-surviving theory after the gravamen of the complaint targets moderation.
View on CourtListener →

Glass, Lewis & Co., LLC v. Paxton
Issue: Whether the preliminary injunction enjoining the Texas Attorney General from "taking any action to enforce S.B. 2337" against Glass Lewis also bars enforcement of a Civil Investigative Demand issued under § 17.61 of the Texas Deceptive Trade Practices and Consumer Protection Act, a separate pre-existing consumer-protection statute.
Why It Matters: The motion tests the boundary between a targeted First Amendment injunction against a specific statute and a government agency's parallel investigative authority under a separate, long-standing consumer-protection law, with implications for how narrowly courts will construe injunctions restraining state enforcement actions against speakers such as proxy advisors.
View on CourtListener →

NetChoice v. Ellison
Issue: Whether Minnesota's proposed statutory restrictions on social media platform design features — including algorithmic amplification, engagement-based optimization, and "deceptive patterns" targeting minors — violate the First Amendment's prohibitions on compelled speech and forced hosting of third-party content.
Why It Matters: The report is significant as an exhibit because it reveals the state's own regulatory theory — that platform liability should attach to *design functions* rather than *content* — a distinction the AG explicitly frames as the constitutionally safer path in light of prior court decisions striking down content-based online speech laws, and which NetChoice is apparently contesting as insufficient to avoid First Amendment scrutiny.
View on CourtListener →

Media Matters for America v. Warren Paxton, Jr.
Issue: Whether the Texas Attorney General's investigation and civil investigative demand targeting Media Matters for America violated the First Amendment by constituting retaliatory government action in response to the organization's critical reporting about X (Twitter) and Elon Musk.
Why It Matters: This case directly applies Bantam Books and Backpage.com v. Dart jawboning doctrine to state attorney general investigations of media organizations covering technology platforms. It establishes that investigative demands issued in apparent retaliation for critical reporting about politically connected platform owners constitute actionable First Amendment violations, extending constitutional constraints on government use of regulatory process to chill platform-related journalism and reinforcing limits on government-platform coordination to suppress critical speech.
View on CourtListener →

Little v. Llano County
Issue: Insufficient text to determine. (This document is a New York state criminal appeal concerning a guilty plea, waiver of appeal rights, and suppression hearing forfeiture — it bears no relationship to the labeled case *Little v. Llano County* or to First Amendment law, Section 230, or AI/ML civil liability.)
Why It Matters: Insufficient text to determine. This decision addresses New York criminal procedure — specifically the validity of appeal waivers and suppression hearing forfeiture rules — and contains no analysis relevant to platform liability, First Amendment doctrine as applied to technology or public institutions, Section 230, or AI/ML regulation.
View on CourtListener →

Fletcher v. Facebook, Inc.
Issue: Whether Facebook operates as a state actor subject to First Amendment constraints when terminating user access, either because it constitutes a public forum or because it acted under government coercion or direction.
Why It Matters: This complaint illustrates the continued assertion of public forum and state action theories against platforms post-Packingham, despite contrary controlling authority in Manhattan Community Access v. Halleck and Prager University v. Google establishing that private platforms are not state actors. The government coercion allegations invoke the framework from Murthy v. Missouri and Bantam Books, but the complaint's broad, conclusory assertions about government "coercion" and "direction" without specific factual allegations illustrate the demanding causation and traceability standards Murthy established for jawboning claims.
View on CourtListener →

Trump Media & Technology Group Corp. v. De Moraes
Issue: Whether a Brazilian Supreme Court justice's orders requiring U.S.-based social media platforms to suspend user accounts and censor content accessible in the United States are enforceable under U.S. law, or whether they violate the First Amendment and conflict with the Communications Decency Act.
Why It Matters: This case presents a novel collision between foreign government content removal orders and U.S. platforms' First Amendment rights to resist compelled censorship. It could establish important precedent on whether U.S. courts will recognize foreign judicial orders as unconstitutional "jawboning" when they compel platforms to suppress lawful political speech accessible to American users, and may clarify the territorial limits of foreign content regulation authority over U.S.-based intermediaries.
View on CourtListener →

Students Engaged in Advancing Texas v. Ken Paxton, Attorney General, State of Texas
Issue: Whether Texas HB18, a state law regulating social media platforms' content moderation and targeted advertising practices directed at minors, violates the First Amendment and is preempted by Section 230.
Why It Matters: This appeal presents a post-Moody test case for state regulation of social media platforms' treatment of minors and targeted advertising practices. The Fifth Circuit's resolution will clarify how Moody's framework for evaluating must-carry and content moderation mandates applies to age-based restrictions and commercial speech regulations, and whether Section 230 preempts state laws targeting platform design features and advertising practices rather than third-party content liability.
View on CourtListener →

Why It Matters: This exhibit directly advances the question of whether AI-generated content that is sexually explicit and directed at a minor — produced autonomously by a large language model without direct human authorship — can ground product liability or speech tort claims against the developer, a question with significant implications for how courts will categorize AI outputs (as "speech" protected or immunized, or as a defective product) and for the scope of Section 230 immunity in cases involving AI-generated rather than third-party content.

View on CourtListener →

Why It Matters: This exhibit is significant because it provides direct documentary evidence that Character.AI's system both generated child-directed sexual content and possessed an internal moderation mechanism that identified the content as violative yet failed to halt generation — a factual record that could simultaneously support design defect claims (the safeguard was inadequate) and undermine any argument that harmful outputs were unforeseeable, potentially limiting the scope of any § 230 defense the platform might raise.

View on CourtListener →

Garcia v. Character Technologies, Inc.
Why It Matters: This complaint is significant because it represents a direct attempt to apply traditional products liability frameworks (design defect and failure to warn) to a generative AI system, treating the AI chatbot as a manufactured product rather than a publisher of third-party speech. It also proactively pleads around Section 230 immunity by characterizing the AI as a first-party content generator, a theory that, if credited by the court, could substantially expand tort exposure for AI developers.
View on CourtListener →

Why It Matters: This case directly tests whether traditional product liability frameworks — design defect and failure to warn — can be applied to a generative AI chatbot, potentially establishing that AI systems are "products" subject to strict liability rather than services entitled to speech-based or Section 230 protections. The complaint's explicit characterization of C.AI as an information content provider whose own-generated outputs caused harm, rather than a platform hosting third-party content, represents a deliberate litigation strategy to foreclose Section 230 immunity and could shape how courts classify AI-generated content for liability purposes.
View on CourtListener →