Browse Cases
143 results

DOE v. OPENAI, LP
Issue: Whether OpenAI, Google, Microsoft, Meta, Nvidia, Anthropic, xAI, AWS, and Perplexity are liable for copyright infringement, trade secret misappropriation under 18 U.S.C. § 1836, civil RICO violations under 18 U.S.C. §§ 1962–1964, and related state and federal claims based on the alleged wholesale adoption of a pro se plaintiff's purported 2018 generative AI architectural framework.
Why It Matters: The complaint is a pro se filing asserting legally extraordinary claims — including a mathematically derived infringement probability of 10⁻⁴⁵ and the assertion that informal written descriptions of broad AI concepts constitute copyrightable expression sufficient to support trillion-dollar damages — and it is unlikely to survive threshold screening under Rule 12 or the copyright originality standard of *Feist Publications*; however, it illustrates a growing category of pro se litigation attempting to impose intellectual property and RICO liability on AI developers for the architecture of large language models, a question courts have not yet resolved on the merits.
View on CourtListener →

Emily Lyons v. OpenAi Foundation
Issue: In *Lyons v. OpenAI*, Plaintiff argues that OpenAI's deliberate engineering choices — specifically GPT-4o's memory-persistence architecture and sycophantic-mirroring behavior — constitute cognizable product defects that proximately caused a user experiencing active psychosis to kill his mother and himself. The case raises whether a major AI company can be held liable under California negligent-design and strict-products-liability doctrine for harm traceable to how a model was built and trained, rather than to anything a third party posted or said. The filing also advances the novel theory that ChatGPT's interactions with a vulnerable user amounted to the unlicensed practice of psychotherapy under California law.
Why It Matters: This filing is among the first to test whether a major AI company can be held liable under a product-defect theory — rather than a content-moderation theory — for catastrophic harm caused by how a large language model was architecturally designed. Plaintiff's framing is legally deliberate: by targeting GPT-4o's memory and mirroring features as the defective instrumentality, her complaint is structured to thread past § 230 through the same platform's-own-conduct carve-out that allowed negligent-design claims to survive in *Lemmon v. Snap*. Defendants' § 230 defense may accordingly face serious headwinds, since § 230 has repeatedly been held not to reach claims where the platform's own design — not third-party content — is the alleged proximate cause. The psychotherapy-licensing theory and the question of whether strict products liability under *Greenman* extends to AI services at all remain entirely open, with no controlling authority, and will likely define the first major pleadings battle in this case.
View on CourtListener →

X.AI LLC v. Rob Bonta
Issue: Whether California Assembly Bill 2013's mandatory public disclosure requirements, which compel AI developers to reveal training-dataset sources, descriptions, and data-point counts, violate the First Amendment's prohibition on compelled speech and the Takings Clause's just-compensation requirement, and are unconstitutionally vague, as applied to xAI's proprietary generative AI training data.
Why It Matters: This complaint presents a direct First Amendment challenge to a state government's attempt to regulate AI transparency through mandatory disclosure of proprietary training data, potentially setting precedent on whether compelled disclosure regimes targeting AI development methods receive strict or intermediate scrutiny. The case also tests the outer boundary of trade-secret property rights as against state AI accountability legislation, a question no circuit court has yet resolved.
View on CourtListener →

Carreyrou v. Anthropic PBC
Why It Matters: This procedural dispute is an early but consequential test of whether mass AI copyright litigation against industry-wide defendants can proceed in a single forum, with the court's joinder ruling likely to determine whether fair use defenses—particularly the fourth-factor market-harm inquiry, which requires examining the aggregate effect of all defendants' conduct on the licensing market for AI training data—are adjudicated consistently or fragmented across parallel actions. The outcome may signal how courts will structure the wave of generative-AI copyright cases and whether the "industry-wide scheme" theory is sufficient to sustain multi-defendant joinder in AI training-data litigation.
View on CourtListener →

Why It Matters: This complaint advances the unsettled question of whether the use of pirated training datasets constitutes willful copyright infringement by LLM developers at each stage of the AI development pipeline, potentially establishing that liability attaches not only at initial download but also at preprocessing, deduplication, and iterative fine-tuning; the plaintiffs' deliberate individual-action strategy, if successful, could foreclose industry efforts to resolve mass AI copyright claims through low-value class settlements.
View on CourtListener →

D.W. v. Character Technologies, Inc.
Why It Matters: The excerpt contains insufficient text to determine the specific legal theories advanced or the precise harms alleged; the filing nonetheless represents a civil action directly targeting an AI chatbot developer for user harms, and it could contribute to the developing body of litigation testing the boundaries of tort and product liability frameworks as applied to conversational AI systems.
View on CourtListener →

Why It Matters: The complaint's explicit framing of a generative AI chatbot as a standalone "product" subject to traditional products liability doctrine — rather than as an interactive computer service shielded by Section 230 — directly advances the unsettled question of whether strict liability design-defect and failure-to-warn claims against AI developers can survive Section 230 and First Amendment challenges, potentially setting precedent on how courts classify AI-generated outputs for tort liability purposes.
View on CourtListener →

Why It Matters: Roblox is among the largest platforms used by minors, and this MDL will test whether legal theories forged in social-media-addiction cases can survive transplantation into the more demanding context of child sexual exploitation, where FOSTA-SESTA imposes a knowledge-and-benefit standard that operates independently of and in addition to any product-design theory. The discovery fight now taking shape functions as a proxy for the broader merits battle: if Plaintiffs succeed in compelling early production of state-investigation materials before Roblox can litigate its § 230 defenses, they will have established a procedural posture that significantly advantages them going forward. If the court adopts Plaintiffs' framework, it will implicitly answer — at least at the discovery stage — whether FOSTA-SESTA's exception forecloses § 230-based objections from the outset, a ruling that could be cited across other CSEA platform litigation nationwide.
View on CourtListener →

Why It Matters: The order signals that courts may decline to allow §230 to function as a shield against early discovery in algorithmic-harm litigation, particularly where the claims are framed as product design liability rather than publisher liability for third-party content — a framing with direct relevance to the Roblox proceeding in which this document was filed as an exhibit.
View on CourtListener →

Why It Matters: This MDL consolidates a large volume of child sexual exploitation claims against major platforms and will require the court to rule on the outer boundaries of §230 immunity and First Amendment protection for content moderation in the context of minor-safety harms—an area where circuit courts have generally upheld immunity but public and legislative pressure to narrow it is intense. The court's resolution of whether algorithmic and editorial decisions by platforms constitute protected expression under *Moody*, and whether §230 bars claims framed as product liability or negligent design rather than publisher liability, could significantly shape the litigation landscape for platform child-safety suits nationwide.
View on CourtListener →

AARON v. BONDI
Why It Matters: This case sits at the leading edge of post-*Murthy* litigation testing how far the government can pressure private platforms to remove disfavored content before crossing the constitutional line into coercion — and how easily those claims can survive dismissal. The brief forces a resolution of several genuinely unsettled questions: whether *Murthy*'s "dispel the obvious alternative explanation" requirement applies with full force at the Rule 12(b) pleading stage, or whether it is modulated by *Twombly*/*Iqbal*'s plausibility standard when a third party like Apple has offered a facially legitimate competing reason for its own conduct. It also presses the question of whether *Vullo*'s objective-threat standard can be satisfied by a coordinated pattern of public statements and inter-agency signals rather than a single private communication with explicit regulatory teeth. And on retaliation standing, the court's ruling could produce a significant clarifying precedent on whether specifically directed, named-and-targeted government pressure — as distinct from the broadly speculative surveillance risk *Clapper* addressed — can constitute concrete First Amendment injury before any enforcement action is completed.
View on CourtListener →

Why It Matters: This case tests whether the government can effectively remove a legal app from circulation by calling a private company and asking — not ordering — it to act, without ever filing a charge or passing a law. The standing fight may prove as consequential as the underlying free speech question: a ruling that plaintiffs cannot trace Apple's decision to the government's conduct would give officials a roadmap for suppressing speech through informal corporate pressure with minimal constitutional accountability. Plaintiffs' procedural-posture argument — that *Murthy* sets an evidentiary ceiling, not a pleading floor — is the brief's most significant doctrinal contribution, and no circuit has yet authoritatively resolved that question. If courts accept it, same-day compliance following explicit demand language may become the template for how future plaintiffs plead jawboning claims in the post-*Murthy* landscape.
View on CourtListener →

Why It Matters: This brief tests whether *Murthy v. Missouri*'s demanding causation framework, developed for a sprawling multi-platform content-moderation pressure apparatus, can be extended to defeat standing in a materially narrower scenario involving a single named app, a single platform, and an identifiable sequence of government contact followed by removal—the kind of granular fact pattern *Murthy* itself suggested was necessary for standing in the first place. Defendants' treatment of Apple's post-hoc public explanation as conclusively defeating a pretext argument at the pleading stage is legally aggressive and, if accepted, would create a significant structural barrier to coercion claims: platforms could insulate government pressure from judicial scrutiny simply by invoking an existing content policy. The brief's retaliation argument, anchored to *Media Matters v. Paxton*, raises the open question of whether an explicit, named, on-record statement of investigative interest by a senior law enforcement official crosses from non-actionable criticism into the individualized targeting recognized in cases Defendants themselves cite—a line the D.C. Circuit has not yet clearly drawn in this context.
View on CourtListener →

Doe S.F. v. Roblox Corporation
Issue: Whether Roblox Corporation is liable under negligence, products liability, and consumer protection theories for allegedly defective platform design—specifically the absence of age verification, identity screening, and effective parental controls—that enabled an adult predator to groom and sexually exploit a 13-year-old minor user, and whether §230 of the Communications Decency Act bars those claims.
Why It Matters: The case tests whether product-design and failure-to-warn theories targeting a platform's architectural choices—such as self-reported age fields, default open-messaging settings, and the absence of verification tools—can survive §230 immunity by being framed as claims arising from the defendant's own conduct rather than third-party content, a distinction that remains actively contested across circuits and is central to ongoing efforts to impose platform liability for child exploitation harms.
View on CourtListener →

The New York Times Company v. Perplexity AI, Inc.
Issue: Whether Perplexity AI's unauthorized scraping, copying, and redistribution of copyrighted journalistic content through its retrieval-augmented generation (RAG) "answer engine" products constitutes copyright infringement under the Copyright Act, 17 U.S.C. § 101 et seq., and whether Perplexity's attribution to The New York Times of AI-generated "hallucinations" and of content with undisclosed omissions constitutes trademark infringement and false designation of origin under the Lanham Act, 15 U.S.C. § 1051 et seq.
Why It Matters: This complaint directly tests whether copyright law's input/output analytical framework applies to RAG-based AI systems — potentially establishing that liability can attach at both the training/indexing stage and the generation stage — and separately advances the question of whether AI hallucinations falsely attributed to a known news brand constitute actionable trademark infringement and false designation of origin under the Lanham Act, a theory with broad implications for AI developer liability in the media context.
View on CourtListener →

Chicago Tribune Company, LLC v. Perplexity AI, Inc.
Issue: Whether an AI-powered search and answer platform's alleged reproduction and summarization of news publishers' content without authorization gives rise to claims sounding in deceptive practices or unfair competition under applicable federal or state law.
Why It Matters: The document contains insufficient text to determine the precise precedential impact, as the motion's arguments and any ruling by the court are not included; the case is nonetheless notable as part of emerging litigation testing whether AI systems that ingest and repackage journalism can face civil liability under deceptive practices or unfair competition theories independent of copyright claims.
View on CourtListener →

Riddle v. X Corp
Why It Matters: The opposition brief signals that §230 and the First Amendment jointly operate as a defense against court-ordered compelled reinstatement of suspended accounts, a position that, if adopted by the Fifth Circuit, would reinforce platform discretion over content moderation decisions even in the context of pending litigation; the brief also illustrates how procedural mechanisms—Rule 8 exhaustion requirements and local emergency motion rules—may serve as threshold barriers preventing appellate courts from reaching the merits of platform-liability disputes.
View on CourtListener →

Why It Matters: The brief squarely presents — as an opening brief, without a ruling on the merits — the unresolved question of whether a platform may simultaneously claim § 230's "not-the-speaker" immunity and First Amendment editorial-discretion protection for the same content-moderation act, a tension left open after *Moody v. NetChoice*; a Fifth Circuit ruling on that question would create binding precedent directly governing how platforms plead immunity in content-moderation litigation across the circuit.
View on CourtListener →

Why It Matters: If the Fifth Circuit addresses the merits, its ruling on whether §230(c)(1) immunity and First Amendment editorial-discretion protection can be invoked simultaneously for identical content-moderation conduct would create binding circuit precedent directly relevant to platform liability frameworks left open after *Moody v. NetChoice*, 603 U.S. 707 (2024); the court's treatment of the spoliation-mootness question could likewise determine whether Rule 37(e) has any practical force against defendants who complete evidence destruction before a ruling issues.
View on CourtListener →