Browse Cases

143 results
Brief First Amendment Other

NetChoice v. Jason S. Miyares

District Court, E.D. Virginia · 2025-11-17 · Social media platforms (represented collectively by NetChoice trade association)

Issue: In *NetChoice v. Miyares*, Virginia's Attorney General argues that a federal district court improperly blocked enforcement of Virginia SB 854 — a law imposing default daily time limits on minors' social media use that parents can override — without first performing the application-by-application analysis that the Supreme Court's 2024 decision in *Moody v. NetChoice* requires before a law can be enjoined on its face. The brief also presses two substantive questions: whether SB 854's exclusion of platforms offering news, sports, and entertainment content is a content-neutral functional distinction or a subject-matter carveout that triggers heightened scrutiny, and whether a parental-override time limit survives intermediate scrutiny as a narrowly tailored child-protection measure.

Why It Matters: A wave of near-identical state laws restricting minors' access to social media is simultaneously moving through federal courts in Florida, Texas, and elsewhere, making the procedural and substantive arguments here broadly consequential. If the Fourth Circuit stays the injunction on *Moody* procedural grounds, it will signal to district courts nationwide that facial First Amendment challenges to platform-regulation statutes must clear a significantly higher bar before any injunction issues — a development that would reshape litigation strategy in dozens of pending cases. The content-neutrality argument carries equally high stakes: if a statute that facially names "news, sports, and entertainment" in its definitional exclusions can nonetheless be characterized as a neutral functional distinction, states gain a workable template for drafting minor-protection laws that avoid strict scrutiny. The brief's success or failure will also clarify how far *Free Speech Coalition v. Paxton*'s intermediate-scrutiny reasoning extends beyond age-verification-for-explicit-content contexts into the time-limit-with-parental-override framework Virginia has chosen.

View on CourtListener →
Exhibit First Amendment Other

Meta Platforms, Inc. v. Bonta

District Court, N.D. California · 2025-11-13 · Meta Platforms, Inc.

Issue: Whether social media platform defendants (Meta, TikTok, Snap, and Google/YouTube) are entitled to summary judgment on school districts' negligence, failure-to-warn, and public nuisance claims arising from the platforms' design features and algorithmic systems alleged to cause adolescent addiction and mental health harm.

Why It Matters: The California AG's use of the MDL summary judgment record as evidence in the *Bonta* preliminary injunction proceeding signals that state regulators are actively leveraging private litigation findings to resist platform efforts to enjoin state enforcement, potentially reinforcing the evidentiary foundation for state-level regulation of platform design and youth safety obligations.

View on CourtListener →
Brief First Amendment Section 230 Complaint

Amazon.com Services LLC v. Perplexity AI, Inc.

District Court, N.D. California · 2025-11-04 · Perplexity AI (AI search engine / generative AI platform)

Issue: Insufficient text to determine — the summons identifies Amazon.com Services LLC as plaintiff and Perplexity AI, Inc. as defendant but does not disclose the specific legal claims, statutes, or theories of liability asserted in the underlying complaint.

Why It Matters: Insufficient text to determine — the summons alone reveals only the identity of the parties and the forum, not the legal theories that would bear on platform liability, First Amendment doctrine, or AI regulation.

View on CourtListener →
First Amendment

Computer & Communications Industry Association v. Paxton

District Court, W.D. Texas · 3 filings
Amicus Brief
2025-10-16 · Other

Why It Matters: The brief advances two arguments worth watching across the broader wave of child online safety litigation. First, the conduct-regulation framing — that age-gating requirements target platform business practices rather than expressive content — is the central legal lever that could determine whether strict scrutiny applies at all; if it succeeds, it substantially lowers the bar for states defending these statutes. Second, the brief surfaces a genuinely open doctrinal question that *Moody v. NetChoice* (2024) has made more acute: whether laws that in practice restrict which apps minors can access implicate platform editorial discretion regardless of how neutrally they are drafted, a tension the brief does not address. The credibility of the "disinterested scholars" posture is also contestable given Thayer's drafting role, and opposing counsel should be expected to press that point in any response.

View on CourtListener →
2025-10-16 · Other

Why It Matters: This amicus brief advances a content-neutrality framework specifically designed to distinguish SB 2420 from statutes invalidated in *NetChoice v. Griffin* and *Brown v. Entertainment Merchants Association*, potentially offering courts a doctrinal path to uphold app-store child-safety regulations by classifying gatekeeping and contracting functions as commercial conduct rather than protected editorial discretion — a distinction that, if accepted, could broadly affect the constitutional viability of similar legislation in other states.

View on CourtListener →
2025-10-16 · Other

Why It Matters: This brief illustrates how states are attempting to circumvent First Amendment platform-autonomy challenges by framing minor-protective legislation as commercial contract regulation rather than speech regulation, a theory that, if accepted, could substantially limit the reach of *Moody v. NetChoice* in the context of app store transactions and AI product liability for minors.

View on CourtListener →
Brief AI Liability Section 230 First Amendment Complaint

D.A v. Roblox Corporation

District Court, N.D. California · 2025-10-16 · Roblox Corporation

Issue: Insufficient text to determine.

Why It Matters: Insufficient text to determine — the transmitted document consists solely of 109 repeated docket-page citations with no substantive content rendered.

View on CourtListener →
Brief Section 230 First Amendment Complaint

Doe v. Roblox Corporation

District Court, E.D. Arkansas · 2025-10-07 · Roblox Corporation

Issue: Whether Roblox Corporation and Discord, Inc. are liable under product liability (design defect), negligence, and fraud theories for injuries a minor suffered from sexual exploitation facilitated through their platforms, and whether those claims are barred by § 230(c)(1) of the Communications Decency Act.

Why It Matters: This complaint presents a direct test of whether product liability and fraud theories premised on platform design choices — rather than on Defendants' role as publishers of third-party content — can survive anticipated § 230 preemption arguments, potentially advancing the circuit split over whether design-defect claims targeting a platform's own architectural decisions fall outside § 230's immunity.

View on CourtListener →
Brief Section 230 First Amendment Other

IN RE: Roblox Corporation Child Sexual Exploitation and Assault Litigation

United States Judicial Panel on Multidistrict Litigation · 2025-09-18 · Roblox Corporation, Discord Inc., TikTok (ByteDance)

Issue: In *In re Roblox Corporation Child Sexual Exploitation and Assault Litigation*, Plaintiff Jaimee Seitz argues that her claims — arising from her child's fatal self-harm following grooming on Roblox and Discord — share sufficient common questions of fact with MDL No. 3166 to warrant transfer under 28 U.S.C. § 1407, even though the MDL was constituted around sexual exploitation and assault rather than coerced self-harm. The question is whether platform-level design defects and child-safety failures can serve as the unifying factual predicate for consolidation when the downstream harms across the MDL docket differ categorically in type.

Why It Matters: This filing tests whether the JPML will treat a platform's alleged safety-design failures as an outcome-agnostic consolidation anchor — a theory that, if accepted, could draw a broader category of technology-facilitated child harm cases into MDL proceedings that were constituted around sexual exploitation specifically. The brief's most contested move is its dismissal of Section 230 differentiation: the FOSTA-SESTA carve-out from § 230 immunity is available to most MDL No. 3166 plaintiffs but categorically inapplicable to Seitz, meaning the § 230 pretrial framework already developed in the MDL may not translate cleanly to her claims. If the Panel credits Defendants' taxonomy — distinguishing sexual exploitation from violent or extremist content facilitation — it could signal a meaningful limit on how broadly platform-identity can unify factually adjacent but legally divergent cases within a single MDL proceeding.

View on CourtListener →
AI Liability

P.J. v. Character Technologies, Inc.

District Court, N.D. New York · 4 filings
2025-09-16 · Other

Why It Matters: As part of the multi-district Character.AI litigation wave, this case contributes to the developing body of law on whether AI chatbot platforms face product liability and negligence exposure for harmful outputs to minors, and whether Section 230 and First Amendment defenses can shield AI developers from such claims — directly implicating the high-priority *Garcia* questions about AI-as-product and the constitutional status of AI-generated speech.

View on CourtListener →
2025-09-16 · Complaint

Why It Matters: This case is part of the emerging wave of AI chatbot product liability litigation testing whether traditional tort frameworks apply to conversational AI systems and their outputs. Along with *Garcia* and the Colorado *Peralta* case, it will help establish whether AI-generated content is treated as protected speech immunizing developers from liability, whether Section 230 applies to AI-generated outputs, and what duty of care AI developers owe to vulnerable user populations like minors.

View on CourtListener →
2025-09-16 · Complaint

Why It Matters: This case is significant because it extends the wave of product liability litigation targeting AI companion chatbots to a new federal district, naming both the AI developer and major technology investors/parent entities, which could advance questions about the scope of upstream developer and platform liability for AI-generated content causing harm to minors.

View on CourtListener →
2025-09-16 · Complaint

Why It Matters: The complaint's explicit allegation that C.AI is a "product" whose harmful outputs are attributable solely to Defendants' own design choices—not third-party content—represents a deliberate pleading strategy to circumvent Section 230 immunity and to frame AI-generated outputs as actionable product defects, potentially advancing the theory that generative AI chatbots are subject to traditional products liability doctrine in a way that could set precedent for how courts classify and regulate AI systems.

View on CourtListener →
AI Liability

Montoya v. Character Technologies, Inc.

District Court, D. Colorado · 6 filings
2025-09-15 · Complaint

Why It Matters: This case is part of a multi-district wave of AI chatbot liability litigation against Character.AI that is actively developing the law on whether AI-generated conversational output triggers product liability exposure, whether Section 230 shields AI developers from design-defect claims, and whether the First Amendment protects AI chatbot outputs from tort liability — three of the highest-priority open questions in this area. A second Colorado filing against Character.AI (*Peralta*) makes this case a direct parallel to track for any doctrinal divergence between districts or judges.

View on CourtListener →
2025-09-15 · Complaint

Why It Matters: As a second Character.AI case filed in the District of Colorado (alongside *Peralta*), Montoya contributes to the developing multi-district litigation landscape around AI chatbot liability and may implicate consolidation, coordinated briefing, or bellwether status on the core questions left open after *Garcia* — particularly whether AI chatbot platforms are "products" subject to products liability doctrine, whether Section 230 bars design-defect claims targeting the platform's own architectural choices, and whether AI-generated outputs constitute First Amendment-protected speech at the pleading stage.

View on CourtListener →
2025-09-15 · Complaint

Why It Matters: As part of the expanding Character.AI litigation wave, this case contributes to the developing body of law on whether AI chatbot platforms face tort liability for harmful outputs — directly implicating the unresolved questions of whether Section 230 immunizes AI-generated content and whether the First Amendment protects such output from liability, two of the highest-priority open questions in this area.

View on CourtListener →
2025-09-15 · Complaint

Why It Matters: As part of the rapidly expanding litigation against Character.AI across multiple federal districts, this case is significant for tracking how district courts outside the Middle District of Florida handle product liability, negligence, and Section 230 defenses in AI chatbot harm cases — and whether the *Garcia* framework (allowing design defect and failure-to-warn claims to survive at the pleading stage) is adopted, modified, or rejected in other jurisdictions. A second filing in the District of Colorado (alongside *Peralta*) may also signal plaintiff-side forum strategy and affect consolidation or bellwether dynamics in this litigation.

View on CourtListener →
2025-09-15 · Complaint

Why It Matters: This case is part of the expanding wave of Character.AI wrongful death litigation and directly implicates the highest-priority open questions in this area — specifically, whether AI chatbot platforms can be held liable as "products" under design-defect and failure-to-warn theories, and whether Section 230 or the First Amendment bars such claims at the pleading stage. The addition of Alphabet/Google as defendants may raise novel questions about investor or parent-company liability in AI tort litigation, and the Colorado forum creates another potential circuit-level data point distinct from the Middle District of Florida's *Garcia* ruling.

View on CourtListener →
2025-09-15 · Complaint

Why It Matters: This complaint expands the geographic and jurisdictional scope of AI chatbot product liability litigation against Character.AI, potentially developing a body of district court precedent on whether AI conversational systems constitute "products" subject to traditional tort liability and whether Section 230 or First Amendment defenses bar such claims. The D. Colorado venue may produce independent analysis on the *Garcia* framework, particularly on whether AI-generated outputs qualify as protected speech at the motion-to-dismiss stage and whether design-defect theories survive Section 230 immunity arguments.

View on CourtListener →
Brief AI Liability Section 230 First Amendment Other

E.S. v. Character Technologies, Inc.

District Court, D. Colorado · 2025-09-15 · Character.AI (Character Technologies, Inc.)

Issue: Whether the court should stay proceedings in a product liability and negligence action against an AI chatbot developer — premised on design defect, failure to warn, and related tort theories — pending resolution of potentially dispositive threshold issues, including likely Section 230 immunity and First Amendment defenses.

Why It Matters: Insufficient text to determine the precise legal arguments advanced, but the motion signals that defendants in AI chatbot liability cases are pursuing early procedural mechanisms — such as stays — to forestall merits litigation, a tactic that may reflect a broader defense strategy of prioritizing threshold immunity questions (e.g., § 230, First Amendment) before engaging costly discovery in AI tort suits.

View on CourtListener →