Browse Cases

216 results
First Amendment

Rosado v. Bondi

District Court, N.D. Illinois · 3 filings
2026-02-11 · Other

Why It Matters: The language the court ultimately selects will determine whether government officials can continue the kinds of informal, off-the-record pressure on social media and app platforms that have become routine tools of regulatory influence — making this order-drafting dispute substantively significant despite its procedural form. The competing proposals crystallize two genuinely different readings of *Vullo*: one treating the Supreme Court's multi-verb coercion framework as directly operative, the other reading *Murthy*'s more cautious tone as a narrowing gloss, despite *Murthy* having been resolved on standing grounds without reaching the merits. Whichever order the court adopts is likely to serve as a template — or a foil — for injunctions in future government-platform coercion cases, and the unresolved interaction between *Vullo* and *Murthy* on this precise drafting question is one that courts across the country will eventually have to confront.

View on CourtListener →
2026-02-11 · Other

Why It Matters: The motion itself has no bearing on the merits of the underlying First Amendment coercion claims, but it signals that defendants may be positioning for appellate review of the preliminary injunction — a development that could significantly delay the case if the Solicitor General authorizes an appeal. The court's ruling will reveal how much deference it is willing to extend to the government's preferred litigation pace at this early stage. Defendants' reliance on *Clinton v. Jones* is also worth watching: that decision is more accurately a refusal to grant a stay than an endorsement of one, meaning plaintiffs can deploy the same citation in opposition, and how the court reads it may foreshadow its broader approach to managing this case.

View on CourtListener →
2026-02-11 · Preliminary Injunction

Why It Matters: This ruling gives content creators and publishers a concrete legal framework for challenging government pressure campaigns against social media platforms — a form of censorship that has been notoriously difficult to litigate because plaintiffs typically cannot prove a platform removed content *because of* the government rather than for its own independent reasons. The court's three-part convergence test — prior platform approval, swift removal following government contact, and officials publicly claiming credit — transforms an abstract constitutional protection into a workable standing roadmap for future jawboning plaintiffs. The ruling is nonetheless vulnerable on appeal: it sits in direct tension with the Supreme Court's causation skepticism in *Murthy v. Missouri* (2024), and the Seventh Circuit may require more granular, plaintiff-specific proof of coercion than this court's convergence framework demands. Critical questions also remain open, including the precise scope of the forthcoming injunction order and whether official public statements urging platform action constitute protected government speech rather than actionable coercion.

View on CourtListener →
Brief Section 230 Motion to Dismiss

Thayer v. Doximity, Inc.

District Court, N.D. California · 2026-02-09 · Doximity, Inc.

Issue: In *Thayer v. Doximity, Inc.*, Doximity argues that displaying a non-registered physician's publicly available credentials in an unclaimed professional profile cannot constitute misappropriation of name or likeness — under either California common law or Cal. Civ. Code § 3344 — because the use is incidental rather than prominent, and because a non-registered user's profile is structurally excluded from the platform's revenue stream. The motion also asks whether Section 230(c)(1) independently immunizes a platform that assembles such profiles from third-party-sourced data, even when that assembly serves a commercially motivated subscription model.

Why It Matters: This motion asks a federal court to decide, before any discovery, whether companies that build products around aggregated professional identities can use the incidental-use doctrine and Section 230 to foreclose right-of-publicity and unjust enrichment claims at the pleading stage — effectively insulating the commercial architecture of their platforms from factual scrutiny. The Section 230 argument is particularly consequential: if Hon. Thompson rejects it even in passing, that ruling would add to a developing body of law on whether identity-as-product business models are distinguishable from passive hosting for immunity purposes. The treatment of incidental use as a pure legal question carries its own stakes, since resolving it at 12(b)(6) prevents plaintiffs from conducting discovery into how a platform actually attributes revenue to unregistered profiles — an issue that will matter to every professional-network operator running similar unclaimed-profile features.

View on CourtListener →
Brief First Amendment Section 230 Complaint

Netchoice v. Wilson

District Court, D. South Carolina · 2026-02-09 · NetChoice (trade association representing social media platforms and internet companies)

Issue: Whether the South Carolina Age-Appropriate Code Design Act's requirements that covered online services exercise "reasonable care" to prevent harms to minors, disable certain engagement and discovery features, screen third-party advertising, and submit to third-party audits violate the First Amendment's prohibitions on content-based speech restrictions and compelled speech, are preempted by §230(c)(1) of the Communications Decency Act and COPPA, and violate the Commerce Clause and Due Process Clause.

Why It Matters: This complaint extends a growing line of coordinated First Amendment challenges by NetChoice to state-level online minor-protection laws, directly invoking *Moody v. NetChoice* and Fourth Circuit precedent to argue that platform curation and algorithmic editorial judgment are categorically protected expression. If the court adopts that view, it would significantly constrain states' ability to regulate platform design features affecting speech.

View on CourtListener →
Brief First Amendment Other

NEWSGUARD TECHNOLOGIES v. FEDERAL TRADE COMMISSION

District Court, District of Columbia · 2026-02-06 · NewsGuard Technologies, Inc. (news rating/brand safety service)

Issue: In *NewsGuard Technologies v. FTC*, NewsGuard argues that the FTC's voluntary withdrawal of a Civil Investigative Demand did not moot its First Amendment and APA claims because the agency simultaneously obtained consent decrees in a separate antitrust proceeding that condition major advertising-agency mergers on prohibitions against using NewsGuard's services. The non-obvious dimension is that the alleged suppression did not occur through a direct regulatory order targeting NewsGuard — it occurred through merger approval conditions negotiated with large corporate third parties who had independent counsel and agreed to the terms. NewsGuard contends this amounts to the same unconstitutional government coercion of private actors to silence a disfavored editorial voice, only now packaged inside a judicially approved antitrust settlement.

Why It Matters: This case sits at an unusual intersection of antitrust enforcement, First Amendment press freedom, and administrative law, and the core constitutional question it raises has broad implications: whether the federal government can effectively blacklist a journalistic organization from its market by embedding speech-adjacent conditions inside merger consent decrees, insulating that pressure from First Amendment scrutiny through the procedural form of a negotiated antitrust settlement. The most doctrinally significant move in this filing is the attempt to extend *Vullo*'s jawboning framework to consent decrees negotiated in arm's-length antitrust proceedings — a novel application that existing precedent neither clearly supports nor forecloses. If a court ultimately accepts NewsGuard's framing, it could significantly constrain the government's ability to include speech-adjacent conditions in antitrust settlements going forward, affecting how merger review is conducted whenever the target industry touches the flow of information or advertising.

View on CourtListener →
Exhibit AI Liability Section 230 First Amendment Amended Complaint

DOE v. X.AI Corp.

District Court, N.D. California · 2026-01-23 · xAI Corp. / xAI LLC (Grok)

Issue: In *Doe v. X.AI Corp.*, plaintiffs argue that xAI Corp. and xAI LLC are strictly liable, negligent, and federally liable for designing and distributing Grok — a generative AI model — with deliberately disabled safety controls that made production of non-consensual sexualized deepfake imagery, including of minors, a foreseeable and commercially exploited outcome. The case raises the non-obvious question of whether a generative AI developer that markets permissive safety defaults as a feature, and actively disseminates model outputs through its own accounts, can claim the neutral-tool protections that have historically shielded platforms from liability for third-party content.

Why It Matters: This complaint is worth watching because it simultaneously deploys three distinct strategies to avoid Section 230 immunity against a generative AI defendant — each pressing a genuinely open question in current law. The "active producer" framing, which treats xAI's own dissemination of Grok outputs as content creation rather than tool provision, tests the outer boundary of the information content provider carve-out in a novel AI context. The product design theory — targeting the model's default-permissive architecture rather than any specific user-generated output — follows the approach that divided courts in *Lemmon v. Snap* and related cases, and could force courts to decide for the first time whether a large image-generation model is a "product" subject to risk-utility balancing or a "service" governed only by negligence. The § 1595 sex trafficking theory applied to AI-generated synthetic imagery with no human trafficking victim is legally untested, and a ruling on that claim's viability under FOSTA-SESTA's carve-out would have broad implications for how federal sex trafficking law applies to generative AI systems.

View on CourtListener →
AI Liability

St. Clair v. X.AI Holdings Corp.

District Court, S.D. New York · 3 filings
2026-01-15 · Complaint

Why It Matters: This complaint is an early test of whether product liability doctrine—rather than Section 230 or First Amendment defenses—can be applied directly to an AI image-generation system, framing the chatbot itself as a defective product whose foreseeable output is nonconsensual intimate imagery; if courts allow strict liability claims to proceed on this theory, it could establish a significant avenue for AI developer liability that sidesteps traditional platform immunity arguments.

View on CourtListener →
2026-01-15 · Opposition to Motion for Summary Judgment

Why It Matters: This case presents an early and direct test of whether §230 immunity extends to an AI-powered generative image tool when harmful content is produced by third-party user prompts—a question with significant implications for how courts will treat AI platforms under existing intermediary liability doctrine and whether the "neutral tools" framework articulated in *Herrick v. Grindr* applies to generative AI systems.

View on CourtListener →
2026-01-15 · Motion for Temporary Restraining Order

Why It Matters: This motion directly tests whether Section 230 immunity extends to content affirmatively generated by an AI system — as opposed to merely hosted third-party content — a question with broad implications for AI developer liability; if the court accepts plaintiff's framing that AI-generated output constitutes the developer's own content, it could establish a significant precedent foreclosing Section 230 as a defense for generative AI systems and accelerating civil liability exposure for AI developers under existing tort and statutory frameworks.

View on CourtListener →
Brief Section 230 Motion to Dismiss

Welkin v. Meta Platforms, Inc.

District Court, N.D. Georgia · 2026-01-12 · Meta Platforms, Inc. (Facebook)

Issue: Whether §230(c) of the Communications Decency Act immunizes Meta from an IIED claim and request for injunctive relief arising from Meta's alleged failure to remove a third-party Facebook impersonation profile whose content Iranian authorities reportedly used as evidence in criminal proceedings against the plaintiff's mother.

Why It Matters: The motion squarely tests whether §230(c) shields a platform from tort liability and injunctive relief when a plaintiff alleges harm flowing not from the platform's affirmative conduct but from its editorial decision to only partially remove third-party content flagged as an impersonation account, potentially reinforcing the breadth of publisher immunity for content-moderation decisions short of complete removal.

View on CourtListener →
First Amendment

Mayday Health v. Jackley

District Court, S.D. New York · 2 filings
2026-01-06 · Other

Why It Matters: The case advances the "jawboning" doctrine by testing the limits of state attorney general authority to use cease-and-desist letters and retaliatory enforcement actions to suppress politically disfavored but constitutionally protected online speech, and it raises a significant question about whether *Younger* abstention can shield such proceedings from federal judicial review when the proceedings are allegedly pretextual.

View on CourtListener →
2026-01-06 · Complaint

Why It Matters: The case tests whether a state attorney general may use a consumer-protection enforcement threat as a mechanism to suppress a noncommercial publisher's truthful speech about out-of-state legal services — squarely implicating *Bigelow v. Virginia*'s protection for cross-border reproductive-health information. The complaint also presents a notable pleading-stage invocation of § 230(c)(1) as a shield against liability predicated on a website's hyperlinks to third-party content, potentially advancing the question of how § 230 interacts with state regulatory (rather than private civil) actions targeting a platform's linking choices.

View on CourtListener →
Opinion Section 230 First Amendment Appellate Opinion

SNAP, INC. v. THE EIGHTH JUDICIAL DISTRICT COURT OF THE STATE OF NEVADA

Nevada Supreme Court · 2026 · Snap, Inc. (Snapchat)

Issue: Whether Section 230 of the Communications Decency Act bars the State of Nevada's claims under the Nevada Deceptive Trade Practices Act (NDTPA), and whether the First Amendment precludes the State's negligence claim against Snapchat.

Why It Matters: This decision represents a significant development in the intersection of Section 230 immunity, First Amendment protection, and state enforcement actions against social media platforms. The court's conclusion that negligence claims can proceed despite First Amendment concerns, while consumer protection claims remain Section 230-barred, suggests courts may be creating new pathways for platform liability through traditional tort theories that avoid Section 230's broad publisher immunity shield—particularly relevant given the Garcia v. Character.AI framework for product liability claims against technology platforms.

View on CourtListener →
Brief Section 230 Motion to Dismiss

Ridley v. Sweepsteaks Ltd.

District Court, E.D. Virginia · 2025-12-31 · Kick Streaming Pty Ltd.

Issue: In *Ridley v. Sweepsteaks Ltd.*, defendant Kick Streaming Pty Ltd. argues that an Australian livestreaming company cannot be haled into a Virginia court on the basis that its platform is globally accessible, that Section 230 of the Communications Decency Act immunizes it from liability for promotional content created and broadcast by third-party celebrity streamers, and that RICO and Virginia Consumer Protection Act claims fail where no predicate act or misrepresentation is specifically attributable to Kick. The non-obvious tension is whether a platform that allegedly structured and funded eight-figure contracts with U.S. celebrities for the express purpose of directing American audiences to a gambling site is a passive host at all — or something closer to a co-architect of the promotional scheme.

Why It Matters: Kick's motion presents one of the clearest judicial tests yet of whether a streaming platform that pays celebrities to advertise a specific third-party service crosses from passive host into co-developer of commercial deception — a question that would strip Section 230 immunity under the *Roommates.com* material-contribution framework but remains unresolved in the Fourth Circuit. The personal jurisdiction argument also raises an unsettled question about how *Walden*'s defendant-focused purposeful-availment analysis applies when a platform's commercial targeting of U.S. consumers is executed through third-party human agents rather than the platform's own direct contacts. If a court finds the passive-host analogy inapt on these facts, this case could become a vehicle for the Fourth Circuit to address paid promotional contracting as a Section 230 immunity disqualifier — a development with significant consequences for influencer-driven marketing across major streaming platforms.

View on CourtListener →
AI Liability

DOE v. OPENAI, LP

District Court, District of Columbia · 2 filings
2025-12-30 · Other

Why It Matters: Not determinable from the available excerpt. The document contains only page-header metadata (case number, document number, and page citations for all 28 pages of Document 10 in Case 1:25-cv-04564) and none of the filing's substantive text, so its allegations, arguments, and procedural history cannot be summarized.

View on CourtListener →
2025-12-30 · Complaint

Why It Matters: The complaint is a pro se filing asserting legally extraordinary claims — including a mathematically derived infringement probability of 10⁻⁴⁵ and the assertion that informal written descriptions of broad AI concepts constitute copyrightable expression sufficient to support trillion-dollar damages — and it is unlikely to survive threshold screening under Rule 12 or the copyright originality standard of *Feist Publications*; however, it illustrates a growing category of pro se litigation attempting to impose intellectual property and RICO liability on AI developers for the architecture of large language models, a question courts have not yet resolved on the merits.

View on CourtListener →
AI Liability

Emily Lyons v. OpenAi Foundation

District Court, N.D. California · 2 filings
2025-12-29 · Other

Why It Matters: This filing is among the first to test whether a major AI company can be held liable under a product-defect theory — rather than a content-moderation theory — for catastrophic harm caused by how a large language model was architecturally designed. Plaintiff's framing is legally deliberate: by targeting GPT-4o's memory and mirroring features as the defective instrumentality, the complaint is structured to thread past § 230 using the same platform's-own-conduct carve-out that allowed negligent-design claims to survive in *Lemmon v. Snap*. Defendants' § 230 defense may face those same headwinds, since § 230 has repeatedly been held not to reach claims where the platform's own design — not third-party content — is the alleged proximate cause. The psychotherapy-licensing theory and the question of whether strict products liability under *Greenman* extends to AI services at all remain entirely open, with no controlling authority, and will likely define the first major pleadings battle in this case.

View on CourtListener →
2025-12-29 · Motion to Dismiss

Why It Matters: This motion presents an early procedural test of whether federal courts will decline jurisdiction over AI product liability suits in favor of consolidating such claims in state court mass-tort coordination proceedings, potentially channeling the emerging wave of ChatGPT-related personal injury litigation into California's JCCP framework rather than federal court; the outcome may also signal how courts will manage the proliferation of parallel AI liability actions filed by different plaintiffs arising from the same underlying AI-assisted harm.

View on CourtListener →
Brief First Amendment AI Liability Complaint

X.AI LLC v. Rob Bonta

District Court, C.D. California · 2025-12-29 · X.AI (xAI Corp., operator of Grok AI system)

Issue: Whether California Assembly Bill 2013's mandatory public disclosure requirements compelling AI developers to reveal training dataset sources, descriptions, and data-point counts violate the First Amendment's prohibition on compelled speech, the Takings Clause's just-compensation requirement, and the void-for-vagueness doctrine as applied to xAI's proprietary generative AI training data.

Why It Matters: This complaint presents a direct First Amendment challenge to a state government's attempt to regulate AI transparency through mandatory disclosure of proprietary training data, potentially setting precedent on whether compelled disclosure regimes targeting AI development methods receive strict or intermediate scrutiny. The case also tests the outer boundary of trade-secret property rights as against state AI accountability legislation, a question no circuit court has yet resolved.

View on CourtListener →