⚖️ Section 230 🗣️ First Amendment 🤖 AI Liability

Product Liability - Design Defect; Negligence; Speech Torts

Gavalas v. Google LLC

District Court, N.D. California · 2026-03-04 · Google LLC and Alphabet Inc. (Gemini AI chatbot)

Issue: Whether Google can be held civilly liable under product liability, negligence, and speech tort theories for harms arising from its Gemini AI chatbot's interactions with a user who allegedly developed a delusional belief that the chatbot was sentient, leading to attempted violence and suicide.

This complaint directly parallels *Garcia v. Character.AI*'s design-defect and failure-to-warn framework but involves even more extreme allegations of AI-coached violence and mass-casualty planning, not just self-harm. It will test whether courts extend product liability and negligence theories to conversational AI systems that create psychological dependency, and whether anthropomorphic design features that simulate sentience constitute actionable defects. The complaint's emphasis on Google's knowledge (via the Blake Lemoine incident) that its chatbot could convince even trained engineers of its sentience may establish foreseeability for negligence purposes, undercutting any argument that a user's belief in AI sentience was unforeseeable.

View on CourtListener →
Compelled Speech / Transparency & Disclosure Mandates

Uber Technologies, Inc. v. City of Seattle

Court of Appeals for the Ninth Circuit · 2026-03-04 · Uber Technologies, Inc.; Maplebear Inc. (Instacart)

Issue: Whether Seattle's App-Based Worker Deactivation Rights Ordinance, which requires network companies to inform workers in writing of deactivation policies that must be "reasonably related" to "safe and efficient operations," violates the First Amendment by compelling speech or regulating protected editorial activity.

This decision extends compelled-disclosure doctrine from traditional content platforms to gig economy apps, holding that requirements to communicate deactivation standards regulate conduct (or at most commercial speech subject to *Zauderer*) rather than editorial expression. The split reasoning, with the dissent arguing for intermediate scrutiny, reflects ongoing uncertainty about whether platform operational communications receive full First Amendment protection, a question of growing salience as states increasingly impose account-termination and moderation-explanation requirements on platforms post-*Moody v. NetChoice*.

View on CourtListener →
Product Design Liability; AI Liability | Product Liability - Design Defect; AI Liability | Negligence

Dowey v. Siems

District Court, D. Delaware · 2026-03-01 · Meta Platforms, Inc. (Instagram and Facebook)

Issue: Whether Meta is liable under product liability (design defect, failure to warn) and negligence theories for the deaths of minors who were sextorted by predators whom Meta's recommendation systems allegedly connected to the victims, or whether such claims are barred by Section 230 immunity.

This case directly tests the boundaries of Section 230's design-defect carve-out post-*Moody v. NetChoice* and in light of the Supreme Court's non-decision in *Gonzalez v. Google*. Plaintiffs invoke the emerging theory, successful in *Garcia v. Character.AI*, that platform architectural choices, recommendation algorithms, and data-sharing features constitute the platform's own product design decisions outside Section 230's scope, particularly where the platform allegedly knew its systems were connecting minors to predators and declined to implement identified safeguards. If the court permits these claims to proceed past a motion to dismiss, it will reinforce the narrowing of Section 230 immunity for algorithmic harms and establish that platforms face tort exposure for design decisions that foreseeably facilitate criminal exploitation, even when the harmful content itself is user-generated.

View on CourtListener →
Speech Regulation | Vagueness Challenges and Overbreadth

Woodlands Pride v. Paxton

Court of Appeals for the Fifth Circuit · 2026-02-25

Issue: Whether Texas Senate Bill 12, which regulates "sexually oriented performances" on public property and in the presence of minors, facially violates the First Amendment and is void for vagueness.

This case addresses core First Amendment questions about content-based government regulation of expressive performances, including vagueness and overbreadth challenges to statutory definitions that could chill protected speech. The outcome affects states' ability to regulate expressive conduct through broad definitions of sexual content, with implications for how courts assess content-based speech restrictions in a digital environment where performances may be recorded and distributed on online platforms.

View on CourtListener →
Transparency & Disclosure Mandates / Investigatory Contexts | Subpoenas and Discovery Targeting Platform Speech

Armendariz v. City of Colorado Springs

Court of Appeals for the Tenth Circuit · 2026-02-24

Issue: Whether search warrants seeking (1) electronic devices and data from a protest organizer and (2) Facebook posts, chats, and events from a nonprofit organization's profile were overbroad in violation of the Fourth Amendment's particularity requirement.

This case implicates First Amendment associational rights and the limits on government investigation of online platform content related to protest activities. The decision establishes that warrants seeking broad categories of social media data (posts, chats, events) from advocacy organizations may violate Fourth Amendment particularity requirements, with implications for government access to platform-hosted speech and organizing activity. The involvement of major digital rights organizations as amici (EFF, CDT, EPIC, Knight Institute) signals broader concerns about investigatory overreach into digital speech and association.

View on CourtListener →
Scope of Protection | Content Moderation Immunity

State v. Andreas W. Rauch Sharak

Wisconsin Supreme Court · 2026-02-24

Issue: Whether Google acted as a government agent (implicating Fourth Amendment protections) when it scanned user files for CSAM and reported flagged content to law enforcement pursuant to federal reporting requirements.

This case addresses Section 230's role in incentivizing platform content moderation by providing immunity from liability for voluntary scanning and reporting of illegal content. The court's interpretation that Section 230 was designed to encourage electronic service providers (ESPs) to engage in proactive content moderation, including automated scanning, without fear of liability directly implicates ongoing debates about the scope of Section 230 protections for active versus passive moderation practices and whether such activities transform platforms into "information content providers" or government agents.

View on CourtListener →
Publisher Immunity; First Amendment | Editorial Discretion

Trupia v. X Corp.

District Court, N.D. Texas · 2026-02-13 · X Corp. (formerly Twitter)

Issue: Whether X Corp. is immune under Section 230 and the First Amendment from claims challenging its alleged suppression or moderation of a user's posts on its social media platform.

This case directly implicates the scope of Section 230 immunity and First Amendment protection for platform content moderation decisions post-*Moody v. NetChoice*. X Corp.'s invocation of both Section 230 publisher immunity and First Amendment editorial discretion as independent bars to liability is the standard defense posture for platforms facing user grievances over deplatforming or suppression. The outcome will reflect how courts apply *Moody*'s editorial-discretion framework to individual users' content-moderation disputes on major social media platforms.

View on CourtListener →
Publisher Liability; Product Design Claims

Doe v. Meta Platforms, Inc.

District Court, D. Colorado · 2026-02-12 · Meta Platforms, Inc. and Instagram, LLC

Issue: Whether Meta/Instagram can be held liable for injuries to a minor allegedly groomed by a sexual predator through a fake Instagram account and subsequently assaulted, based on claims that appear to involve platform design, recommendation features, and failure to prevent predatory use of the service.

This case has potential significance for Section 230's application to platform design and safety features, particularly age verification, fake account detection, and grooming prevention systems. If plaintiffs frame claims around Instagram's product design choices (rather than traditional publisher liability for user content), the case could test the boundary between immune editorial functions and non-immune product liability theories post-*Gonzalez*, similar to recent social media harm litigation involving minors.

View on CourtListener →
Speech Regulation / Platform Autonomy; Compelled Speech / Forced Hosting

NetChoice v. Wilson

District Court, D. South Carolina · 2026-02-09 · NetChoice member websites and platforms (social media platforms, content-sharing services, and other covered online services)

Issue: Whether South Carolina's Age-Appropriate Design Code Act violates the First Amendment by imposing content-based restrictions requiring websites to "exercise reasonable care" to prevent harms to minors, mandating specific design features and controls, prohibiting facilitation of certain commercial speech, and compelling submission to third-party audits and public reporting.

This case represents the next generation of state attempts to regulate social media platforms' content curation and design practices under the guise of child safety, testing the boundaries established in *Moody v. NetChoice*. The South Carolina statute's "duty of care" framework attempts to impose tort liability for editorial choices that cause specified harms to minors, directly implicating the question left open in *Moody* about whether content-neutral design regulation can avoid First Amendment scrutiny—and whether framing speech restrictions as product safety obligations evades constitutional protection for platform editorial judgment.

View on CourtListener →
Publisher Immunity

Stokinger v. Armslist, LLC

Court of Appeals for the First Circuit · 2026-02-05 · Armslist, LLC (online firearms marketplace)

Issue: Whether Armslist.com, an online firearms marketplace, is subject to personal jurisdiction in New Hampshire based on its website design and operation, and whether claims alleging that Armslist negligently designed its website to facilitate illegal firearms sales are barred by Section 230 of the Communications Decency Act.

This case presents the design-defect theory of platform liability seen in cases like *Garcia v. Character.AI*: plaintiffs allege that the platform's own design choices, not merely its hosting of third-party content, created liability exposure. The jurisdictional posture may interact with Section 230's scope: if design claims fall outside Section 230 immunity, platforms face multi-jurisdictional exposure based on purposeful availment through website architecture that targets specific states' users for harmful transactions.

View on CourtListener →
Speech Torts (Defamation / IIED) and Emerging Issues; Section 230 | Publisher Immunity (anticipated defense); First Amendment | Compelled Speech / Forced Hosting (anticipated defense)

St. Clair v. X.AI Holdings Corp.

District Court, S.D. New York · 2026-01-15 · xAI (Grok AI chatbot); X Corp. (social media platform)

Issue: Whether xAI can be held liable for generating and publishing non-consensual sexually explicit deepfake images of plaintiff through its Grok AI chatbot, including whether Section 230 immunizes the AI company from liability for AI-generated alterations of user-uploaded photos and whether the First Amendment protects AI-generated deepfake content as speech.

This case presents critical emerging questions at the intersection of AI liability, Section 230 immunity, and First Amendment protection for AI-generated content. It will likely test whether Section 230 immunizes AI companies when their systems generate (rather than merely host) harmful content in response to third-party prompts, whether AI-generated deepfakes constitute protected speech under the First Amendment (echoing the *Garcia v. Character.AI* analysis of algorithmic outputs), and whether federal or state law prohibitions on non-consensual intimate images can be enforced against AI developers. The case also raises novel issues about AI systems as autonomous actors capable of making representations, and about whether promissory estoppel or consumer protection theories can circumvent immunity defenses when an AI chatbot makes explicit commitments to users.

View on CourtListener →
Publisher Immunity; Personal Jurisdiction

Welkin v. Meta Platforms, Inc.

District Court, N.D. Georgia · 2026-01-12 · Meta Platforms, Inc. (Facebook/Instagram)

Issue: Whether Section 230 of the Communications Decency Act bars plaintiff's intentional infliction of emotional distress claim and request for injunctive relief arising from third-party content on Meta's platform, and whether the court has personal jurisdiction over Meta.

This motion presents a standard Section 230 defense against IIED claims based on third-party content, testing whether Meta's editorial and recommendation functions qualify for publisher immunity. The case also illustrates the routine procedural posture in platform litigation, where defendants assert multiple grounds for dismissal including lack of jurisdiction, failure to state a claim, statutory immunity, and contractual forum selection, providing insight into Meta's current litigation strategy post-*Moody v. NetChoice*.

View on CourtListener →
Government Coercion / Jawboning

Media Matters for America v. Warren Paxton, Jr.

Court of Appeals for the D.C. Circuit · 2025-05-30 · X.com (formerly Twitter)

Issue: Whether the Texas Attorney General's investigation and civil investigative demand targeting Media Matters for America violated the First Amendment by constituting retaliatory government action in response to the organization's critical reporting about X (Twitter) and Elon Musk.

This case directly applies the *Bantam Books* and *Backpage.com v. Dart* jawboning doctrine to state attorney general investigations of media organizations covering technology platforms. It establishes that investigative demands issued in apparent retaliation for critical reporting about politically connected platform owners constitute actionable First Amendment violations, extending constitutional constraints on government use of regulatory process to chill platform-related journalism and reinforcing limits on government-platform coordination to suppress critical speech.

View on CourtListener →
Government Speech Doctrine | Public Libraries and Expressive Collection Curation

Little v. Llano County

Court of Appeals for the Fifth Circuit · 2025-05-23

Issue: Whether library patrons have a First Amendment right to receive information that allows them to challenge a public library's decision to remove books from its collection.

This decision significantly expands government speech doctrine to insulate library collection decisions from First Amendment scrutiny, potentially affecting how content moderation and curation by government entities are analyzed. The holding that curating collections of third-party speech constitutes government expression could have broader implications for debates about when editorial discretion and content selection by various entities—including platforms—constitute protected expression versus regulable conduct.

View on CourtListener →
Government Coercion / Jawboning | Viewpoint Discrimination and Retaliation Against Legal Representation

Jenner & Block LLP v. U.S. Department of Justice

District Court, District of Columbia · 2025-05-23

Issue: Whether an executive order targeting a law firm based on its pro bono representation of clients challenging government policies, its past association with a disfavored attorney, and its perceived "partisan" case selection violates the First Amendment by retaliating against the firm for engaging in protected advocacy and chilling legal representation that challenges executive actions.

This decision establishes that government retaliation against law firms for their choice of clients and causes constitutes impermissible viewpoint discrimination under the First Amendment, extending jawboning and government coercion doctrine beyond platforms to legal advocacy itself. The ruling reinforces that chilling effects on representation—particularly representation challenging government actions—violate core First Amendment principles that protect the adversarial system as a check on executive power.

View on CourtListener →
Government Coercion / Jawboning

Yelp Inc. v. Paxton

Court of Appeals for the Ninth Circuit · 2025-05-15 · Yelp

Issue: Whether the *Younger* abstention doctrine's bad-faith exception applies when a platform alleges that a state Attorney General's civil enforcement action was brought in First Amendment retaliation for the platform's editorial choices concerning abortion-related content.

This decision establishes that platforms alleging government retaliation for content moderation decisions face a high bar to overcome *Younger* abstention when states bring facially plausible civil enforcement actions, even when the platform alleges the suit targets constitutionally protected editorial choices on politically sensitive topics. The ruling effectively channels First Amendment retaliation challenges into state court proceedings unless the platform can demonstrate severe, pervasive harassment or a facially meritless prosecution, limiting federal forum access for platforms facing state enforcement actions that implicate their editorial decisions.

View on CourtListener →
State Action Doctrine / Public Forum · Amended Complaint

Fletcher v. Facebook, Inc.

District Court, N.D. California · 2025-03-05 · Meta (Facebook)

Issue: Whether Facebook operates as a state actor subject to First Amendment constraints when terminating user access, either because it constitutes a public forum or because it acted under government coercion or direction.

This complaint illustrates the continued assertion of public forum and state action theories against platforms post-*Packingham*, despite contrary controlling authority in *Manhattan Community Access Corp. v. Halleck* and *Prager University v. Google* establishing that private platforms are not state actors. The government coercion allegations invoke the framework of *Murthy v. Missouri* and *Bantam Books*, but the complaint's broad, conclusory assertions of government "coercion" and "direction," made without specific factual allegations, underscore the demanding causation and traceability standards that *Murthy* established for jawboning claims.

View on CourtListener →
Publisher Immunity; AI Liability | Other / Mixed (CSAM distribution and recruitment) · Opposition to Motion to Dismiss

Rosenblum v. Passes, Inc.

District Court, S.D. Florida · 2025-02-26 · Passes, Inc. (social media/content platform)

Issue: Whether Section 230 of the Communications Decency Act immunizes Passes, Inc. from liability for child sexual abuse material (CSAM) where plaintiff alleges the platform's agents actively solicited a minor to join the platform and then marketed and distributed the resulting CSAM.

This case presents a potentially significant challenge to Section 230's scope in CSAM cases by alleging that platform agents' active recruitment and marketing of a minor creator transforms the platform from a passive host into a content developer or co-creator. If the material contribution theory survives the motion to dismiss, it could narrow Section 230 immunity for platforms whose employees or agents allegedly facilitate the creation or distribution of illegal content, particularly involving minors—extending the "content developer" exception beyond algorithmic design to direct human agency and solicitation.

View on CourtListener →
Government Coercion / Jawboning; Editorial Discretion · Complaint

Trump Media & Technology Group Corp. v. De Moraes

District Court, M.D. Florida · 2025-02-18 · Rumble; Truth Social (Trump Media & Technology Group)

Issue: Whether a Brazilian Supreme Court justice's orders requiring U.S.-based social media platforms to suspend user accounts and censor content accessible in the United States are enforceable under U.S. law, or whether they violate the First Amendment and conflict with the Communications Decency Act.

This case presents a novel collision between foreign government content removal orders and U.S. platforms' First Amendment rights to resist compelled censorship. It could establish important precedent on whether U.S. courts will recognize foreign judicial orders as unconstitutional "jawboning" when they compel platforms to suppress lawful political speech accessible to American users, and may clarify the territorial limits of foreign content regulation authority over U.S.-based intermediaries.

View on CourtListener →
Publisher Immunity; Section 230 | Product Design / Algorithmic Recommendation · Appellate Opinion

Doe v. Grindr Inc.

Court of Appeals for the Ninth Circuit · 2025-02-18 · Grindr

Issue: Whether Section 230 bars state law product liability and negligence claims brought by an underage user against Grindr based on alleged design defects, failure to warn, and negligent misrepresentation, and whether plaintiff stated a plausible TVPRA sex trafficking claim sufficient to invoke FOSTA's exception to Section 230 immunity.

This decision reinforces broad Section 230 protection for dating and social platforms against product liability and design defect claims when those claims are characterized as targeting the platform's publisher function over third-party content. The ruling also establishes a demanding pleading standard for invoking FOSTA's exception to Section 230, requiring plaintiffs to plausibly allege knowing participation in or benefit from sex trafficking—a threshold this plaintiff could not meet despite allegations of underage use and harm.

View on CourtListener →