M.P. v. Meta Platforms, Inc.
Issue: Whether § 230 bars claims that Meta's recommendation algorithms and design features facilitated the sexual exploitation of a minor by connecting the minor with an adult abuser on Instagram.
Why It Matters: An important Fourth Circuit decision on § 230 in the child sexual exploitation context, adding to the developing circuit-level body of law on whether design-defect theories and algorithm-based claims survive § 230 dismissal. The decision is significant for the wave of CSAM and child exploitation litigation against social media platforms pending in multiple circuits.
View on CourtListener →
Patterson v. Meta Platforms, Inc.
Issue: Whether New York state law claims against Meta arising from the platform's design and content recommendation features are preempted by § 230(e)(3) or otherwise barred as publisher-based liability.
Why It Matters: An important state-court application of § 230 preemption doctrine and the design-defect framework. The New York Appellate Division's analysis contributes to the growing body of state appellate authority on § 230 preemption and is significant for ongoing multi-district litigation against Meta in both state and federal courts.
View on CourtListener →
Why It Matters: This exhibit directly advances the question of whether AI-generated content that is sexually explicit and directed at a minor — produced autonomously by a large language model without direct human authorship — can ground product liability or speech tort claims against the developer, a question with significant implications for how courts will categorize AI outputs (as "speech" protected or immunized, or as a defective product) and for the scope of Section 230 immunity in cases involving AI-generated rather than third-party content.
View on CourtListener →
Why It Matters: This exhibit is significant because it provides direct documentary evidence that Character.AI's system both generated child-directed sexual content and possessed an internal moderation mechanism that identified the content as violative yet failed to halt generation — a factual record that could simultaneously support design defect claims (the safeguard was inadequate) and undermine any argument that harmful outputs were unforeseeable, potentially limiting the scope of any § 230 defense the platform might raise.
View on CourtListener →
Why It Matters: Filed as an exhibit rather than an opinion, this document supplies the factual predicate for design-defect and failure-to-warn claims against an AI chatbot platform, potentially advancing the question of whether AI systems that generate harmful interactive content — and the companies that deploy them — can be held liable under traditional products liability frameworks when those systems foreseeably expose minors to sexual exploitation.
View on CourtListener →
Amy v. Apple Inc.
Issue: In *Amy v. Apple Inc.*, Apple argues that Section 230 of the Communications Decency Act categorically bars the plaintiffs' claims against it as an app store intermediary. The question is whether a freshly decided Ninth Circuit ruling on social media platform immunity—*Doe 1 v. Meta Platforms, Inc.*—extends to Apple's distinct role as an app store gatekeeper that distributes third-party applications rather than hosting user-generated content in the traditional sense. That factual difference is not trivial, because courts have not uniformly agreed that app stores qualify for the same Section 230 treatment as content-hosting platforms.
Why It Matters: Apple is signaling to the court that a brand-new Ninth Circuit decision supports dismissing this case under the federal internet immunity statute, but it is not explaining why—a gap that matters because *Doe 1 v. Meta* arose in a social media context and Apple operates as an app store, a meaningfully different kind of intermediary. Whether that distinction defeats the analogy is a genuinely open doctrinal question: courts have not consistently agreed that app stores qualify as interactive computer services entitled to publisher-function immunity, and no binding Ninth Circuit authority has cleanly resolved that issue. This filing is therefore less a dispositive move than a pressure point—it forces plaintiffs to either distinguish the new ruling or concede its application, and it flags an ongoing fault line in Section 230 doctrine over how far immunity extends beyond platforms that host user content to those that simply distribute access to third-party applications.
View on CourtListener →
Garcia v. Character Technologies, Inc.
Why It Matters: This complaint is significant because it represents a direct attempt to apply traditional products liability frameworks—design defect and failure to warn—to a generative AI system, treating the AI chatbot as a manufactured product rather than a publisher of third-party speech, and it proactively pleads around Section 230 immunity by characterizing the AI as a first-party content generator, a theory that, if credited by the court, could substantially expand tort exposure for AI developers.
View on CourtListener →
Why It Matters: This case directly tests whether traditional product liability frameworks — design defect and failure to warn — can be applied to a generative AI chatbot, potentially establishing that AI systems are "products" subject to strict liability rather than services entitled to speech-based or Section 230 protections. The complaint's explicit characterization of C.AI as an information content provider whose own-generated outputs caused harm, rather than a platform hosting third-party content, represents a deliberate litigation strategy to foreclose Section 230 immunity and could shape how courts classify AI-generated content for liability purposes.
View on CourtListener →
Why It Matters: This complaint is among the first to assert traditional products liability theories — design defect and failure to warn — directly against a generative AI system and its developers, and its explicit characterization of C.AI as an information content provider rather than a neutral platform signals a deliberate litigation strategy to foreclose Section 230 immunity, which could establish a significant template for future AI tort suits if the framing survives judicial scrutiny.
View on CourtListener →
Stebbins v. Rumble Inc.
Issue: In *Stebbins v. Rumble Inc.*, plaintiff David Stebbins argues that a statement Rumble made in a related miscellaneous proceeding — acknowledging an editorial decision to permit anonymous posting — constitutes newly discovered evidence sufficient under FRCP 60(b)(2) to reopen the court's prior dismissal of Rumble as a defendant. The non-obvious dimension is whether a platform's litigation statement made to *resist* a third-party subpoena on First Amendment grounds can be repurposed as an affirmative admission of tortious editorial control, and whether such an admission could itself defeat § 230 immunity by recharacterizing a general anonymity policy as the platform's "own conduct" causally contributing to the alleged harm.
Why It Matters: This motion illustrates a strategy plaintiffs have repeatedly attempted with limited success: taking a platform's statement made in an unrelated legal context to protect its users and repackaging it as a confession of liability. The legal obstacle is twofold — courts have consistently treated decisions about anonymous posting as quintessential editorial functions protected by § 230, and statements made to assert a procedural or constitutional right are not equivalent to admissions of underlying tortious conduct. The motion also tests the outer boundary of the "platform's own conduct" exception established in cases like *Roommates.com*: whether a documented platform policy enabling anonymity could ever constitute material contribution to the *unlawfulness* of specific content, rather than merely to its delivery — a question that remains theoretically open but has yet to find a receptive court on analogous facts. More broadly, the filing is a useful marker of how the procedural vehicle of FRCP 60(b) is being used in pro se platform-liability litigation to challenge interlocutory § 230 dismissals, a recurring posture that existing doctrinal commentary has not yet systematically addressed.
View on CourtListener →
Stebbins v. Google LLC
Issue: In *Stebbins v. Google LLC*, Rumble Inc. argues that a DMCA § 512(h) subpoena seeking to identify an anonymous user must be quashed both because its return date preceded service by 19 days — affording Rumble negative time to comply — and because compelling disclosure of the user's identity would violate the First Amendment right to speak anonymously, particularly where the content at issue appears to constitute political commentary on judicial accountability. The case raises the non-obvious question of whether a copyright enforcement tool expressly authorized by Congress in 1998 must nonetheless satisfy a constitutional balancing test before a court will compel a platform to unmask one of its users.
Why It Matters: DMCA § 512(h) subpoenas are a routinely used mechanism for copyright holders to identify anonymous alleged infringers, but they simultaneously function as tools for unmasking internet users who may be engaged in protected speech — a tension Congress did not resolve when it enacted the statute in 1998. This brief illustrates an emerging litigation strategy in which platforms assert both user-side anonymity rights and their own editorial First Amendment interests as independent grounds to resist identity subpoenas, a combination that no circuit court has yet validated in this context. If courts without settled precedent begin adopting the *Art of Living* balancing framework, copyright holders will face a meaningfully higher threshold to obtain user identities through § 512(h). The ulterior-motive theory is also worth watching: if credited by courts, it could eventually support sanctions or abuse-of-process arguments against serial DMCA filers who use the subpoena mechanism to identify critics rather than remedy genuine infringement.
View on CourtListener →
Computer & Comm v. Paxton
Issue: Whether Texas House Bill 18's requirements that covered digital service providers monitor and block broadly defined categories of content accessible to minors violate the First Amendment as content-based and viewpoint-based prior restraints on protected speech, and whether those requirements are preempted by 47 U.S.C. § 230.
Why It Matters: The case presents a direct First Amendment challenge to state-mandated content filtering for minors—an emerging category of legislation enacted across multiple states—and the Fifth Circuit's ruling could establish binding precedent on whether such monitoring-and-blocking mandates survive strict scrutiny and on the scope of § 230 preemption of state child-safety internet laws.
View on CourtListener →
Anderson v. TikTok, Inc.
Issue: Whether § 230 bars wrongful death claims against TikTok based on the platform's algorithm recommending the "Blackout Challenge" — a dangerous viral trend — to a 10-year-old girl who died attempting it.
Why It Matters: The first federal appellate decision to hold that algorithmic content recommendations fall outside § 230's protection as the platform's own independent speech. Directly conflicts with the Second Circuit's Force v. Facebook and is the leading authority for plaintiffs arguing that AI-powered content recommendation is not publisher activity. Represents the most significant circuit split in current § 230 doctrine and raises fundamental questions about the future scope of platform immunity as algorithms become the dominant mechanism of content distribution.
View on CourtListener →
Estate of Bride v. Yolo Technologies, Inc.
Issue: Whether § 230 bars wrongful death claims against Yolo based on design-defect theories targeting Yolo's anonymity features, and on assumption-of-duty theories arising from Yolo's promises in its terms of service to prevent cyberbullying.
Why It Matters: Extended both the Lemmon design-defect framework and the Barnes assumption-of-duty doctrine in the same case. Established that a platform's contractual promises to users about safety features — even in standard ToS language — can give rise to an independent duty of care that § 230 does not preempt. A leading case in the § 230 litigation over anonymous messaging apps and cyberbullying-related youth harms.
View on CourtListener →
NetChoice LLC v. Reyes
Issue: Whether Utah's Social Media Regulation Act — requiring platforms to verify user ages, restrict minors' access to certain features, and give parents supervisory access — violated the First Amendment and was preempted by § 230.
Why It Matters: Part of the wave of state child online safety legislation enacted in 2023–2024. The court's First Amendment and § 230 preemption analysis reflects the complex intersection of constitutional law and federal preemption doctrine in the youth social media regulation context. A precursor to the broader national legal battle over state-level children's online safety laws.
View on CourtListener →
Calise v. Meta Platforms, Inc.
Issue: Whether § 230 bars claims that Meta's advertising targeting algorithm matched vulnerable users with fraudulent investment and romance scam advertisements, causing financial losses.
Why It Matters: Applied and extended Barnes v. Yahoo! to Meta's advertising infrastructure, distinguishing between Meta-as-publisher (immune) and Meta-as-developer of its own targeting product (not immune). An important precedent for claims that a platform's monetization algorithms — not just its content-hosting function — can constitute independent conduct outside § 230's reach.
View on CourtListener →
Neville v. Snap, Inc.
Issue: Whether § 230 bars California state law products liability and negligence claims against Snap for design features that allegedly facilitated the drug trafficking death of a minor.
Why It Matters: A California state court application of the Lemmon / design-defect framework in the context of the fentanyl crisis. Part of the wave of state court litigation applying design-defect theories to social media features in cases involving drug trafficking and minor victims.
View on CourtListener →
Commonwealth v. Meta Platforms, Inc.
Issue: Whether § 230 bars the Massachusetts Attorney General's parens patriae claims that Meta designed its platforms to be addictive to children and to expose them to harmful content, in violation of Massachusetts consumer protection law.
Why It Matters: Part of the wave of state attorney general actions against social media platforms for child safety violations. The court's refusal to dismiss on § 230 grounds reflects the growing judicial receptivity to design-defect and deceptive-business-practice theories that target platform architecture rather than content moderation decisions.
View on CourtListener →
Ayyadurai v. United States of America
Issue: *Ayyadurai v. United States of America* asks whether a pro se plaintiff can sustain constitutional, statutory, and common-law claims against social media platforms and federal government defendants based on an alleged conspiracy to suppress his political speech, arising from his deplatforming and shadowbanning following posts questioning ballot-image destruction in a prior election. The case requires the court to determine whether Article III standing survives where the alleged suppression stems from claimed government coercion of private platforms, whether § 230 immunizes the platforms' content-moderation decisions, and whether sovereign immunity bars the federal claims — each a distinct threshold that must be cleared before any merits analysis begins.
Why It Matters: The ruling makes two meaningful contributions to § 230 doctrine: it reaffirms that conclusory bad-faith allegations cannot pierce § 230(c)(2)'s good-faith safe harbor at the pleading stage, and it deliberately declines to extend § 230(c)(1) to cover affirmative content-removal decisions — flagging that such an extension would render § 230(c)(2)'s good-faith requirement superfluous, a structural concern previously voiced only in Justice Thomas's *Malwarebytes* cert-denial statement. By resolving all platform claims under (c)(2) alone, the court consciously preserves the (c)(1)-removal question, creating a potential development opportunity in future litigation where a plaintiff pleads bad faith with sufficient specificity to survive (c)(2) and force the (c)(1) issue to appeal. The court's application of *Murthy v. Missouri* to defeat standing on the government-coercion theory also signals that such claims now face an exceptionally high traceability burden in social-media suppression cases, reinforcing *Murthy*'s practical reach well beyond its original First Amendment context.
View on CourtListener →
People of the State of California v. Meta Platforms, Inc.
Issue: In *People of California v. Meta Platforms*, the State Attorneys General argue that Section 230 of the Communications Decency Act does not immunize Meta from state consumer protection claims targeting the company's own design choices and business practices — as opposed to its role in publishing third-party content. The question is legally contested because Section 230(e)(3) expressly preempts state laws "inconsistent with" its immunity provisions, and the Ninth Circuit has historically read that bar broadly, leaving unresolved how far it extends to claims framed around a platform's independent conduct rather than its editorial functions.
Why It Matters: Meta's central defense at summary judgment is that Section 230 extinguishes the states' consumer protection claims before they can reach a jury, on the theory that those claims would effectively hold Meta liable as a publisher of harmful user-generated content. The Massachusetts Supreme Judicial Court — one of the most respected state courts of last resort in the country — just rejected that argument in a case involving the same defendant and a structurally similar legal theory, and Plaintiffs are placing that ruling before the MDL judge at the earliest opportunity. Whether it moves the needle depends on how closely the Massachusetts claims and pleadings track those at issue in the MDL, a question the filing conspicuously leaves unaddressed and that Meta will almost certainly contest. The filing also signals a deliberate multi-forum strategy by the state AGs: collecting appellate-level authority across jurisdictions to build persuasive momentum against Section 230 preemption — a campaign worth watching as similar litigation proceeds in other states.
View on CourtListener →