More Liability Will Make AI Chatbots Worse At Preventing Suicide
Techdirt · 2026-05-07
Commentary
This Techdirt post discusses California and New York legislation imposing liability on AI chatbot providers for mental-health-related conversations, arguing that such liability regimes will cause chatbots to retreat from beneficial mental health engagement through defensive over-restriction. Drawing on academic research showing widespread positive use of chatbots for mental health support, the post argues, via a law review article by Professor Jess Miers, that reducing liability for AI providers (analogizing to Section 230's effect on platform behavior) may paradoxically produce safer, more helpful AI systems. The post is directly relevant to the newsletter's AI liability and Section 230 pillars: it engages with how liability exposure shapes AI developer behavior, the design-defect and failure-to-warn theories driving state legislation, and the Section 230 analogy as a framework for thinking about AI chatbot liability.
Key point: The post argues that increasing civil liability for AI chatbots in mental health contexts will suppress genuinely beneficial AI engagement, much as liability fears drive platforms toward over-moderation, and that Section 230-style liability reduction may be the better policy model for AI mental health tools.
Read post →