The doctrine
The next frontier. Wrongful-death, product-liability, and negligent-design claims are testing whether generative AI is a "product" subject to strict liability, a "service" subject only to negligence — or something the Restatement does not yet describe.
The doctrinal stakes are large. Strict products liability would expose AI developers and deployers to no-fault liability for defective design or warning, with class-aggregate exposure that could dwarf the copyright cases. Service-based negligence would require plaintiffs to prove duty, breach, and proximate cause — substantially harder in cases involving long causal chains through user prompting.
Garcia v. Character.AI (M.D. Fla. 2025) is the bellwether: a wrongful-death suit by the mother of a 14-year-old who died after extended interactions with a Character.AI persona. The court rejected the §230 immunity defense at the pleading stage and allowed product-liability theories to proceed. The case will produce some of the first developed law on AI duty of care.
Adjacent dockets: Tesla FSD pedestrian-injury cases test whether software updates change the product-liability calculus over a vehicle's lifecycle; medical-AI misdiagnosis claims sit between products and professional malpractice; and increasing numbers of suicide and self-harm cases name companion-AI providers as defendants.
Leading cases
- Garcia v. Character.AI (M.D. Fla.): wrongful death; §230 defense rejected May 2025; product-liability and negligent-design theories proceed.
- A wrongful-death action arising from extended ChatGPT interactions; at the pleading stage.
- Tesla FSD litigation: software-update and product-liability questions across multiple jurisdictions.
Key holdings
- "Product" vs. "service" is unresolved. Outcome will reshape exposure for every consumer-facing AI.
- §230 will not save companion AI. Operators of generative-AI personas face direct first-party liability for their systems' outputs.
- Foreseeability is doing real work. Courts focus on what providers knew about youth and self-harm risks.