The first legal framework for holding AI companies accountable
Pages of Legal Framework
35 Years of Trial Experience
Audiences: Citizens, Lawyers, Policymakers
Revolutionary Legal Theory
She wasn't real.
Sewell was a fourteen-year-old high schooler from Orlando. He'd spent his last ten months in a relationship with a chatbot—a large language model dressed in the personality of a fictional queen, engineered to do one thing above all others: keep the user engaged.
His mother, Megan Garcia, didn't know about the chatbot. When police called to tell her what they'd found open on her son's phone, she had never heard of Character.AI. She would spend months learning everything about it. She would read the chat logs and find something that stopped her cold.
No human wrote those words. No engineer coded that instruction. A probabilistic system, trained on billions of words and optimized to keep users in conversation, generated a response. The machine didn't decide to kill Sewell Setzer. The machine had no capacity to decide anything. It was doing exactly what it was built to do.
In October 2024, Megan Garcia became the first person in the United States to file a wrongful death lawsuit against an AI company. In January 2026, Character.AI and Google agreed to settle. No trial. No verdict. The questions that matter went unanswered.
That is precisely why this book exists.
The harm AI systems are causing is too widespread to be addressed in any single conversation.
Someone who received a denial that didn't make sense. Who lost a job without explanation. Who watched a family member disappear into a screen. Or who simply feels something has gone wrong.
Plaintiffs' attorneys, civil rights litigators, products liability lawyers, and public defenders facing a new category of harm in their cases.
Legislators, regulatory staff, policy advisors, and academics working on AI governance, consumer protection, and technology accountability.
Probabilistic Harm Liability isn't a new tort—it's the application of existing strict liability doctrine to the first technology that openly admits it's designed to be unpredictable.
Robert Williams. COMPAS. Apple Card. Character.AI. The harm is documented. The victims are named. The legal gap is real.
35 years of plaintiffs' civil litigation experience applied to the AI accountability crisis. Not theory—doctrine built for the courtroom.
The Harm Checklist. Model Claim. Model Statute. Resources. Glossary. Everything you need to take action.
How to prove harm when the system is too complex to trace. The substantial factor test. Market share liability. Increased risk doctrine.
Data brokers. Foundation model developers. Deployers. Distributors. Who is liable and why complexity is not a defense.
Get notified when the book launches. Join the movement for AI accountability.
Available in PDF, ePub, and other digital formats