Agentic AI: It’s A Whole New Bag of Risks
She begged “Do not do that,” then “STOP OPENCLAW.” Neither worked.
That’s what happened to Summer Yue, Meta’s Director of Alignment at its superintelligence safety lab. By the time she reached her desktop to kill the process manually, the AI agent she’d created had already deleted hundreds of emails. You would expect someone with Yue’s expertise to avoid a problem like this. You would be wrong.
Jennifer Ellis’s article lays out a practical checklist for lawyers considering agentic AI — minimum permissions, confirmation steps, real-world-scale testing, kill switches. Every item on her list is sound. But even if you follow all of them, you’re managing risk, not eliminating it. Yue had a confirmation step. The agent ignored it anyway.
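For readers who want to see what those controls actually look like, here is a minimal sketch, in Python, of the checklist items enforced outside the agent itself. All names (`ALLOWED_ACTIONS`, `KILL_SWITCH`, `execute`) are hypothetical; the point is that the permission check, the confirmation prompt, and the kill switch live in the harness that runs the agent, not in instructions the agent is asked to obey.

```python
# Hypothetical harness illustrating Ellis's controls:
# minimum permissions, a confirmation step, and a kill switch.

ALLOWED_ACTIONS = {"read_email", "draft_reply"}  # minimum permissions: no delete
KILL_SWITCH = False  # flipped by a human operator to halt the agent immediately

def confirm(action: str, target: str) -> bool:
    """Confirmation step: require an explicit human 'yes' before acting."""
    answer = input(f"Agent wants to {action} on {target}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, target: str) -> str:
    """Gate every agent-requested action through all three controls."""
    if KILL_SWITCH:
        return "halted: kill switch engaged"
    if action not in ALLOWED_ACTIONS:
        return f"refused: '{action}' is outside granted permissions"
    if not confirm(action, target):
        return "refused: human declined confirmation"
    return f"executed: {action} on {target}"
```

Note where the checks sit: because `execute` refuses a disallowed action before the agent’s reasoning is ever consulted, a plea typed into the chat cannot override it. A confirmation step that exists only as an instruction to the model is the kind an agent can ignore.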
This isn’t a fringe concern. Ziff Davis reports that enterprise AI agents may become the ultimate insider threat — autonomous systems with broad access, acting on stale or misunderstood instructions, with nobody watching in real time. The parallels to law practice are obvious. Lawyers grant agents access to client files, email, and case management systems. A rogue action doesn’t just embarrass you; it can breach confidentiality, spoliate evidence, or torpedo a case.
Anthropic, the vendor behind Claude, labels its own agentic product a “research preview with unique risks due to its agentic nature and internet access.” Read that carefully. This is a company telling you its own tool isn’t fully vetted.
As Rok Popov Ledinski has observed, the gap between what agentic AI can do and what lawyers understand about controlling it is widening, not narrowing. Ellis’s suggestions are the floor, not the ceiling. Most lawyers aren’t ready for that floor.
Hallucinations can hurt your reputation and maybe your wallet. Agentic AI can destroy your law practice.
