When it comes to using AI, some lawyers just can’t help themselves. Last year courts increasingly sanctioned attorneys for filing briefs that contained AI-generated errors. The most prominent example involved lawyers for MyPillow CEO Mike Lindell, who were fined $3,000 each for filing briefs with fictitious, AI-generated citations.
That case didn’t stop others. Damien Charlotin, a researcher at HEC Paris who tracks court sanctions for AI errors, says he has logged more than 1,200 instances worldwide so far, including about 800 from U.S. courts, and the rate keeps rising. “We have this issue because AI is just too good — but not perfect,” he says.
Penalties have grown larger. A federal court recently ordered an Oregon lawyer to pay $109,700 in sanctions and costs over a filing that contained AI-generated errors, possibly a new record. State high courts have also confronted the issue. Nebraska’s Supreme Court questioned Omaha attorney Greg Lake about a brief that cited fictitious cases; he denied using AI and blamed a malfunctioning computer, but the court referred him for discipline. A similar episode played out at the Georgia Supreme Court.
Carla Wale, associate dean of information & technology and director of the University of Washington law library, is developing optional AI ethics training for law students. She stresses that the core professional rule remains unchanged: lawyers are responsible for the accuracy of their filings no matter how they were produced. “Whatever the generative AI tool gives you — as in, ‘Look at these cases’ — you, under the rules of professional conduct, you have to read those cases. You have to read the cases to make sure what you are citing is accurate,” she says.
Some courts are adopting rules that go further, requiring lawyers to disclose when AI was used in a filing, often with specifics about how it was used. These rules aim to flag documents that need extra scrutiny and to draw a clear line between human and machine work. But not everyone thinks labeling will help. Joe Patrice, senior editor at Above the Law, argues such requirements will soon be impractical because AI is becoming embedded across legal software. If everything is “AI assisted,” the label loses meaning and will be ignored.
Patrice acknowledges AI’s usefulness for searching evidence, reviewing case law, and handling contracts. He worries most about “agentic” systems that promise to complete legal work end-to-end. “I think once you obscure those middle steps, that’s where mistakes happen,” he says. Even diligent lawyers can miss errors if they’re not involved in the process.
AI-driven efficiency also threatens law firms’ traditional billable-hours model. If AI tools speed up routine work, firms may have to accept lower fees or shift to new billing methods, such as itemizing tasks. That could increase time pressure and tempt lawyers to accept AI’s first drafts without sufficient review. Patrice fears that future lawyers, raised with always-on AI, may lose the habit of pausing to think critically.
Wale shares concerns about eroding analytical skills but rejects the notion that AI will replace lawyers entirely. “I think that lawyers who understand how to effectively and ethically use generative AI [will] replace lawyers who don’t,” she says.
AI has also become a target of legal action. In March, Nippon Life Insurance Company of America sued OpenAI in federal court in Illinois, alleging the company’s ChatGPT provided bad legal advice that led to frivolous actions and accusing OpenAI of practicing law without a license. OpenAI said the complaint “lacks any merit whatsoever.”
For now, the legal profession continues to adapt. Courts are enforcing long-standing duties of competence and accuracy, lawyers and educators are developing training and policies, and debates continue over labeling, supervision, and the role of AI in legal work. The central theme remains: AI can be a powerful aid, but attorneys bear the responsibility to verify what it produces.