A look at potential issues as US judges join lawyers in testing generative AI to speed up legal research, summarize cases, draft routine orders, and more (@odonnell_jm / MIT Technology Review) https://t.co/DM3qUj0COS https://t.co/y0bepERY4J https://t.co/ZOzeer2dpR
Machine learning is saving office workers from laborious research, but it could herald the end of professional services’ fee structure. In this Viewsroom podcast, @breakingviews columnists debate how lawyers and other consultants can mitigate the risk https://t.co/X5EpycjEx9
Training Artificial Intelligence and Employer Liability: Lessons from Schuster v. Scale AI https://t.co/oGvZaNdoq4 | by @ebglaw
A growing number of US judges are experimenting with generative-AI systems to draft routine orders, summarize filings, and speed legal research, expanding the technology's footprint beyond law firms. Federal judges such as Xavier Rodriguez in Texas and Magistrate Judge Allison Goddard in California say they rely on tools like ChatGPT or Anthropic's Claude mainly for first drafts and case summaries, cross-checking outputs against traditional databases from Westlaw or Lexis.

Early adoption has already produced high-profile errors. In June a Georgia appellate judge issued an order that relied on non-existent precedents, and in July a federal judge in New Jersey withdrew an opinion after lawyers showed it contained AI-generated hallucinations. On 4 August a Mississippi federal judge reissued a civil-rights ruling after similar mistakes surfaced, declining to explain how the errors originated. The slip-ups have prompted warnings from peers such as Louisiana appeals-court judge Scott Schlegel, who calls the trend a "crisis waiting to happen" because judicial mistakes become binding law the moment they are issued. In February the Sedona Conference published voluntary guidelines urging judges to limit AI use to administrative tasks and to verify all citations, noting that "no known GenAI tools have fully resolved the hallucination problem."

The judiciary's experience mirrors wider legal-industry tensions. A pending class action in the Northern District of California, Schuster v. Scale AI, alleges that contractors suffered PTSD and other harms after repeatedly labeling violent content for AI training, underscoring employer liability risks. Combined with a patchwork of state and federal directives governing AI in hiring and other workplace functions, these incidents highlight the need for formal standards before courts, and the parties who appear before them, come to rely on machine-generated text.