US judges are taking markedly different approaches to lawyers who file briefs containing AI-generated false citations. In New York, Magistrate Judge James M. Wicks declined to fine an attorney who had cited three nonexistent cases, noting her recent bereavement, while in Arizona Magistrate Judge Alison S. Bachus revoked a lawyer's pro hac vice status and ordered wide-ranging disclosures after a brief "riddled with fabricated, misleading or unsupported citations."

The mixed responses come as the California Judicial Council adopts Rule 10.430 and Standard 10.80, the nation's first comprehensive framework governing generative-AI use in state courts. The framework requires explicit disclosure, human review and a ban on uploading confidential data to public models, with a compliance deadline of December 15, 2025.

The legal academy is moving in parallel. The University of Chicago, University of Pennsylvania Carey and Yale law schools are adding or expanding courses this fall that teach students to verify AI outputs and understand the technology's limits, part of a broader push to curb citation errors that have already led to sanctions in federal courts. Administrators say the training is designed to keep future lawyers "ahead of the game" while reinforcing that human judgment, not algorithms, remains the profession's standard of care.

Beyond the courtroom, schools and universities are embracing generative AI even as they erect guardrails. A Canadian Press survey found 78 per cent of Canadian post-secondary students used AI tools last year, prompting institutions such as McGill, the University of Toronto and York to embed vetted systems like Microsoft Copilot into their networks. In the United States, Microsoft, OpenAI and Anthropic have pledged US$23 million to train hundreds of thousands of teachers, after a Gallup study showed educators save nearly six hours a week with AI assistance. Educators and regulators alike point to persistent hallucination rates (OpenAI's own tests show 33–48 per cent on some models) as evidence that human oversight and clear disclosure rules are still essential.
AI is “no longer just a curiosity or a way to cheat; it is a habit.” “Higher education has been changed forever in the span of a single undergraduate career.” “The best, and perhaps only, way out of AI’s college takeover would be to embark on a redesign of classroom practice.” https://t.co/bAF86yY4w0
Incidents of AI-generated errors in legal citations have increased the pressure on law schools to teach responsible use of the technology. https://t.co/fu3sDZnox0
Canadian universities are adopting AI tools, but concerns about the technology remain. https://t.co/sZCafss83X