#AI hallucinations, where systems generate factually incorrect outputs, pose serious risks for legal #professionals using these tools for research: Debajyoti Chakravarty https://t.co/kapgojljEV
AI in the Financial Services Industry https://t.co/pbuHgPq6rm | by @ballardspahrllp
Dispatches From The AI Bubble: ILTACON 2025 https://t.co/kmfUWWK0Wx
California’s court system on 19 August issued statewide guidance restricting how lawyers may deploy generative artificial-intelligence tools in pleadings and research, becoming the latest jurisdiction to tackle the mounting problem of fictitious case citations produced by AI “hallucinations.”

U.S. judges are escalating enforcement. In the Eastern District of New York, Magistrate Judge James M. Wicks found that counsel Suryia Rahman had cited three nonexistent precedents but, in light of mitigating personal circumstances, limited sanctions to an admonition and a requirement that she inform her client. By contrast, Magistrate Judge Alison S. Bachus in Arizona this month revoked a lawyer’s pro hac vice admission, struck her brief and ordered apology letters after multiple fabricated citations were uncovered.

The trend is global. Australia’s Federal Court referred a lawyer to the Legal Practice Board of Western Australia and imposed A$8,371 in costs after an immigration submission relied on four phantom cases generated by Anthropic’s Claude and Microsoft Copilot, one of more than 20 reported hallucination incidents in Australian courts since 2023.

Education providers are responding. The University of Chicago, the University of Pennsylvania and Yale are rolling out new courses on generative AI, secured ChatGPT instances and practical exercises aimed at ensuring students verify machine outputs. Faculty say the curriculum changes are intended to prevent the professional lapses now drawing court sanctions.

Industry data point to broader governance gaps. An Okta study released this week found that 91% of organisations already deploy AI agents while only 10% maintain mature identity-management protocols for them, underscoring regulators’ concerns that unchecked use of AI could jeopardise both legal integrity and cybersecurity.