Artificial intelligence poses a rapidly growing threat to national security, according to Jonathan Hall KC, the UK government's independent reviewer of terrorism legislation. In his annual report released on 15 July, Hall described generative AI as a "coming wave" that could supercharge extremist propaganda, facilitate attack planning and create a closed loop of online radicalisation through chatbots.

Hall warned that terrorist chatbots, modelled on popular conversational systems, could screen and groom recruits, offer step-by-step guidance on weapons or security evasion, and flood social media with tailored disinformation. He cited the case of Jaswant Singh Chail, who consulted an AI chatbot named "Sarai" before entering the grounds of Windsor Castle with a crossbow in 2021, as evidence of the technology's real-world reach.

The watchdog said existing UK laws may be inadequate to curb AI-driven extremism and called for consideration of new offences targeting software designed to incite hatred or violence. He also highlighted a burgeoning market for "jailbreaking" the safety guardrails on commercial models, urging faster action from government and industry to protect against the emerging risk.