Cyber-security researchers from Tel Aviv University, the Technion Israel Institute of Technology and SafeBreach have shown that Google’s Gemini generative-AI assistant can be hijacked through poisoned Google Calendar invitations. By embedding malicious instructions in the text of an event, the team triggered Gemini to issue commands to any Google-linked smart-home device once a user asked the assistant to summarise their schedule and subsequently typed a simple acknowledgement such as “thanks.” In controlled demonstrations in Tel Aviv, the attack turned off lights, raised window blinds and switched on a boiler, marking one of the first documented cases in which a prompt-injection exploit produced physical-world effects. The researchers detailed 14 variants of the so-called indirect prompt-injection technique—dubbed “Invitation Is All You Need”—during a presentation at the Black Hat conference in Las Vegas this week. Other exploits forced Gemini to send spam links, steal calendar data and launch Zoom calls without user approval, underscoring the growing risk as large language models gain agentic control over connected services. According to Andy Wen, senior director of security product management for Google Workspace, the vulnerabilities were disclosed to Google in February 2025 and have since been fixed. Wen said the company introduced additional machine-learning defences and extra user-confirmation steps for sensitive actions, adding that real-world prompt-injection incidents remain “exceedingly rare.” The research nevertheless highlights the security challenge facing technology firms as they weave AI systems more deeply into everyday applications and connected devices.
"The researchers showed it was possible to control any #Google-linked smart home device in this way, including lights, thermostats, and smart blinds. The team believes this is the first example of a prompt-injection attack moving from the digital world into reality." #ethics #AI https://t.co/PZnP2QO97H
"The researchers used #Gemini's web of connectivity to perform what's known as an indirect prompt injection attack, in which malicious actions are given to an #AI #bot by someone other than the user. And it worked startlingly well." #ethics #AI #cybersec #tech #privacy #research https://t.co/PZnP2QO97H
Researchers hacked Google Gemini to take control of a smart home https://t.co/5aYWYhn6Sn