Microsoft on 6 Aug unveiled Project Ire, a prototype artificial-intelligence agent that can autonomously reverse-engineer software and decide whether code is malicious without human assistance. In internal tests on a public set of Windows drivers, the system achieved 0.98 precision and 0.83 recall, correctly flagging more than eight of ten malicious files while mislabelling only about 2% of benign programs as threats. A separate real-world trial on 4,000 unclassified "hard-target" samples showed lower recall, at 0.26, but maintained 0.89 precision and a 4% false-positive rate. Microsoft said the agent was the first at the company, human or machine, to assemble a "conviction case" strong enough to justify automatically blocking an advanced persistent threat sample. Project Ire will be folded into Microsoft Defender, which already scans more than a billion devices each month, with the goal of accelerating threat response and easing analysts' workload. The announcement places Microsoft alongside Google, whose Big Sleep agent hunts for unknown vulnerabilities, and Amazon, which has also begun deploying autonomous security tools.
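For context on how the reported figures fit together: precision is the share of flagged files that are actually malicious, recall is the share of malicious files that get flagged, and the false-positive rate is the share of benign files wrongly flagged. The sketch below uses hypothetical confusion-matrix counts chosen only so that the derived metrics reproduce the public driver-test figures; they are not Microsoft's actual data.

```python
# Hypothetical counts (NOT Microsoft's data), chosen so the derived metrics
# match the reported public driver-test results: precision 0.98, recall 0.83,
# and roughly 2% of benign files flagged in error.

true_positives = 98    # malicious files correctly flagged
false_negatives = 20   # malicious files missed
false_positives = 2    # benign files wrongly flagged
true_negatives = 98    # benign files correctly passed

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
false_positive_rate = false_positives / (false_positives + true_negatives)

print(f"precision           = {precision:.2f}")           # 0.98
print(f"recall              = {recall:.2f}")               # 0.83 -> roughly 8 of 10 malicious files caught
print(f"false positive rate = {false_positive_rate:.2%}")  # 2.00%
```

The same arithmetic applied to the "hard-target" trial shows why a 0.26 recall with 0.89 precision still matters operationally: the agent misses most of those harder samples, but when it does flag one, it is very likely to be right.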