Apple researchers detail an AI system that can resolve ambiguous references to on-screen entities, in some cases better than GPT-4 can, and can run "on-device" (@michaelfnunez / VentureBeat) https://t.co/jZmQwoHB6t
ReALM can understand entities currently displayed on a user’s screen, which could enhance interactions with Siri and other Apple services. Apple says its latest AI model ReALM is even better than OpenAI's GPT-4. https://t.co/Qg4D7p8JIT
Supercharged Siri. AI image editing. Smart “snapshots” of your day. We asked some experts to forecast how Apple might use Google’s Gemini platform to enable new AI-powered applications in iOS. https://t.co/IDNOqVIdeX

Apple is poised to unveil its artificial intelligence (AI) strategy at the upcoming Worldwide Developers Conference (WWDC24), potentially showcasing advancements like a supercharged Siri, AI image editing, and smart daily "snapshots." The company's latest AI model, ReALM, can resolve ambiguous references to entities displayed on a user's screen, which could enhance interactions with Siri and other Apple services. ReALM reportedly outperforms OpenAI's GPT-4 on some of these reference-resolution tasks and can run directly on-device. The work signals a significant step toward integrating AI more deeply into Apple's ecosystem, and experts speculate that Apple might also draw on Google's Gemini platform to enable new AI-powered applications in iOS.
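
To make the reference-resolution idea concrete, here is a minimal sketch of how on-screen entities could be serialized as text and matched against a spoken request. This is an illustration only, not Apple's implementation: the `ScreenEntity` schema, the `resolve_reference` helper, and the token-overlap scoring are all assumptions; a production system like ReALM would pass the serialized screen plus the utterance to a language model rather than use a keyword heuristic.

```python
from dataclasses import dataclass

@dataclass
class ScreenEntity:
    """One parsed element from the current screen (hypothetical schema)."""
    kind: str   # e.g. "phone_number", "address", "business_name"
    text: str   # the text as displayed on screen

def screen_as_text(entities: list[ScreenEntity]) -> str:
    """Serialize on-screen entities into a numbered textual list,
    the kind of flat representation a language model could consume."""
    return "\n".join(
        f"{i}. [{e.kind}] {e.text}" for i, e in enumerate(entities, 1)
    )

def resolve_reference(utterance: str, entities: list[ScreenEntity]) -> ScreenEntity:
    """Toy resolver: score each entity by token overlap between the
    utterance and the entity's kind/text, and return the best match."""
    words = set(utterance.lower().split())
    def score(e: ScreenEntity) -> int:
        tokens = set(e.kind.replace("_", " ").split()) | set(e.text.lower().split())
        return len(words & tokens)
    return max(entities, key=score)

if __name__ == "__main__":
    screen = [
        ScreenEntity("business_name", "Joe's Pharmacy"),
        ScreenEntity("address", "123 Main St"),
        ScreenEntity("phone_number", "555-0142"),
    ]
    print(screen_as_text(screen))
    # "Call that number" should resolve to the phone_number entity.
    print(resolve_reference("call that number", screen))
```

The point of the sketch is the serialization step: once the screen is flattened into text, resolving "that number" becomes an ordinary language task, which is why a compact on-device model can handle it.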