
Apple has announced a new artificial intelligence model named ReALM (Reference Resolution As Language Modeling), which it claims surpasses OpenAI's GPT-4 at understanding and handling context. ReALM is designed to parse contextual data more efficiently, which could significantly enhance interactions with Siri and other Apple services by recognizing the entities currently displayed on a user's screen. Apple's AI researchers built ReALM as an on-device model, emphasizing its smaller size (3 billion parameters) and faster performance compared to GPT-4. The company suggests that ReALM could feed into major updates for iOS and macOS at WWDC 2024, potentially advancing conversational AI by using large language models (LLMs) for superior reference resolution across multiple kinds of entities. The work has been highlighted as a significant advancement in AI, and Apple has indicated that ReALM could be made available to developers in future iOS 18 updates.
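The core idea behind the paper's title is to turn reference resolution into a pure text problem: on-screen and conversational entities are serialized into a prompt, and a language model is asked to pick the one the user means. The Python sketch below illustrates that framing only; the entity types, prompt wording, and example screen content are assumptions for illustration, not Apple's actual ReALM encoding or API.

```python
# Illustrative sketch of "reference resolution as language modeling".
# The ScreenEntity fields, prompt format, and example data are hypothetical,
# not Apple's actual implementation.
from dataclasses import dataclass


@dataclass
class ScreenEntity:
    entity_id: int
    kind: str   # e.g. "phone_number", "address", "business"
    text: str   # the text as it appears on screen


def build_prompt(entities: list[ScreenEntity], user_request: str) -> str:
    """Serialize on-screen entities into a textual context so a language
    model can identify which entity the user is referring to."""
    lines = ["The screen currently shows the following entities:"]
    for e in entities:
        lines.append(f"[{e.entity_id}] ({e.kind}) {e.text}")
    lines.append(f'User request: "{user_request}"')
    lines.append("Answer with the id of the entity the user is referring to.")
    return "\n".join(lines)


# Example usage with made-up screen content.
screen = [
    ScreenEntity(1, "business", "Joe's Pizza"),
    ScreenEntity(2, "phone_number", "(555) 010-2233"),
    ScreenEntity(3, "address", "123 Main St"),
]
prompt = build_prompt(screen, "call that number")
print(prompt)
# A fine-tuned language model (the role ReALM plays) would complete this
# prompt, resolving "that number" to entity 2.
```

Framing the task this way is what lets a relatively small model handle it: the model never sees pixels, only a compact textual rendering of the screen, so the hard perception work is reduced to choosing among a short list of candidate entities.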
Apple's new AI aims to take on GPT-4 with its ability to understand context clues https://t.co/J5uVS99X6Q
Daily AI News in 60 Seconds 1/8 Apple's ReALM can understand your screen context, enabling more natural voice interactions. It outperforms GPT-4 in resolving ambiguity and context by treating reference resolution as a language modeling problem. https://t.co/hnXDQe0LtA
Apple's Latest LLM Model ReALM Beats GPT-4, Yet Apple Says It Wants to Partner With Gemini. Apple has dropped a paper titled "ReALM: Reference Resolution As Language Modeling." The paper focuses on contexts that include conversational turns and non-conversational entities,… https://t.co/K08pZoHvTG








