Google has expanded its AI Mode, initially rolled out last month, with multimodal capabilities powered by Google Lens. The update lets users analyze objects in images and get detailed responses to complex queries about them. The updated AI Mode is available to Google One AI Premium subscribers and is also rolling out to millions of users of Google's experimental Labs program in the U.S. Separately, Google has announced that it will adopt the Model Context Protocol (MCP), an open standard developed by Anthropic that connects AI models to external tools and data sources. Support for the protocol is expected to be integrated into Google's Gemini models and SDK, further advancing the company's enterprise AI offerings.
Google DeepMind has said that they will be adding support for Model Context Protocol, a standard released by Anthropic that connects LLM applications with tools https://t.co/ASAwiq67rE
Over the past couple of weeks, I've been really immersed in learning about MCP, a new protocol for equipping any LLM with a set of tools that can run on your own machine or on a remote server you control, giving AI agents all kinds of superpowers to do things like search, etc.
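To make that description concrete: an MCP server is essentially a process that advertises tools an LLM application can call. Below is a minimal sketch of such a server, assuming the official MCP Python SDK (installed via `pip install mcp`); the server name `demo-search` and the `search` tool are hypothetical placeholders for illustration, not anything from the announcements above.

```python
# Minimal MCP tool server sketch, assuming the official MCP Python SDK.
from mcp.server.fastmcp import FastMCP

# Create an MCP server; "demo-search" is a hypothetical name.
mcp = FastMCP("demo-search")

@mcp.tool()
def search(query: str) -> str:
    """Hypothetical search tool an LLM agent could invoke via MCP."""
    # A real server would query a search index or API here;
    # we return a canned string purely for illustration.
    return f"Top result for {query!r}: ..."

if __name__ == "__main__":
    # Serve over stdio so a local MCP host (e.g. an agent app) can connect.
    mcp.run(transport="stdio")
```

Running this on your own machine is exactly the "tools that can run on your own machine" case the post describes: the LLM host connects to the process, discovers the `search` tool, and can call it during a conversation.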
Google DeepMind CEO Demis Hassabis made a surprise announcement on X (formerly Twitter): Gemini models and the SDK will support Anthropic's MCP (Model Context Protocol) https://t.co/EzrLBTL4rw
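Google has not yet published the Gemini-side integration, so as a hedged illustration of the client half of the protocol, this sketch uses the MCP Python SDK's generic stdio client to connect to the server above, list its tools, and invoke the hypothetical `search` tool; the filename `server.py` is an assumption.

```python
# MCP client sketch, assuming the official MCP Python SDK and that the
# server above is saved as server.py (an assumption for this example).
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    # Spawn the server as a subprocess and talk to it over stdio.
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()  # MCP handshake
            tools = await session.list_tools()
            print("tools:", [t.name for t in tools.tools])
            result = await session.call_tool("search", {"query": "MCP"})
            print("result:", result.content)

asyncio.run(main())
```

In an integrated host such as the Gemini SDK, this discovery-and-call loop would presumably happen behind the scenes, with the model deciding when to invoke a tool; the sketch just shows the protocol steps a host performs.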