Google is expanding the feature set of its Gemini artificial-intelligence assistant, testing three new modes that aim to make the service more versatile across mobile and workspace products. Agent Mode is designed to let Gemini autonomously plan and complete multi-step tasks, while Gemini Go focuses on collaborative brainstorming and rapid prototyping, potentially linking to Google’s Canvas tools. Immersive View would present answers in richer visual or video formats, extending the visual overviews already offered in NotebookLM. These tests follow a series of recent enhancements, including a memory function that personalizes responses, a “ghost mode” that deletes session history on demand, and a Drive sidebar feature that lets users ask Gemini questions about images such as receipts or contracts. Google is also embedding Gemini Live in the new Pixel 10 smartphones, enabling voice- or camera-based conversations about what the phone sees. The company has not provided a release timetable, but the cluster of trials signals an accelerated push to make Gemini a ubiquitous layer across Google hardware and cloud applications.
Google continues to integrate Gemini everywhere it can, recently giving the chatbot the ability to answer questions about images uploaded to Google Drive. https://t.co/14ozVjHnq2
Have you tried Storybook in the @GeminiApp yet? Whether it's bringing funny group chats to life or creating a special, personalized send-off for a team member, there's so much more than bedtime stories. The possibilities are endless! Available today for Google AI subscribers in https://t.co/Pg0AongLi0
Ooooh would ‘ya look at that?! 👀 Have free-flowing conversations with @GeminiApp on #Pixel10. Instead of typing, share your camera in Gemini Live to ask questions about what you see. The sky’s the limit with Pixel 10: https://t.co/IT1O2OKKTp https://t.co/WFjPAFHz5X