According to information circulating among developers, Google is trialing three experimental modes for its Gemini generative-AI assistant: Agent Mode, Gemini Go, and Immersive View. Agent Mode is designed to plan and complete multi-step tasks autonomously; Gemini Go focuses on collaborative ideation and rapid prototyping; and Immersive View produces visual or video-style responses that extend the system’s existing “Video Overviews” capability. The tests follow a series of Gemini upgrades announced or rolled out in recent days: users on paid Workspace plans can now ask Gemini about images stored in Google Drive, a new memory system lets the assistant personalize its answers, and a “ghost” option deletes individual conversations. Google has also introduced the Gemini 2.5 Flash image model to speed up image generation. No release timetable has been given, but the expanding feature set underscores Google’s strategy of embedding Gemini deeper into Workspace and consumer services as competition with OpenAI, Microsoft, and Anthropic intensifies.
🤖 AI agents that aren’t just workflows: Auto Agents remember, learn, and evolve entirely on-chain. 🔮 Automated trading and personal AI assistants are just the beginning. How could truly autonomous agents with permanent, verifiable memory reshape your industry? 🫵 You push the https://t.co/XAghDeZf7o
Building an AI agent used to take millions of dollars and months of effort. Not anymore. An AI tool now builds agents, outcodes 90% of humans, runs 24/7, never quits, and costs less than a Netflix subscription. 10 examples of what it can do:
🚨Breaking: This AI tool just changed app building forever. We typed our idea in plain English → 14 minutes later, it was live, tested, and working. → No code. → No team. → No broken builds. The future of software is here: