I saw a lot of AI-driven smart glasses at CES, but these are among the nicest in the "lightweight" group, which have green-and-black screens in their optics. Have you gotten a pair yet? I find I prefer just using my iPhone and AirPods to talk with LLMs when I'm walking around,… https://t.co/uOuCAnqBFN
The future of AR and smart glasses will be driven by AI and LLMs! Really cool demo by @brilliantlabsAR, integrating the @GoogleDeepMind Gemini Live API into their glasses as a real-time AI assistant. 👀 💡Combine smart glasses with LLMs for a real-time assistant, translating text… https://t.co/57Savs8oMr
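For developers wondering what an integration like this looks like in practice, here is a minimal sketch of a text-only loop against the Gemini Live API using the google-genai Python SDK. The model ID, placeholder API key, and prompt are assumptions, and the SDK surface may have shifted since the Gemini 2.0 launch; a glasses integration like the Brilliant Labs demo would stream audio and camera frames rather than typed text.

```python
# Minimal sketch: a text-only real-time session with the Gemini Live API.
# Assumes the google-genai Python SDK; method names may differ by SDK version.
import asyncio
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")   # placeholder key (assumption)

MODEL_ID = "gemini-2.0-flash-exp"               # Live-capable model ID (assumption)
CONFIG = {"response_modalities": ["TEXT"]}      # a glasses app would use audio/video

async def main():
    # Open a bidirectional streaming session with the Live API.
    async with client.aio.live.connect(model=MODEL_ID, config=CONFIG) as session:
        # Send one user turn; on glasses this would be mic audio or a camera frame.
        await session.send(input="What am I looking at?", end_of_turn=True)
        # Print partial responses as they stream back.
        async for response in session.receive():
            if response.text:
                print(response.text, end="")

asyncio.run(main())
```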
👇Gemini 2.0 for multimodal experiences on smart glasses! https://t.co/0WYvHyEQei
Google has launched Gemini 2.0 Pro, designed to handle complex tasks with a 2-million-token context window. The release is part of a broader push to integrate AI across YouTube, Search, and Maps, positioning it as a competitor to OpenAI's offerings. Separately, Vuzix Corporation has introduced AugmentOS, an operating system for AI-powered assistance on smart glasses aimed at frontline workers, which is expected to transform fieldwork by providing real-time support. Vuzix is collaborating with Mentra to accelerate developer adoption in the smart glasses sector. Together, Gemini 2.0 and AugmentOS highlight the growing intersection of AI and augmented reality.
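As a hedged illustration of what a 2-million-token context window enables for scenarios like frontline support, the sketch below sends a large document plus a question to Gemini 2.0 Pro through the google-genai Python SDK. The model ID, file name, and prompt are assumptions for illustration, not details taken from the announcements above.

```python
# Sketch: a single long-context request to Gemini 2.0 Pro via the google-genai SDK.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")   # placeholder key (assumption)

# A 2M-token window can hold very large documents (manuals, codebases) in one request.
with open("field_manual.txt", "r", encoding="utf-8") as f:
    manual = f.read()

response = client.models.generate_content(
    model="gemini-2.0-pro-exp-02-05",           # experimental Pro model ID (assumption)
    contents=[manual, "Summarize the safety checklist for field technicians."],
)
print(response.text)
```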