Elon Musk-backed xAI debuts its first multimodal model, Grok-1.5V https://t.co/t60QB2WWJl
Got a taste of @Tesla's FSD v12.3.4 last night. By no means flawless, but the human-like driving maneuvers (with no interventions) delivered a magical experience. Excited to witness the recipe of scaling law and data flywheel for full autonomy show signs of life in real products.…
Tesla FSD v13 will likely be grokking language tokens. What excites me the most about Grok-1.5V is the potential to solve edge cases in self-driving. Using language for "chain of thought" will help the car break down a complex scenario, reason with rules and counterfactuals, and… https://t.co/kZtbVFrIdG

The recent announcement of Grok-1.5V, a multimodal AI model from xAI, the company backed by Elon Musk, signals a major step toward improving Full Self-Driving (FSD) capabilities in autonomous vehicles. The model, which approaches the performance of GPT-4V, handles examples ranging from memes to spatial understanding, showcasing its versatility. Users have reported significant improvements in FSD version 12.3.4, citing zero disengagements over 150 km and greater confidence in complex scenarios such as parking lots. Another user highlighted a flawless 52-minute drive in Houston with 0 reports and 0 interventions. Integrating Large Language Models (LLMs) like Grok-1.5V into autonomous driving aims to address edge cases more effectively by employing a "chain of thought" process to reason through complex scenarios. The development has sparked excitement in the AI and automotive communities, as it promises to bring fully autonomous vehicles closer to reality.
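To make the "chain of thought" idea concrete, here is a minimal sketch of how such a prompt might decompose a driving edge case into explicit reasoning steps before a maneuver is chosen. The scenario, the rules, and the prompt structure are illustrative assumptions, not xAI's or Tesla's actual implementation:

```python
def build_cot_prompt(scene_description, rules):
    """Assemble a hypothetical chain-of-thought prompt that asks a
    multimodal LLM to break a scene down, apply traffic rules, and
    weigh counterfactuals before committing to a maneuver."""
    steps = [
        "1. List every agent and object in the scene.",
        "2. State which traffic rules apply to each agent.",
        "3. Consider counterfactuals: what if an agent behaves unexpectedly?",
        "4. Choose the safest maneuver and justify it.",
    ]
    rule_text = "\n".join(f"- {r}" for r in rules)
    return (
        f"Scene: {scene_description}\n"
        f"Known rules:\n{rule_text}\n"
        "Reason step by step:\n" + "\n".join(steps)
    )

# Example edge case: an obstruction forcing a decision between waiting
# and crossing a lane line while a cyclist approaches.
prompt = build_cot_prompt(
    "A delivery truck is double-parked; a cyclist approaches in the bike lane.",
    ["Yield to cyclists in the bike lane",
     "Do not cross a solid line unless it is safe"],
)
print(prompt)
```

The point of the decomposition is that the model commits to intermediate conclusions (agents, applicable rules, counterfactuals) that can be inspected before the final driving decision, rather than emitting a maneuver directly.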
