Meta Platforms Inc. on Wednesday broadened its artificial-intelligence lineup with the release of V-JEPA 2, a large “world model,” and the announcement of a generative video-editing tool that will be integrated into its consumer applications later this year.

V-JEPA 2 is an open-source, 1.2-billion-parameter model designed to understand and predict how objects move in three-dimensional space. Meta says the system learns from unlabeled video and enables robots, industrial machines and self-driving cars to plan and act in unfamiliar environments without additional task-specific training.

Separately, the company will roll out an AI-powered video-editing feature across the Meta AI app, its website and the standalone Edits app. The service will let users apply text prompts and more than 50 preset styles to clips of up to 10 seconds, with access offered free for a limited time. By open-sourcing advanced research while embedding consumer-facing AI tools into Facebook, Instagram and WhatsApp, Meta aims to attract developers and keep users engaged as competition in generative AI intensifies with rivals such as Alphabet and Snap.
Tired of endless WhatsApp conversations? The app will summarize them for you https://t.co/vlmal3JrCb
WhatsApp debuts its most controversial feature yet: summaries of private chats and groups 👇 https://t.co/RGHsvobf1I
Meta AI Releases V-JEPA 2: Open-Source Self-Supervised World Models for Understanding, Prediction, and Planning. Meta AI has released V-JEPA 2, an open-source video world model designed to learn from large-scale unlabeled video data using a self-supervised joint-embedding predictive architecture. https://t.co/fAEQsEZFUk
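The joint-embedding predictive recipe referenced above works roughly as follows: encode a masked "context" view of a video, predict the representations of the hidden regions, and compare those predictions against embeddings from a slowly updated target encoder rather than against raw pixels. The sketch below is a minimal illustration of that general idea, assuming a toy patch encoder, predictor, masking scheme and hyperparameters invented for this example; it is not V-JEPA 2's actual implementation.

```python
# Conceptual sketch of a joint-embedding predictive (JEPA-style) training step.
# All module names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Toy video-patch encoder standing in for a ViT-style backbone."""
    def __init__(self, patch_dim=768, embed_dim=256):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(patch_dim, embed_dim), nn.GELU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, patches):          # patches: (batch, num_patches, patch_dim)
        return self.proj(patches)        # embeddings: (batch, num_patches, embed_dim)

class TinyPredictor(nn.Module):
    """Predicts target-region embeddings from the context embeddings."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.GELU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, context_embed):
        return self.net(context_embed)

def jepa_step(encoder, target_encoder, predictor, patches, mask, optimizer, ema=0.996):
    """One self-supervised step: predict embeddings of masked patches (not pixels),
    then update the target encoder as an exponential moving average of the encoder."""
    context = patches * (~mask).unsqueeze(-1)          # hide target regions from the context view
    pred = predictor(encoder(context))                 # predicted embeddings at every position
    with torch.no_grad():
        target = target_encoder(patches)               # embeddings of the full, unmasked clip
    loss = F.smooth_l1_loss(pred[mask], target[mask])  # regress embeddings only at masked positions

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # The target encoder trails the online encoder, which stabilizes the regression targets.
    with torch.no_grad():
        for p_t, p_o in zip(target_encoder.parameters(), encoder.parameters()):
            p_t.mul_(ema).add_(p_o, alpha=1 - ema)
    return loss.item()

if __name__ == "__main__":
    enc, tgt, pred = TinyEncoder(), TinyEncoder(), TinyPredictor()
    tgt.load_state_dict(enc.state_dict())
    opt = torch.optim.AdamW(list(enc.parameters()) + list(pred.parameters()), lr=1e-4)
    patches = torch.randn(2, 16, 768)                  # fake batch: 2 clips, 16 patch tokens each
    mask = torch.rand(2, 16) < 0.5                     # boolean mask marking target positions
    print(jepa_step(enc, tgt, pred, patches, mask, opt))
```

Because the loss is computed in embedding space, no labels or pixel reconstruction are needed, which is what lets models of this kind train on large volumes of unlabeled video.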