Liquid AI has launched LFM2-VL, a new generation of vision-language foundation models designed for efficient deployment on a wide range of edge devices, including smartphones, laptops, wearables, and embedded systems. The models come in two sizes, 450 million and 1.6 billion parameters, and use SigLIP2 NaFlex encoders to process images at native resolutions and aspect ratios without distortion. Liquid AI reports up to twice the GPU inference speed of comparable vision-language models while maintaining competitive accuracy. The weights are open and available for download on Hugging Face under an Apache 2.0-based license (a usage sketch follows this summary).

In related wearable-technology news, HTC introduced the Vive Eagle smart glasses, which use AI to assist low-vision users and emphasize privacy by not collecting personal data, in contrast with Meta's offerings. Samsung is reportedly planning to release display-free smart glasses next year, similar to Meta's Ray-Ban smart glasses.

Separately, Google introduced Gemma 3 270M, a compact open model with 270 million parameters, noted for fast instruction following and ease of fine-tuning, setting new performance benchmarks for its size.
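As a concrete illustration of the open-weight release, here is a minimal Python sketch for running LFM2-VL through Hugging Face transformers. The repo id `LiquidAI/LFM2-VL-450M`, the `AutoModelForImageTextToText` class, and the chat-template call are assumptions based on the standard transformers vision-language workflow, not details confirmed by the announcement.

```python
# Minimal sketch: image captioning with LFM2-VL via transformers.
# Assumes the Hugging Face repo id "LiquidAI/LFM2-VL-450M" and a recent
# transformers release with AutoModelForImageTextToText support.
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image

model_id = "LiquidAI/LFM2-VL-450M"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)

# Build a chat-style request with one image and one question.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": Image.open("photo.jpg")},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```

Because the encoder handles native resolutions, the same call should work without manually resizing or padding the input image.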
HTC has just announced a new Vive wearable, but it’s not another VR headset – instead it’s a pair of stylish AI glasses. https://t.co/C5CkI3LJMU
wait, did google just drop the smallest VLM/LLM out there??? https://t.co/610PuHWuuy
The new Gemma 3 270M is here https://t.co/ioFM9WCrU9
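Given the model's emphasis on fast instruction following, a short sketch of trying it through the transformers text-generation pipeline may be useful. The repo id `google/gemma-3-270m-it` is an assumption for the instruction-tuned variant, and the chat-style pipeline call assumes a recent transformers release.

```python
# Minimal sketch: instruction following with Gemma 3 270M.
# Assumes the Hugging Face repo id "google/gemma-3-270m-it" (instruction-tuned
# variant) and a transformers version that accepts chat messages in pipelines.
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-3-270m-it")
messages = [
    {"role": "user", "content": "Summarize why small models suit on-device use."}
]
result = generator(messages, max_new_tokens=80)
# The pipeline returns the full conversation; print the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```

At 270 million parameters, the same checkpoint is small enough to fine-tune on a single consumer GPU, which is the ease-of-fine-tuning point noted above.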