The Allen Institute for AI (AI2) has released OLMo 2, a new family of open-source language models in 7-billion and 13-billion parameter sizes, trained on up to 5 trillion tokens. Alongside it, SmolVLM, a 2.25-billion-parameter vision language model, stands out for its efficiency: it requires only 5GB of GPU RAM and can be fine-tuned on Google Colab's free tier. Other notable releases include TÜLU 3, a fully open post-trained language model reported to outperform several proprietary models, and M-LongDoc, a benchmark and framework aimed at improving AI's ability to process long documents. The latest version of Transformers.js has also launched, bringing advanced model capabilities to browser-based AI applications. Together, these releases reflect the ongoing push in AI toward both stronger performance and broader accessibility.
🚀 Mind-blown by SmolVLM - a tiny but mighty vision language model! ✨
Key specs:
- 2.25B parameters
- Only 5GB GPU RAM needed
- Apache 2.0 license
- Fine-tunable on Google Colab free tier
#AI #MachineLearning https://t.co/VZUyYPyJft
We just released Transformers.js v3.1 and you're not going to believe what's now possible in the browser w/ WebGPU! 🤯 Let's take a look: 🔀 Janus from @deepseek_ai for unified multimodal understanding and generation (Text-to-Image and Image-Text-to-Text) 👁️ Qwen2-VL from… https://t.co/HJ14LXz9Tk
SmolVLM: a 2B VLM for on-device inference - fine-tune it on Colab, run it on a laptop, and process millions of documents on a consumer GPU. --- Newsletter https://t.co/lLfwtmvXkM More story https://t.co/yFb3Ds4tXm LinkedIn https://t.co/FC5hpfOlxr #AINewsClips #AI #ML #ArtificialIntelligence #MachineLearning https://t.co/nxtGObhn1Z