
Hugging Face has introduced SmolLM, a series of state-of-the-art small language models designed for on-device deployment. The series includes models with 135M, 360M, and 1.7B parameters, optimized to run efficiently on mobile devices and other local hardware, and they outperform comparable small models such as MobileLLM, Qwen2, and Phi-1.5 in their respective size classes. SmolLM is fully open: the models are released under the Apache 2.0 license with datasets and training code available, and they are trained on high-quality data sources such as FineWeb-Edu and Cosmopedia v2. The 135M and 360M variants are trained on roughly 600B tokens and the 1.7B variant on about 1T tokens, using a tokenizer trained on the same corpus. The models can also run locally in the browser using ONNX weights with WebGPU acceleration. This release marks a significant step toward making powerful AI accessible on personal devices without relying on cloud infrastructure.
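As a concrete illustration of cloud-free inference, the minimal Python sketch below loads one of the released checkpoints with the Hugging Face transformers library and generates text entirely on local hardware. The checkpoint name (HuggingFaceTB/SmolLM-360M) and the prompt are illustrative choices; the other sizes and the Instruct variants follow the same pattern.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; the 135M and 1.7B variants load the same way.
checkpoint = "HuggingFaceTB/SmolLM-360M"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)  # runs on CPU by default

# Generate a short continuation without any cloud calls.
inputs = tokenizer("Small language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

For the in-browser path mentioned above, the same checkpoints are available as ONNX weights and can be loaded from JavaScript with transformers.js, which handles WebGPU acceleration where the browser supports it.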










