LLMWare has launched Model Depot, an extensive collection of Small Language Models (SLMs) packaged for Intel PCs, with the aim of making SLMs easier to obtain and deploy across a range of applications. Separately, a recent survey comprehensively reviews SLM architectures, training techniques, and model compression methods, with an emphasis on efficient deployment in resource-constrained environments. Among new Multimodal Large Language Models (MLLMs), Pangea-7B has been introduced and outperforms existing open-source models such as Llama 3.2 11B. Researchers from Shanghai AI Laboratory, Tsinghua University, Nanjing University, Fudan University, and The Chinese University of Hong Kong have developed Mini-InternVL, a series of MLLMs ranging from 1B to 4B parameters that achieves 90% of the performance of larger models while using only 5% of the parameters. Finally, a new tool offers high-speed LLM inference with multi-device support and extensive quantization options for diverse hardware setups.
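To make the Model Depot item concrete, here is a minimal sketch of pulling a catalog-packaged SLM and running it locally with llmware's ModelCatalog interface. The model name "phi-3-gguf" is a placeholder assumption; the actual Model Depot catalog entries may differ.

```python
# Minimal sketch: loading a small language model through llmware's
# ModelCatalog and running a single prompt locally.
# Assumptions: llmware is installed (pip install llmware), and
# "phi-3-gguf" stands in for whichever catalog entry you choose.
from llmware.models import ModelCatalog

# Pull the model from the catalog; weights are downloaded on first use.
model = ModelCatalog().load_model("phi-3-gguf")

# Run a simple inference entirely on the local machine.
response = model.inference("Summarize the advantages of small language models.")
print(response)
```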
https://t.co/h4mH1Q4BMR is an efficient, versatile tool for high-speed large language model (LLM) inference, offering multi-device support and extensive quantization options for seamless deployment on diverse hardware setups. #aitools #topaitools
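The post does not name the tool beyond the shortened link, so the sketch below uses llama-cpp-python purely as a stand-in to show what quantized, device-flexible inference typically looks like. The GGUF file path and the layer-offload setting are assumptions, not details from the post.

```python
# Illustrative stand-in (the linked tool is unnamed above): quantized LLM
# inference with llama-cpp-python, splitting work between CPU and GPU.
# Assumptions: a local 4-bit GGUF file at ./model.Q4_K_M.gguf and a build
# of llama-cpp-python compiled with GPU support.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.Q4_K_M.gguf",  # 4-bit quantized weights
    n_gpu_layers=-1,   # offload all layers to the GPU; use 0 for CPU-only
    n_ctx=2048,        # context window
)

out = llm("Q: What does 4-bit quantization trade off? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Setting n_gpu_layers to an intermediate value splits the layers between GPU and CPU, which is the usual way such tools adapt one quantized model file to very different hardware setups.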
Mini-InternVL: A Series of Multimodal Large Language Models (MLLMs) 1B to 4B, Achieving 90% of the Performance with Only 5% of the Parameters. Researchers from Shanghai AI Laboratory, Tsinghua University, Nanjing University, Fudan University, and The Chinese University of Hong Kong have developed the series. https://t.co/8IuNyScZV9
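For readers who want to try one of these checkpoints, below is a hedged sketch of loading a Mini-InternVL model with Hugging Face transformers. The repo id, the 448-pixel preprocessing, and the chat() signature follow the InternVL family's published usage and are assumptions here, not details from the post above.

```python
# Hedged sketch: running a Mini-InternVL checkpoint via transformers.
# Assumptions: the repo id below, ImageNet-style 448x448 preprocessing,
# and the custom chat() method exposed through trust_remote_code.
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/Mini-InternVL-Chat-2B-V1-5"  # assumed repo id
model = AutoModel.from_pretrained(
    path, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

# ImageNet normalization at the 448px input size the InternVL family uses.
preprocess = transforms.Compose([
    transforms.Resize((448, 448)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406),
                         std=(0.229, 0.224, 0.225)),
])
pixel_values = preprocess(Image.open("example.jpg").convert("RGB"))
pixel_values = pixel_values.unsqueeze(0).to(torch.bfloat16).cuda()

# InternVL models expose a custom chat() method via trust_remote_code.
response = model.chat(tokenizer, pixel_values, "<image>\nDescribe this image.",
                      generation_config=dict(max_new_tokens=128))
print(response)
```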