Llama 3.3 70B is now available in #OpenLLM! It offers improved performance over Llama 3.1 70B and rivals Llama 3.2 90B for text-only applications. In some cases, it even approaches Llama 3.1 405B! 🦙 Try it now: `openllm serve llama3.3:70b` #OpenLLM #OpenSource #LLM #BentoML
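Once the serve command above is running, the model can be queried programmatically. The sketch below is a minimal example, assuming OpenLLM exposes an OpenAI-compatible endpoint on localhost port 3000 (adjust the base URL and port to match your setup); the model name is assumed to match the one used in the serve command.

```python
# Minimal sketch: query a locally served Llama 3.3 70B instance.
# Assumes an OpenAI-compatible API at http://localhost:3000/v1 (adjust if yours differs).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="na")  # key is unused for a local server

response = client.chat.completions.create(
    model="llama3.3:70b",  # assumed to match the name passed to `openllm serve`
    messages=[{"role": "user", "content": "Summarize the Llama 3.3 release in one sentence."}],
)
print(response.choices[0].message.content)
```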
🔥 Introducing `dolphin3.0-llama3.2-1b`! A new LocalAI model designed for general-purpose use cases (coding, math, agentic workflows, function calling). Get it by running: `local-ai run dolphin3.0-llama3.2-1b` #LocalAI #AIModel #GeneralPurpose
🚀 New model alert! Introducing Dolphin3.0-llama3.1-8b, a general-purpose model for coding, math, and more! Check it out in LocalAI by running: `local-ai run dolphin3.0-llama3.1-8b` 🔥 #LocalAI #Dolphin3.0 #AIModel
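Both Dolphin 3.0 models can be used the same way once `local-ai run` has pulled and started them. Here is a minimal sketch, assuming LocalAI's OpenAI-compatible API is listening on localhost port 8080 (its common default) and that the model is addressed by the same name used in the run command.

```python
# Minimal sketch: chat with a Dolphin 3.0 model served by LocalAI.
# Assumes an OpenAI-compatible API at http://localhost:8080/v1 (adjust port if configured differently).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")  # no key required locally

resp = client.chat.completions.create(
    model="dolphin3.0-llama3.1-8b",  # assumed to match the name used with `local-ai run`
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(resp.choices[0].message.content)
```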
Several new AI models have been released recently, expanding the capabilities available for local deployment. Dolphin 3.0 has been introduced in variants built on Llama 3.1 (8B) and Llama 3.2 (1B), targeting general-purpose tasks such as coding, math, agentic workflows, and function calling. Additionally, LocalAI has added the '32b-qwen2.5-kunou-v1' and '14b-qwen2.5-kunou-v1' models, both designed for generalist roleplay applications. Another model, Triangulum-10B, has been released, focusing on multilingual generative tasks and complex reasoning. Furthermore, the Llama 3.3 70B model is now available through OpenLLM, offering improved performance over Llama 3.1 70B and rivaling Llama 3.2 90B for text-only applications. These developments reflect a growing trend toward local AI solutions that give users greater control and versatility.