🚀 New model alert! Introducing "magnum-32b-v1-i1"! This model is designed to replicate the prose quality of Claude 3 Sonnet and Opus, and is fine-tuned on Qwen1.5 32B. Install with: `local-ai run magnum-32b-v1-i1` 🚀🔥 #LocalAI #NewModel
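For anyone trying the model locally, here is a minimal sketch of chatting with it through LocalAI's OpenAI-compatible API. It assumes LocalAI is already running on its default port (8080) and the model was pulled with the command above; the prompt and the placeholder API key are illustrative.

```python
# Minimal sketch: chat with magnum-32b-v1-i1 through LocalAI's
# OpenAI-compatible endpoint. Assumes LocalAI is running locally on the
# default port and the model was installed with `local-ai run magnum-32b-v1-i1`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # LocalAI's OpenAI-compatible API
    api_key="not-needed",                 # LocalAI does not require a key by default
)

response = client.chat.completions.create(
    model="magnum-32b-v1-i1",
    messages=[
        {"role": "user", "content": "Write a short, vivid paragraph about a lighthouse at dusk."},
    ],
    temperature=0.8,
)

print(response.choices[0].message.content)
```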
🤖 You can now chat with our AI Gateway repo, powered by Llama 3.1 405B using @huggingface assistants! Ask questions about the repo, get code snippets to use any LLM, and streamline your AI integrations! Check it out and start building! 👉🏼 https://t.co/1njUEOYGSC https://t.co/p5f794xZ16
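As a rough sketch of the kind of integration the gateway streamlines, the snippet below routes a chat completion through a locally running AI Gateway. The port (8787), route, and `x-portkey-provider` header reflect the gateway's documented OpenAI-compatible interface, but treat them as assumptions and verify the current options against the repo; the model name is illustrative.

```python
# Rough sketch: route a chat completion through a locally running Portkey
# AI Gateway (e.g. started with `npx @portkey-ai/gateway`). Port, route,
# and header names are assumptions -- check the repo's README.
import os
import requests

GATEWAY_URL = "http://localhost:8787/v1/chat/completions"

response = requests.post(
    GATEWAY_URL,
    headers={
        "Content-Type": "application/json",
        "x-portkey-provider": "openai",  # provider the gateway should forward to
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    },
    json={
        "model": "gpt-4o-mini",  # illustrative model name for the chosen provider
        "messages": [{"role": "user", "content": "Summarize what an AI gateway does."}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```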
Chat with LLMs to build GenAI apps using AWS Studio without writing a single line of Python code. Go from a simple English prompt to a fully working AI application in just 2 minutes. https://t.co/dqv7pSBO8o
LocalAI has introduced several new language models, including 'openbuddy-llama3.1-8b-v22.1-131k', 'lumimaid-v0.2-8b', and 'magnum-32b-v1-i1'. These models offer advanced capabilities in multilingual chatbot functionality and high-quality prose generation. The 'openbuddy-llama3.1-8b-v22.1-131k' model can be installed with `local-ai run openbuddy-llama3.1-8b-v22.1-131k`, while 'lumimaid-v0.2-8b', based on Meta-Llama-3.1-8B-Instruct, can be installed with `local-ai run lumimaid-v0.2-8b`. 'magnum-32b-v1-i1', fine-tuned on Qwen1.5 32B to replicate the prose quality of Claude 3 Sonnet and Opus, can be installed with `local-ai run magnum-32b-v1-i1`. Neurai has also updated its webchat models to include Llama 3.1, which it describes as having potential similar to ChatGPT-4. In addition, PortkeyAI's AI Gateway repo can now be chatted with via Hugging Face assistants powered by Llama 3.1 405B, helping users streamline their AI integrations. These updates were announced between July 27 and July 29.