Ollama has announced a significant update that lets users run any GGUF model directly from the Hugging Face Hub. This highly requested feature opens up roughly 45,000 GGUF repositories with no conversion or re-uploading required: users simply point Ollama at the desired repository and run it. The update is especially welcome for local LLM enthusiasts, since it makes models optimized for edge deployment, including recent releases from MistralAI, immediately usable on local machines.
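In practice this is a one-line command: prefix the Hub repository path with `hf.co/`, and Ollama pulls and runs the GGUF weights. A minimal sketch, where the repository name below is just one example of a public GGUF repo:

```
# Run a GGUF model straight from the Hugging Face Hub.
# The path after hf.co/ follows the pattern {username}/{repository};
# bartowski/Llama-3.2-3B-Instruct-GGUF is an illustrative example.
ollama run hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF
```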
Seeing this team ship new features is like 🤯 You can now run *any* of the 45K GGUF models on the Hugging Face Hub directly with Ollama 🤗 https://t.co/GyP0detpsn https://t.co/jChFE7Q9x1
Big day for local LLM fans! - @MistralAI released new models optimized for edge use cases (3B and 8B): https://t.co/k9LTWBtcHs - You can now run any GGUF model from @huggingface on your laptop using Ollama (including VLMs supported by llama.cpp!): https://t.co/DEoGkBejX2
Big Update for Local LLMs! Excited to share that you can now easily use any GGUF model on @huggingface directly with @ollama! Just point to the Hugging Face repository and run it! Here is how to run @AIatMeta Llama 3.2 3B! 😍 1. Find your GGUF weights on the hub, e.g. Llama 3.2… https://t.co/yv9RleM7yx
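Putting the steps from that tweet together, here is a hedged sketch of the full workflow. The repository and quantization tag are illustrative; if no tag is appended, Ollama selects a default quantization from whatever files the repo provides:

```
# 1. Find GGUF weights on the Hub, e.g. a Llama 3.2 3B Instruct repo
#    (bartowski/Llama-3.2-3B-Instruct-GGUF is used here as an example).
# 2. Run it directly, appending :<quantization> to pin a specific file.
ollama run hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```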