DeepSeek R1, an advanced reasoning model, has been integrated into a range of platforms, showcasing its capabilities in coding, mathematical reasoning, and complex problem-solving. Its distilled variant, DeepSeek R1 Distill Llama 70B, is fine-tuned from Llama 3.3 70B and has posted notable results, including 94.5% accuracy on the MATH-500 benchmark and 86.7% on the AIME 2024 competition. The model can be run locally on Mac, Windows, and Linux through Ollama, and is also hosted on Cloudflare Workers AI, where it can tackle math and coding tasks. Its availability on GroqCloud™ and other platforms, such as The Origin AI and Novita AI, further highlights its versatility. Users report that running DeepSeek on Groq is exceptionally fast, likening it to coding at the speed of thought. The model is released under an open-source MIT license and can be accessed through various AI playgrounds, making it accessible to developers and researchers alike.
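For the local route mentioned above, Ollama exposes a simple HTTP API once a distilled R1 tag has been pulled. The TypeScript sketch below is a minimal example, assuming Ollama is running on its default port and that the deepseek-r1:7b tag has been pulled; swap in whichever distill size your hardware can handle.

```typescript
// Minimal sketch: chat with a locally pulled DeepSeek R1 distill via Ollama's REST API.
// Assumes Ollama is running on the default port 11434 and that the model tag
// "deepseek-r1:7b" has already been pulled (`ollama pull deepseek-r1:7b`).
async function askDeepSeek(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-r1:7b",
      messages: [{ role: "user", content: prompt }],
      stream: false, // return a single JSON object instead of a token stream
    }),
  });
  const data = await res.json();
  return data.message.content; // non-streaming /api/chat responses carry the reply here
}

askDeepSeek("Prove that the sum of two even numbers is even.")
  .then(console.log)
  .catch(console.error);
```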
Are you ready for DeepSeek R1 running locally in your freaking browser at 18 tps? Here is a quick run of DeepSeek-R1-Distill-Qwen-1.5B-ONNX in the browser using Surya @sdand's script. It uses @huggingface transformers.js and WebGPU. Links to the repo etc. are in the comments: https://t.co/Lj6fs3Y896
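For reference, here is a minimal sketch of the same idea using transformers.js v3's text-generation pipeline on the WebGPU backend; the onnx-community model id, the q4f16 dtype, and the prompt are assumptions for illustration, not details taken from the linked script.

```typescript
// In-browser sketch: run the 1.5B R1 distill with transformers.js on WebGPU.
// Model id and quantization are assumptions; adjust to the ONNX export you use.
import { pipeline } from "@huggingface/transformers";

const generator = await pipeline(
  "text-generation",
  "onnx-community/DeepSeek-R1-Distill-Qwen-1.5B-ONNX",
  { device: "webgpu", dtype: "q4f16" } // quantized weights keep the download small
);

const messages = [
  { role: "user", content: "What is 17 * 24? Think step by step." },
];

// With chat-style input, generated_text holds the full message list; the last entry is the reply.
const output: any = await generator(messages, { max_new_tokens: 512 });
console.log(output[0].generated_text.at(-1).content);
```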
I added DeepSeek to our AI hedge fund. All you need is a @GroqInc API key. This makes our hedge fund: • more efficient • more affordable • lightning fast ⚡️ I am very excited to use DeepSeek moving forward. Let me know what other LLMs to add. https://t.co/uMTCsaWyui
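A minimal sketch of what the Groq side of such an integration can look like, using the official groq-sdk client; the model id deepseek-r1-distill-llama-70b is GroqCloud's name for the distilled 70B, and the prompt is a placeholder, not something taken from the hedge-fund repo.

```typescript
// Sketch: query the DeepSeek R1 distill hosted on GroqCloud via the official SDK.
// The client reads GROQ_API_KEY from the environment by default.
import Groq from "groq-sdk";

const groq = new Groq();

const completion = await groq.chat.completions.create({
  model: "deepseek-r1-distill-llama-70b",
  messages: [
    {
      role: "user",
      content: "Given rising rates, outline the key risks to a long position in growth stocks.",
    },
  ],
  temperature: 0.6, // DeepSeek recommends moderate temperatures for reasoning tasks
});

console.log(completion.choices[0].message.content);
```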
🚀 DeepSeek R1 Distill Llama 70B is now LIVE on Novita AI, bringing you that sweet ✨32K context + 32K max output✨. Check it out: https://t.co/T7QuyccdyF https://t.co/Icwg3WBY2a
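Assuming Novita AI exposes an OpenAI-compatible endpoint, a hedged sketch with the openai client and a custom baseURL looks like the following; the baseURL, model slug, and max_tokens value are assumptions, so check Novita's documentation for the exact strings.

```typescript
// Hedged sketch: call the R1 Distill Llama 70B hosted on Novita AI through an
// OpenAI-compatible endpoint. The baseURL and model slug below are assumptions.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.NOVITA_API_KEY,
  baseURL: "https://api.novita.ai/v3/openai", // assumed OpenAI-compatible endpoint
});

const response = await client.chat.completions.create({
  model: "deepseek/deepseek-r1-distill-llama-70b", // assumed model slug
  messages: [
    { role: "user", content: "Summarize the proof that sqrt(2) is irrational." },
  ],
  max_tokens: 32000, // matches the announced 32K max output
});

console.log(response.choices[0].message.content);
```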