
The AI community is abuzz with the latest developments around Llama-3, Meta's most advanced AI model to date, which is seeing rapid updates and integrations across platforms and applications. Sider AI's newly released version 4.9 brings Llama-3 to browser extensions for Chrome and Edge. The model has also been optimized to run on a single 4GB GPU, making it accessible for broader use in AI research and development. Other innovations include the ORPO Colab, which simplifies fine-tuning by combining SFT and DPO into a single step; it is reported to make fine-tuning 2x faster, use 80% less VRAM, and support 4x longer contexts. GGUF-format weights for Llama-3-8B have also been released, enabling deployment across multiple platforms. These developments are expected to push the boundaries of what's possible in AI, particularly in fields like machine learning, computer vision, and robotics.
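To make the "SFT and DPO in one step" idea concrete: ORPO (Odds Ratio Preference Optimization) adds an odds-ratio preference penalty directly to the standard supervised fine-tuning loss, so no separate reference model or second training stage is needed. Below is a minimal numerical sketch of that combined objective, assuming length-normalized average log-probabilities for the chosen and rejected responses as inputs; the function name and the weighting parameter `lam` are illustrative, not taken from any particular library.

```python
import math

def orpo_loss(nll_chosen, avg_logp_chosen, avg_logp_rejected, lam=0.1):
    """Sketch of the ORPO objective: SFT loss + lambda * odds-ratio penalty.

    nll_chosen        -- standard SFT negative log-likelihood of the chosen response
    avg_logp_chosen   -- length-normalized log-probability of the chosen response
    avg_logp_rejected -- length-normalized log-probability of the rejected response
    """
    def log_odds(avg_logp):
        # odds(y) = p / (1 - p), with p the length-normalized sequence probability
        p = math.exp(avg_logp)
        return avg_logp - math.log(1.0 - p)

    # Preference term: -log sigmoid(log odds_chosen - log odds_rejected),
    # small when the model already prefers the chosen response
    log_or = log_odds(avg_logp_chosen) - log_odds(avg_logp_rejected)
    l_or = -math.log(1.0 / (1.0 + math.exp(-log_or)))

    # Single combined objective: plain SFT plus the weighted preference penalty
    return nll_chosen + lam * l_or
```

With `lam=0` this degenerates to ordinary SFT, and a larger `lam` pushes the model harder to separate chosen from rejected responses, which is how one pass of training covers what SFT followed by DPO would otherwise do in two.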

















Boom! The open source local Llama-3 8B with a context length of over 1M is a massive game changer for local AI on your devices. The testing I have done is astonishing: I gave it a large code base to optimize and it was brilliant. More soon. Link: https://t.co/uI0zm0Se7W
We've been in the kitchen cooking 🔥 Excited to release the first @AIatMeta LLama-3 8B with a context length of over 1M on @huggingface - coming off of the 160K context length model we released on Friday! A huge thank you to @CrusoeEnergy for sponsoring the compute. Let us know… https://t.co/iZ9zcKzOc6