
Mixtral 8x22B, a new open-source base LLM, has been released and is drawing attention for its performance and versatility. It is not an instruction-tuned model: it is intended for fine-tuning rather than direct instruction-style prompting, so users are advised to include a few examples of the desired behavior in the prompt for better results. The model has been praised for strong scores on the IFEval and BBH benchmarks and can be tried for free on platforms like OctoAI, which offers a free trial. It has also been incorporated into the Zephyr 141B model, built in collaboration with Argilla.io and KAIST AI, extending its usefulness for custom model development.
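Because it is a base model, the practical prompting pattern is few-shot completion: show the desired input/output format a couple of times and let the model continue the text. Below is a minimal sketch of that pattern, assuming the Hugging Face checkpoint id mistralai/Mixtral-8x22B-v0.1, the transformers library, and enough GPU memory for a model this size; the sentiment-labeling task is purely illustrative and not from the posts above.

```python
# Minimal sketch: few-shot prompting a base (non-instruct) model.
# Assumes the Hugging Face checkpoint "mistralai/Mixtral-8x22B-v0.1";
# swap in whichever smaller checkpoint or hosted endpoint you actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A base model completes text rather than following instructions, so we
# demonstrate the desired behavior with examples and let it continue.
prompt = (
    "Review: The battery dies in an hour.\nSentiment: negative\n\n"
    "Review: Setup took thirty seconds, flawless.\nSentiment: positive\n\n"
    "Review: Screen is sharp but the speakers are tinny.\nSentiment:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=3, do_sample=False)

# Print only the newly generated tokens, i.e. the model's continuation.
new_tokens = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

The same prompt format works against any hosted completion endpoint; the key point is supplying exemplars rather than bare instructions.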
Big day for unexpectedly powerful LLM releases. Microsoft's open-source WizardLM 2 (also note that it used synthetic inputs in training; maybe "running out of data" won't be a big deal): https://t.co/EDq935uguF Closed-source Reka, which is multimodal: https://t.co/k2B81h9vv0 https://t.co/1Lw4RB2gr3
The AI refining the AI's post-AI training: this space is fascinating... "We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning, and agent tasks. New family includes…" https://t.co/ITCriqeTsp
WizardLM May Be the New SOTA Open-Source LLM. Sure, there are many new LLMs, but this one deserves a mention, as it may be the top open-source model at the moment. WizardLM-2 8x22B may be in the top 5-6 LLMs overall, behind only Claude and GPT-4. The MT-Bench score of 9.12 is very… https://t.co/1aL6A2uYQo