
The tech and AI research community is abuzz with the announcement of the new flagship 7B model in the Hermes series, Nous-Hermes 2 Mistral 7B DPO. The model, developed by Nous Research and sponsored by @fluidstackio, was trained with DPO (Direct Preference Optimization) on top of the original OpenHermes 2.5 7B model and shows marked improvements across a range of benchmarks, including AGIEval, BigBench Reasoning, GPT4All, and TruthfulQA. Versions for MLX, Apple's machine-learning framework, have also been released, and Teknium of Nous Research introduced the model publicly, highlighting the collaborative effort behind it. The model is additionally available for download on Shoggoth and has been listed on the Together API, indicating rapid adoption.
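For readers unfamiliar with the training method mentioned above: DPO optimizes a policy model directly from preference pairs (a chosen and a rejected response), scoring each by how much more the policy favors it than a frozen reference model does. The per-example loss can be sketched in plain Python; the function name and inputs here are illustrative, not part of any Nous Research codebase.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Per-example DPO loss, given summed log-probabilities of the chosen
    and rejected responses under the policy (pi_*) and the frozen
    reference model (ref_*). beta scales the implicit KL penalty."""
    # Margin: how much more the policy prefers chosen over rejected,
    # relative to the reference model's own preference.
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # -log(sigmoid(x)) == log(1 + exp(-x)); minimized as the margin grows.
    return math.log1p(math.exp(-logits))

# Positive margin (policy favors the chosen response more than the
# reference does) drives the loss below log 2, the zero-margin value.
loss = dpo_loss(pi_chosen=-4.0, pi_rejected=-9.0,
                ref_chosen=-5.0, ref_rejected=-8.0)
```

At zero margin the loss equals log 2; training pushes the margin positive, which is what distinguishes DPO from plain supervised fine-tuning on the chosen responses alone.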
Together has listed Hermes 2 7B DPO on their API now: https://t.co/IwbR6z68S8 https://t.co/KcckzjeQ8G
Download Nous Hermes 2 Mistral 7B DPO (Q4_K_M) on Shoggoth: https://t.co/thmCHZDCdr model by Nous Research (@NousResearch) https://t.co/VrVDeIWCPN
Introducing our DPO'd version of the original OpenHermes 2.5 7B model - Nous-Hermes 2 Mistral 7B DPO! This model improved significantly on AGIEval, BigBench, GPT4All, and TruthfulQA compared to the original Hermes model, and is our new flagship 7B model! We at Nous are finding… https://t.co/OcdVwbuDbG https://t.co/oYa7K5YkuX
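For anyone pulling the weights from Shoggoth or another host, the Hermes 2 model cards document ChatML as the prompt format, with a system turn for steering behavior. A minimal sketch of rendering a conversation into that format (the helper function and example messages are illustrative):

```python
def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts into a ChatML string,
    the prompt format the Nous-Hermes 2 models are trained on."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are Hermes 2, a helpful assistant."},
    {"role": "user", "content": "Explain DPO in one sentence."},
])
```

In practice, tokenizers shipped with a chat template can produce the same string automatically, but knowing the raw format helps when wiring the model into an API like Together's.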




