
The Mixture-of-Agents (MoA) framework enhances the capabilities of large language models (LLMs) by leveraging the collective strengths of multiple open-source models. It uses a layered architecture in which each agent refines its response using the outputs produced by the agents in the preceding layer. The Together MoA reference setup, built entirely from open-source LLMs, achieves a score of 65.1% on the AlpacaEval 2.0 benchmark, surpassing the previous leader, GPT-4 Omni (GPT-4o), which scored 57.5%, a margin of roughly 7.6 percentage points. MoA also demonstrates superior performance on other benchmarks such as MT-Bench and FLASK, and the framework is notable for its cost efficiency, outperforming proprietary models while relying only on open-source LLMs.
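
To make the layered flow concrete, here is a minimal Python sketch of how such a pipeline could be wired up. The `query_model` helper, the prompt template, and the model lists are illustrative assumptions for this sketch, not the reference implementation.

```python
# Minimal sketch of the MoA layered flow. The model names, prompt template,
# and the query_model() helper are placeholders, not part of the original
# MoA code release.

AGGREGATE_PROMPT = (
    "You have been provided with responses from several models to the "
    "user's query. Synthesize them into a single, higher-quality answer.\n\n"
    "Responses:\n{responses}\n\nQuery: {query}"
)

def query_model(model: str, prompt: str) -> str:
    """Placeholder for a call to an open-source LLM (e.g. via an API client)."""
    raise NotImplementedError

def mixture_of_agents(query: str,
                      layers: list[list[str]],
                      aggregator: str) -> str:
    """Run the query through successive layers of proposer agents, then have
    a final aggregator model synthesize the last layer's outputs."""
    previous_responses: list[str] = []
    for layer_models in layers:
        current_responses = []
        for model in layer_models:
            if previous_responses:
                # Agents after the first layer see the prior layer's outputs
                # and refine them rather than answering from scratch.
                prompt = AGGREGATE_PROMPT.format(
                    responses="\n---\n".join(previous_responses), query=query)
            else:
                prompt = query
            current_responses.append(query_model(model, prompt))
        previous_responses = current_responses
    # Final aggregation step over the last layer's refined responses.
    final_prompt = AGGREGATE_PROMPT.format(
        responses="\n---\n".join(previous_responses), query=query)
    return query_model(aggregator, final_prompt)
```

In use, `layers` would hold the identifiers of several open-source proposer models per layer and `aggregator` a single strong model that produces the final answer; the exact models and number of layers are configuration choices rather than anything fixed by the framework.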



Mixture of Agents is a new technique that lets you combine and leverage the strengths of multiple open-source LLMs to match GPT-4 performance. Good to see open AI research alive and well, even after the closed labs have made a serious attempt at killing it lately!
Combining Different Strengths To Harness The Collective Power of LLMs! The power of open source is that the community invents different techniques that build on top of each other! Mixture of Agents (MoA) is a novel approach that adopts a layered architecture, with each layer… https://t.co/EhAyKEAL6Q
New Mixture-of-Agents architecture lets open source beat GPT-4o, as covered by @MatthewBerman https://t.co/RFCfrevfPu