
The MTEB Arena, the first-ever platform dedicated to evaluating embedding models, has been launched. The arena lets users dynamically test embedding models and vote on their output quality, which has shown surprising variance across models despite similar MTEB scores. It features 15 models from prominent organizations, including OpenAI, Google, Cohere, Voyage AI, Jina AI, SF Research, and Nomic AI, competing on three tasks: retrieval, clustering, and semantic textual similarity (STS). The project is a collaborative effort led by Muennighoff, with significant contributions from Vaibhav Adlakha and Siva Reddy, and is supported by sponsorships from organizations such as ServiceNow Research. The platform aims to strengthen evaluation practices in the embeddings field and to promote open science.
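To make the STS task concrete: embedding models map text to vectors, and two models can rank the same text pair very differently, which is what arena-style voting surfaces. Below is a minimal sketch of how such a comparison is scored with cosine similarity; the embedding values here are made-up toy numbers, not output from any of the models in the arena.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings of the same text pair from two different models.
# In practice these vectors come from each model's encoder and are much longer.
model_a = {"text1": [0.1, 0.9, 0.2], "text2": [0.2, 0.8, 0.1]}
model_b = {"text1": [0.9, 0.1, 0.0], "text2": [0.1, 0.2, 0.9]}

score_a = cosine_similarity(model_a["text1"], model_a["text2"])
score_b = cosine_similarity(model_b["text1"], model_b["text2"])
# The two models disagree sharply on how similar the pair is,
# which is exactly the kind of variance side-by-side voting exposes.
```

A human vote then judges which model's similarity ranking better matches intuition, rather than relying on a fixed benchmark score alone.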
Lots of great open embedding models, let's figure out the best one! https://t.co/9FstcFKhDu
We’re thrilled to support the MTEB community with this new Arena for Embedding Models. Congratulations to everyone involved, and thank you @sivareddyg and @vaibhav_adlakha for your contributions. Open Science for the win! https://t.co/iSZyUCyVPq
Do you want to dynamically test the quality of embeddings? Introducing MTEB Arena, where the best of the best wins :). Please vote on the best models. Incredible effort by @Muennighoff and the MTEB team. Thanks to sponsorships for open science by many including @ServiceNowRSRCH https://t.co/Kdgz7twyje