Qwen3-Embedding is in another league! It produces the best results I have seen from an open model. I tested the 0.6B and it is more than good enough already! https://t.co/WX5P082HEM
Let's go! 😍 @Alibaba_Qwen just released Qwen3-Embedding, a new series of embedding models: 🏆 SOTA performance on MMTEB, MTEB, and MTEB-Code 📏 Three different sizes (0.6B / 4B / 8B) 🌍 Multilingual (119 languages) 💻 Can run in-browser w/ Transformers.js (+ WebGPU acceleration) https://t.co/AZYNStjRaH
I've just integrated the new benchmark-topping Qwen3 Embedding models into Sentence Transformers, and thus also @LangChainAI, @llama_index, @deepset_ai Haystack, and more! Details in 🧵 https://t.co/PLHVCpOcVY
Alibaba's Qwen team has launched the Qwen3-Embedding and Qwen3-Reranker series, advancing multilingual text embedding and reranking through a multi-stage training pipeline. The models support 119 languages and achieve state-of-the-art results on benchmarks such as MTEB, MMTEB, and MTEB-Code, outperforming leading proprietary models like Gemini. The Qwen3-Embedding series comes in three sizes (0.6B, 4B, and 8B parameters) and can even run in-browser with Transformers.js and WebGPU acceleration. The models are already integrated into popular frameworks including Sentence Transformers, LangChain, LlamaIndex, and Haystack. Early tests suggest that even the smallest 0.6B model delivers highly competitive results for an open model.
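To make the Sentence Transformers integration concrete, here is a minimal sketch of loading the smallest model and scoring queries against documents. The model ID (Qwen/Qwen3-Embedding-0.6B) and the query-side prompt_name convention follow the Hugging Face model card as published; verify both against the card and your installed sentence-transformers version.

```python
# Minimal sketch: Qwen3-Embedding via Sentence Transformers (>= 3.0 for
# the similarity() helper). The 4B and 8B variants load the same way.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

queries = ["What is the capital of China?"]
documents = [
    "Beijing is the capital of China.",
    "Gravity causes objects to fall toward the Earth.",
]

# Per the model card, Qwen3-Embedding recommends an instruction prompt on
# the query side only; Sentence Transformers exposes it via prompt_name.
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)

# Cosine-similarity matrix: rows are queries, columns are documents.
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
```

The same embeddings flow into LangChain, LlamaIndex, or Haystack through their Sentence Transformers backends, which is why the single integration above unlocks the whole ecosystem mentioned in the tweet.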