
Researchers at Tsinghua University have shown that contrastive fine-tuning substantially improves the text embeddings produced by smaller language models. The 2024 study by T. Ukarapol, Z. Lee, and A. Xin evaluates the method on models such as MiniCPM, Phi-2, and Gemma, reporting an average performance gain of 56.33% across various benchmarks. The results suggest that smaller models can deliver strong natural language understanding without the extensive resource requirements of larger models.
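To make the approach concrete, below is a minimal sketch of contrastive fine-tuning for embeddings, not the paper's exact recipe: the choice of Phi-2 as the backbone, mean pooling over hidden states, the in-batch InfoNCE loss, the temperature, and the toy training pairs are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Assumed backbone: any small decoder-only LM works for this sketch;
# the paper's exact models, pooling, and hyperparameters may differ.
MODEL_NAME = "microsoft/phi-2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModel.from_pretrained(MODEL_NAME)

def embed(texts):
    """Mean-pool the final hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state               # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()    # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)

def info_nce_loss(anchors, positives, temperature=0.05):
    """In-batch contrastive (InfoNCE) loss: each anchor's positive sits on
    the diagonal; the other positives in the batch act as negatives."""
    a = F.normalize(embed(anchors), dim=-1)
    p = F.normalize(embed(positives), dim=-1)
    logits = a @ p.T / temperature                          # (B, B) cosine sims
    labels = torch.arange(len(anchors))
    return F.cross_entropy(logits, labels)

# One illustrative fine-tuning step on hypothetical positive pairs.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss = info_nce_loss(
    ["how do I reset my password?", "what is contrastive learning?"],
    ["steps to recover a forgotten password", "contrastive learning explained"],
)
loss.backward()
optimizer.step()
```

In this setup, pulling each anchor toward its paired text while pushing it away from the rest of the batch is what sharpens the embedding space; larger batches supply more negatives and typically tighten the effect.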