Recent research from the Beijing Academy of Artificial Intelligence, by C Li, M Qin, S Xiao, and J Chen, introduces a text embedding model that leverages the in-context learning capabilities of large language models (LLMs): supplying a few task demonstrations alongside the query lets a single model produce high-quality embeddings adapted to the task at hand, with strong results on both previously seen and novel tasks. Separately, a comprehensive survey of small language models (SLMs) in the 100 million to 5 billion parameter range analyzes 59 state-of-the-art open-source SLMs, evaluating their reasoning, in-context learning, mathematics, and coding abilities, and drawing out insights on their performance and innovations across architectures, training datasets, and training algorithms.
🏷️:Making Text Embedders Few-Shot Learners 🔗:https://t.co/YT76lcIcZp https://t.co/ks0vQfpRJq
[IR] Making Text Embedders Few-Shot Learners C Li, M Qin, S Xiao, J Chen... [Beijing Academy of Artificial Intelligence] (2024) https://t.co/ZOnyfeP0Hf https://t.co/Xuj60yxJBz
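A minimal sketch of the few-shot embedding idea described above, assuming a Hugging Face-style decoder checkpoint: task demonstrations are prepended to the query, the LLM encodes the full prompt, and the final hidden state is pooled into an embedding vector. The model name, prompt template, and last-token pooling below are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch: in-context (few-shot) text embedding with an LLM-based embedder.
# The checkpoint name, prompt format, and pooling strategy are assumptions
# for illustration; consult the paper/model card for the actual recipe.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "BAAI/bge-en-icl"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def embed(task: str, examples: list[tuple[str, str]], query: str) -> torch.Tensor:
    # Build an in-context prompt: a task instruction, then (query, response)
    # demonstrations, then the query whose embedding we want.
    parts = [f"Instruct: {task}"]
    for demo_query, demo_response in examples:
        parts.append(f"Query: {demo_query}\nResponse: {demo_response}")
    parts.append(f"Query: {query}")
    prompt = "\n\n".join(parts)

    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, dim)
    # Last-token pooling: use the final token's hidden state as the embedding.
    vec = hidden[0, -1]
    return torch.nn.functional.normalize(vec, dim=-1)
```

Passing an empty `examples` list reduces the same template to zero-shot embedding, which is consistent with the claim above that one model can serve both familiar and novel tasks.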
Small Language Models: Great survey on small language models (SLMs) across architectures, training datasets, and training algorithms. Analyzes 59 state-of-the-art open-source SLMs and capabilities such as reasoning, in-context learning, maths, and coding. Other discussions… https://t.co/VmANsr7X9F