LiquidAI has launched LFM-7B, a new language model with multilingual support for English, Arabic, and Japanese. The model is built on the company's Liquid Foundation Models (LFM) framework, designed for lightweight inference and high performance. LFM-7B is noted for strong multilingual chat, particularly in Arabic and Japanese, and the development team describes it as the product of their best post-training work to date. It posts strong evaluations across diverse tasks, handles long contexts at low memory cost, and supports edge-device and on-premises deployment. Notably, LFM-7B uses a non-transformer architecture and achieves the highest Elo score among models in the 7-8 billion parameter range while using fewer parameters than comparable models.
New LLM by @LiquidAI_ (MIT spinoff): LFM-7B uses a non-transformer architecture and achieved the highest Elo score among 7-8B models while using fewer parameters. It's also multilingual. Does anyone have knowledge of non-transformer architectures? It's interesting to me; curious about the… https://t.co/hJqnZWTaSK
From code to conversation: Localized AI fun with LM Studio and the ellmer package https://t.co/EpZ7vvtkXS
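For anyone wanting to try this locally: LM Studio exposes an OpenAI-compatible server (by default at http://localhost:1234/v1), and ellmer's chat_openai() can be pointed at it by overriding the base URL. Below is a minimal sketch in R, assuming LM Studio's server is running with a model already loaded; the model id and placeholder API key are illustrative assumptions, not values from the linked post.

```r
# Minimal sketch: chatting with a local LM Studio server from R via ellmer.
# Assumes LM Studio is serving on its default port (1234) with a model loaded.
library(ellmer)

chat <- chat_openai(
  base_url = "http://localhost:1234/v1",  # LM Studio's OpenAI-compatible endpoint
  api_key  = "lm-studio",                 # local server ignores the key; any placeholder works
  model    = "lfm-7b"                     # hypothetical id; use whatever LM Studio reports
)

chat$chat("Summarize what a non-transformer language model is in two sentences.")
```

Because LM Studio mimics the OpenAI API, the same ellmer code should work against any OpenAI-compatible local server; only the base URL and model id need to change.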
Excited to share LFM-7B! Incredibly proud of the strong work of our @LiquidAI_ team. https://t.co/UPfn0x0Clf