Liquid AI, a startup founded by former Massachusetts Institute of Technology researchers, has released LEAP, a software-development kit that lets programmers embed small language models directly in mobile apps. The company also launched Apollo, a companion iOS application for testing models locally, underscoring its ‘local-first’ push to cut reliance on cloud infrastructure and improve privacy.

The LEAP SDK supports iOS and Android and comes with a library of compact models starting at 300 MB, light enough to run on phones with 4 GB of RAM. Developers can integrate a model with “fewer than 30 lines of code,” according to Liquid AI, which positions the platform as an alternative to cloud-only AI services whose latency and data-handling costs have become sticking points for app makers.

LEAP is built around the company’s second-generation Liquid Foundation Models (LFM2), offered in 350-million, 700-million and 1.2-billion-parameter versions optimised for the CPUs, GPUs and neural-processing units found in consumer devices. The SDK is free under a developer licence, while a paid enterprise edition is slated for future release.

“Developers are moving beyond cloud-only AI and looking for trusted partners to build on-device,” chief executive Ramin Hasani said in announcing the release.

The launch comes amid heightened industry interest in edge AI as companies seek to lower bandwidth costs, meet data-sovereignty rules and deliver faster, offline-capable applications.
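To give a sense of the “fewer than 30 lines of code” claim, the sketch below shows what an on-device integration could look like on Android. It is a minimal Kotlin sketch built on assumed names: `LeapClient`, `ModelRunner` and `generateResponse` are hypothetical stand-ins, stubbed out locally so the example is self-contained, and are not LEAP’s documented API; the model file name is likewise illustrative.

```kotlin
// Illustrative sketch only: LeapClient and ModelRunner are hypothetical stand-ins,
// stubbed here so the example compiles on its own; they are not the LEAP SDK's API.

// Hypothetical handle to an on-device LFM2 checkpoint bundled with the app.
class ModelRunner(private val modelPath: String) {
    fun generateResponse(prompt: String): String {
        // A real runner would execute the compact model on the device's CPU, GPU
        // or NPU and return generated text; no network call is involved.
        return "(on-device response from $modelPath for: \"$prompt\")"
    }
}

object LeapClient {
    fun loadModel(modelPath: String): ModelRunner {
        // A real implementation would load the ~300 MB model file into memory here.
        return ModelRunner(modelPath)
    }
}

fun main() {
    // Load a bundled model (hypothetical file name) and run a prompt locally.
    val runner = LeapClient.loadModel("models/lfm2-350m.bundle")
    println(runner.generateResponse("Summarize today's meeting notes."))
}
```

Whatever the SDK’s actual surface looks like, the pattern is the same: the model file ships with, or is fetched once by, the app, and every inference call stays on the handset, which is what removes the cloud round trip and the latency and data-handling concerns the article describes.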