LM Studio has released version 0.3.0, introducing several features aimed at enhancing local AI model deployment. The update adds built-in chat with documents that runs 100% offline, and a 'Structured Outputs' API that works with any local model, mirroring OpenAI's equivalent. The user interface has been completely revamped, with dark, light, and sepia themes. LM Studio also supports the Llama-3.1-8B GGUF model with a 32k context length, letting users run open-source large language models (LLMs) such as Llama 3.1 and Mistral locally without writing Python code. The update is available for Mac and other platforms and emphasizes free, offline operation. Additionally, the release integrates Retrieval-Augmented Generation (RAG) capabilities with open-source models.
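LM Studio's local server exposes an OpenAI-compatible endpoint (by default at http://localhost:1234/v1), so a Structured Outputs request can be expressed with the same `response_format` JSON-schema shape that OpenAI's API uses. Below is a minimal sketch of building such a payload; the port, model identifier, and schema are illustrative assumptions, not values from the release notes.

```python
import json

# LM Studio's default local server address (adjust if configured differently).
BASE_URL = "http://localhost:1234/v1"

def structured_output_request(model: str, prompt: str, schema: dict) -> dict:
    """Build an OpenAI-style chat-completion payload that constrains the
    model's reply to a JSON schema ('Structured Outputs')."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "reply",
                "schema": schema,
            },
        },
    }

# Hypothetical schema: force the model to answer with a rated summary.
schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "rating": {"type": "integer"},
    },
    "required": ["summary", "rating"],
}

payload = structured_output_request(
    "llama-3.1-8b",  # assumed identifier; use whatever model is loaded
    "Summarize the release notes and rate them 1-5.",
    schema,
)
body = json.dumps(payload)  # POST this to f"{BASE_URL}/chat/completions"
```

Because the payload matches OpenAI's wire format, existing OpenAI client code can target the local server simply by overriding its base URL.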
Run open-source LLMs like Llama 3.1 or Mistral with built-in RAG locally on your computer and create an OpenAI-like API (100% free and offline). Plus, load & serve multiple LLMs on the local network. https://t.co/wz53qtTvVm
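Since the server speaks the OpenAI chat-completions protocol over plain HTTP, any HTTP client can talk to it. The sketch below prepares (but does not send) a request using only Python's standard library, assuming LM Studio's default address; serving on the local network amounts to replacing `localhost` with the host machine's LAN IP.

```python
import json
import urllib.request

# LM Studio's default local address; swap "localhost" for the host's LAN IP
# (e.g. 192.168.x.x) to reach the server from other devices on the network.
ENDPOINT = "http://localhost:1234/v1/chat/completions"

payload = {
    "model": "mistral-7b",  # assumed identifier; use the model actually loaded
    "messages": [{"role": "user", "content": "Hello from the local network!"}],
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Sending is deferred: urllib.request.urlopen(req) would return the
# OpenAI-style JSON response once a model is loaded in LM Studio.
```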
Run Llama 3.1 on your phone by hosting it locally on your computer using LM Studio (100% free and offline) https://t.co/H2zXgTS9io
Learn how to build and optimize multi-agent systems with #Llama #Agents, #Milvus and @MistralAI. Read below. #RAG #LLM