
The MLX community is seeing a wave of new tools for running local large language models (LLMs) across platforms. A new repository, mlx-swift-chat, lets multi-platform SwiftUI apps test LLMs with MLX Swift; it downloads models directly from the 🤗 hub and supports Mistral-, Phi-2-, Llama-, and Gemma-style models with an easy setup. Quantized StarCoder2 variants are now available as well, along with a short guide to running and training StarCoder2 locally using commands such as 'pip install -U mlx-lm' and 'python -m mlx_lm.generate --model' (a completed example follows below). These StarCoder2 models run locally on an M1 Pro Mac with 16 GB of memory, and editor extensions such as Twinny can use them for fill-in-the-middle (FIM) completion and chat, with the dev tools showing the prompt being sent to the Ollama server. Finally, MLX-Assistant lets users edit text with AI anywhere on a Mac: select any text and apply shortcut-bound prompts such as 'summarize' or 'improve', with support for custom prompts and any model MLX supports. Applying LLMs to edit, summarize, and restyle text on-device highlights how much untapped potential there is beyond the familiar chat interface.
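For reference, the guide's commands can be filled in roughly as follows. The model id is an illustrative assumption, standing in for whichever quantized StarCoder2 variant you pull from the mlx-community hub:

```bash
# Install or upgrade the MLX LM package
pip install -U mlx-lm

# Generate locally with a quantized StarCoder2 variant.
# The repo id below is an example -- substitute the quantized
# StarCoder2 model you actually want from the mlx-community hub.
python -m mlx_lm.generate \
  --model mlx-community/starcoder2-3b-4bit \
  --prompt "def fibonacci(n):" \
  --max-tokens 128
```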

This is an exceptionally cool and creative application of MLX to edit/summarize/style any text on your device. Code: https://t.co/bUCY7jkxdt Nice to see some exploration of LLMs beyond the more common chat interface. Seems like a lot of untapped potential there. https://t.co/LnReaIVcDm
What if you could edit your text with AI on a Mac everywhere, locally and instantly? Introducing MLX-Assistant 🚀 Select any text and just edit using shortcuts. You can even create your own prompts, like 'summarize', 'improve', etc. Works with any model supported on MLX 👇 https://t.co/W3PykJ0xDs
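MLX-Assistant's own source isn't quoted here, but the select-and-transform flow it describes can be sketched from a macOS shell using mlx-lm and the system clipboard. Everything below is an assumption for illustration: the prompt wording, the model id, and the idea of round-tripping the selection through pbpaste/pbcopy. It is not MLX-Assistant's actual implementation.

```bash
# Hypothetical sketch of a "select text, apply a prompt" flow,
# not MLX-Assistant's actual code. pbpaste/pbcopy are the standard
# macOS clipboard tools; the model id is an example from the
# mlx-community hub.
SELECTION="$(pbpaste)"
PROMPT=$'Summarize the following text:\n\n'"${SELECTION}"
python -m mlx_lm.generate \
  --model mlx-community/Mistral-7B-Instruct-v0.2-4bit \
  --prompt "${PROMPT}" \
  --max-tokens 256 | pbcopy
```

Binding a snippet like this to a system-wide shortcut is what turns it into an "edit anywhere" tool of the kind the tweet describes.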
One of the things I've been using local LLMs for is formatting text. @LMStudioAI makes the output really easy to digest