
Supabase has introduced AI inference capabilities in its Edge Functions. The update embeds text-embedding models directly in the edge runtime and adds a preview of large language models (LLMs) such as Mistral and Llama 2, served via Ollama. The new features are designed to make embedding generation faster and cheaper and to run AI inference without cold starts, with a GPU-powered sidecar bolstering Edge Functions performance. Alongside the AI features, the release keeps the usual Supabase strengths: open source, fast deploy times, and a strong local development experience.
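For embeddings, the runtime exposes a built-in `Supabase.ai` API, so no external inference service is needed. The sketch below follows the API shape shown in Supabase's announcement; the `gte-small` model name and the `mean_pool`/`normalize` options are taken from their examples and should be verified against the current docs.

```ts
// supabase/functions/embed/index.ts
// Instantiate the embedding model once at startup so it is reused across requests.
const model = new Supabase.ai.Session('gte-small')

Deno.serve(async (req) => {
  const { input } = await req.json()

  // Generate a mean-pooled, normalized embedding vector for the input text.
  const embedding = await model.run(input, {
    mean_pool: true,
    normalize: true,
  })

  return new Response(JSON.stringify({ embedding }), {
    headers: { 'Content-Type': 'application/json' },
  })
})
```

Because the model lives inside the edge runtime, the same function can generate an embedding and store it in a pgvector column in one round trip.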
Edge functions get
⚡ Faster embedding generation
⚡ Cheaper embeddings
⚡ No cold starts on AI inference
⚡ Preview on LLMs like Mistral, llama2 via @ollama
Along with the usual @supabase goodies
✅ Open source
✅ Fast deploy times
✅ Amazing local DX
https://t.co/HsFTlncW0D https://t.co/RWgcOdDYVl
Supabase Edge Functions now have an LLM built right in, so you can work with embeddings more easily! https://t.co/mPuLrM6VAK
Ollama ❤️ @supabase Now with native AI support in Supabase Edge Functions. Give it a try! https://t.co/mYX14jjL52 (@kiwicopple) https://t.co/zRp6Dvz86j
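For the LLM preview, the same `Supabase.ai.Session` API can target an Ollama-served model. The following is a minimal sketch assuming a `mistral` session name and Ollama-style streamed chunks carrying a `response` field; both details come from the preview announcement rather than a stable contract, so treat them as assumptions.

```ts
// supabase/functions/llm/index.ts
// Open a session against the Ollama-backed Mistral model (preview).
const session = new Supabase.ai.Session('mistral')

Deno.serve(async (req) => {
  const params = new URL(req.url).searchParams
  const prompt = params.get('prompt') ?? ''

  // Ask for a streamed response so tokens are forwarded as they are generated.
  const output = await session.run(prompt, { stream: true })

  const stream = new ReadableStream({
    async start(controller) {
      const encoder = new TextEncoder()
      try {
        // Each chunk is assumed to follow Ollama's shape, with the text in `response`.
        for await (const chunk of output) {
          controller.enqueue(encoder.encode(chunk.response ?? ''))
        }
      } finally {
        controller.close()
      }
    },
  })

  return new Response(stream, {
    headers: { 'Content-Type': 'text/event-stream' },
  })
})
```

Streaming matters here because LLM generation is slow relative to an edge request: returning tokens incrementally keeps the connection responsive instead of buffering the full completion.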
