🚨 New chat endpoint alert: One OpenAI-compatible API for all major LLMs! 🤯 ✅ OpenAI-compatible → easy to integrate ✅ Supports text & multimodal models ✅ Unified schema for easy switching ✅ Built-in fallback for failover or A/B testing Watch now https://t.co/OAbCFfzLAz
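The "built-in fallback" feature above can be illustrated client-side. This is a minimal sketch, not the gateway's actual API: `complete` stands in for any OpenAI-compatible chat-completions call, and because the unified schema makes every model interchangeable, failover is just retrying the same request with the next model name.

```python
def call_with_fallback(models, prompt, complete):
    """Try each model in order; return the first successful response.

    `complete` is any callable with an OpenAI-compatible signature
    (model=..., prompt=...) -- an assumption for illustration, since the
    unified schema means only the `model` string changes between providers.
    """
    last_error = None
    for model in models:
        try:
            return complete(model=model, prompt=prompt)
        except Exception as exc:  # a real client would catch specific API errors
            last_error = exc
    raise RuntimeError("all models failed") from last_error
```

The same loop also covers the A/B-testing use case mentioned in the tweet: shuffle or weight the `models` list instead of ordering it by preference.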
.@OpenAI rolled it out quietly, but I think it's cool that you can now use voice transcription in web-based #ChatGPT! 🤗 (desktop app only previously) https://t.co/HNeohLvXz1
Announcing LiveKit Agents 1.0 and a $45M Series B. Back when we launched ChatGPT Voice Mode with OpenAI, voice AI was not a thing. Now it's a whole ecosystem of companies, products, and tools. LiveKit's infra for building and running voice AI agents is also at scale: over 100K voice AI agents.
OpenAI has introduced a voice agent starter kit with a FastAPI backend and a Next.js frontend, enabling developers to build custom voice agents. The kit supports push-to-talk audio, function calling, streaming, and multi-turn conversations. The launch coincides with news from LiveKit, whose real-time communications infrastructure powers OpenAI's Voice Mode: LiveKit has released LiveKit Agents 1.0, raised $45 million in a Series B round, and says its infrastructure now supports over 100,000 voice AI agents. Separately, OpenAI has rolled out voice transcription in web-based ChatGPT, previously available only in the desktop app.
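The function-calling support mentioned above can be sketched as a tool-dispatch loop like the one such a backend typically runs: the model emits a tool name plus JSON-encoded arguments, and the server looks up and executes a registered Python function. The names here (`TOOLS`, `tool`, `get_weather`, `dispatch`) are illustrative assumptions, not the starter kit's actual API.

```python
import json

# Registry mapping tool names to Python callables the model may invoke.
TOOLS = {}

def tool(fn):
    """Register a function so a model-produced tool call can reach it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    # Stub for illustration; a real tool would query a weather service.
    return f"Sunny in {city}"

def dispatch(tool_call: dict) -> str:
    """Execute a tool call shaped like {'name': str, 'arguments': json-string},
    mirroring the OpenAI function-calling format."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)
```

In a voice agent, the result returned by `dispatch` would be fed back to the model as a tool message so it can compose the spoken reply in the next conversation turn.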