Groq Inc. has opened public access to its API, offering blazing fast inference speeds for AI applications. Users are impressed with the speed and performance of models like Llama2-70b-chat (4k context window) and Mixtral (32k context window). The API is now self-serve and available for everyone to use.
🔥Blazing fast inference for your AI apps with @GroqInc API!! 🤯 We’ve added support for GroqAPI to AIConfig! Swap between Groq and other endpoints to see the speed difference yourself 👀 Currently available models you can use: ✅LlaMA2-70b-chat (max tokens: 4k)… https://t.co/YSaUatc9fF
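The endpoint swap described in the post above can be sketched against Groq's OpenAI-compatible HTTP API. The URL and model ID below are assumptions based on Groq's public documentation around this time and may have changed since; this is a minimal sketch, not AIConfig's actual integration code:

```python
import json
import os
import urllib.request

# Assumed endpoint for Groq's OpenAI-compatible chat completions API;
# verify against the current Groq console docs before using.
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"


def build_request(prompt: str, model: str = "llama2-70b-4096") -> dict:
    """Return the JSON payload for a single-turn chat completion.

    The model ID "llama2-70b-4096" is an assumption matching the
    Llama2-70b / 4k-context model mentioned in the posts.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1024,
    }


def send(payload: dict) -> dict:
    """POST the payload to Groq; requires GROQ_API_KEY in the environment."""
    req = urllib.request.Request(
        GROQ_API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_request("Why is low-latency inference useful?")
```

Because the request shape mirrors OpenAI's chat completions format, swapping between Groq and another provider mostly means changing the base URL, API key, and model ID.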
📢@GroqInc API access is available to everyone now. Models available: Llama2-70b (4k context window) & Mixtral (32k context window). We went ahead and built a Gradio chatbot with Mixtral and used the 32k window to feed it the entire SD3 paper from @StabilityAI and asked questions about it. It went 🚀🚀. https://t.co/C4jYMGbKyh
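The trick in the post above, feeding an entire paper into Mixtral's 32k-token window instead of building a retrieval pipeline, can be sketched as a prompt-construction helper. The function name and the sample question are illustrative, not from the original chatbot:

```python
def make_long_context_messages(document: str, question: str) -> list[dict]:
    """Stuff a whole document into a single user turn, relying on a
    large (e.g. 32k-token) context window rather than chunked retrieval.

    Note: the document must fit in the model's context window alongside
    the question and the reply; there is no truncation handling here.
    """
    system = "Answer questions using only the document provided below."
    user = f"Document:\n{document}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]


# Illustrative usage: the paper text would be the full extracted SD3 PDF text.
msgs = make_long_context_messages(
    "(full SD3 paper text here)",
    "What training objective does the paper use?",
)
```

The resulting `messages` list drops straight into an OpenAI-style chat completions request, so the same helper works for any endpoint with a sufficiently large context window.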
Looks like the groq is out of the bag. If you're still waiting for API access, it's now self-serve in their new console 👀 Go play with crazy @GroqInc speed. For inspiration, here's what's possible using Groq Mixtral & SDXL Lightning: every move here generates a prompt + image https://t.co/eqDtI12JsU https://t.co/qxEGMczH7D