Perplexity Pro is Now Powered by Cerebras. @perplexity_ai Sonar, now running on Cerebras Inference, delivers answers at an unprecedented 1,200 tokens/s – 10x faster than comparable models. https://t.co/LOj8gxGvJX
Artificial intelligence just got a serious upgrade with Perplexity's lightning-fast new Sonar model. In a major leap for search technology, Perplexity has unveiled an AI model that promises to change how we retrieve and consume information online. Built… https://t.co/wekoNdQWHW
Cerebras is excited to power @perplexity_ai! We're waitlist only, but if you want to try out the fastest inference in the world, reply and I'll set you up with a free @CerebrasSystems API key https://t.co/y3JsjUWiCR
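For anyone who picks up one of these keys, here is a minimal sketch of what a first request might look like, assuming Cerebras exposes an OpenAI-compatible chat completions endpoint; the base URL and model id below are placeholders to verify against the official Cerebras documentation.

```python
# Minimal sketch of a first call against Cerebras Inference, assuming an
# OpenAI-compatible chat completions endpoint. The base URL and model id
# below are placeholders to check against the Cerebras docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["CEREBRAS_API_KEY"],  # key obtained via the offer above
    base_url="https://api.cerebras.ai/v1",   # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="llama-3.3-70b",                   # placeholder model id
    messages=[{"role": "user", "content": "In one sentence, what is Cerebras Inference?"}],
)
print(response.choices[0].message.content)
```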
Perplexity AI has introduced Sonar, a new model built on the Llama 3.3 architecture with 70 billion parameters. It reportedly outperforms GPT-4o-mini and Claude 3.5 Haiku on user satisfaction while matching or exceeding top models such as GPT-4o and Claude 3.5 Sonnet. Sonar is optimized for both speed and answer quality, generating 1,200 tokens per second, roughly ten times faster than comparable models, a rate made possible by Cerebras Systems' inference technology. Perplexity AI's retention metrics indicate positive user reception, and the company positions the model as a significant step forward in AI-driven search and in how information is retrieved and consumed online.
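To put the 1,200 tokens/s figure in context (at that rate, a 600-token answer streams in about half a second), here is a rough sketch of how one might time Sonar's streaming output, assuming Perplexity exposes an OpenAI-compatible chat completions API; the base URL and model id are assumptions to verify against Perplexity's API documentation.

```python
# Rough sketch: estimate generation throughput (tokens/s) from a streaming
# response, assuming an OpenAI-compatible Perplexity endpoint. The base URL
# and model id are assumptions, not confirmed values.
import os
import time
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",  # assumed endpoint
)

start = time.perf_counter()
chunks = 0
stream = client.chat.completions.create(
    model="sonar",                         # assumed model id
    messages=[{"role": "user", "content": "Summarize today's top AI news."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        chunks += 1  # roughly one token per content chunk; a coarse approximation
elapsed = time.perf_counter() - start

print(f"~{chunks} chunks in {elapsed:.2f}s ≈ {chunks / elapsed:.0f} tokens/s")
```

This only approximates throughput from the client side (network latency and chunking granularity both blur the number), but it is enough to see whether answers arrive at hundreds versus thousands of tokens per second.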