
Google's Gemini 2.5 Pro and Perplexity's Sonar-Reasoning-Pro-High have emerged as top performers in LMArena's newly released Search Arena Leaderboard, tying for first position. The leaderboard measures the quality of web-search-grounded large language model (LLM) completions, highlighting both models' competitive edge in the AI search domain. LMArena has open-sourced 7,000 battles with user votes and published a detailed analysis in a blog post. Gemini 2.5 Pro, developed by Google DeepMind, has been praised for strong performance across benchmarks and internal tests, particularly in long-context understanding. Perplexity's Sonar-Reasoning-Pro-High outperformed Gemini 2.5 Pro in head-to-head battles 53% of the time, and other Sonar models also ranked highly, giving Perplexity a dominant presence on the leaderboard. Perplexity says it is focused on improving its Sonar models and enhancing its search index.
I'm shocked that many people still aren't using AI tools. Most people only know about ChatGPT. Here are 12 hidden gems you need to know: ⤵️ https://t.co/7uoJiq88T2
16 Essential Generative AI Tools Transforming HR in 2025. HR is being redefined by AI-driven tools for hiring, training, and employee engagement. Here are 16 must-know solutions for 2025. Read more 👉 https://t.co/zrdi7BYgeD #HR #AI #FutureOfWork #BernardMarr
Everyone's hyped about ChatGPT and Gemini. But I’ve been testing DeepSeek quietly... And it’s outperforming them in real-world tasks. Here are 15 ways you can use it right now: https://t.co/YO88ZzCCZk