
OpenAI has announced the general availability of GPT-4 Turbo with Vision, a significant update over its predecessors with built-in vision capabilities that enable function calling and JSON mode with image inputs. Despite the booming open-source AI market, GPT-4 remains a favorite among corporate customers. However, there is a growing sentiment that Claude 3 Opus, the best closed-source model from AnthropicAI, and Command R+, the best open-source model from Cohere, are pushing OpenAI out of its leading position, with users increasingly asking "GPT-5 when?"

GPT-4 Turbo with Vision has been touted for improved performance in various domains, including coding, where it reportedly outperforms Claude 3 Opus on most tasks and answers more questions with fewer "I can't answer that" refusals. Developers had noted that GPT-4's performance was degrading, prompting some to switch to alternatives like Claude, which is better at code generation. Nonetheless, the new GPT-4 Turbo with Vision has been rolled out in both the API and ChatGPT, with developers exploring its potential in applications such as Devin by Cognition Labs. Despite these advancements, some users report that the new GPT-4 Turbo model underperforms older versions on coding benchmarks, scoring only 33% on aider's refactoring benchmark while showing a 4.5-point improvement on LiveCodeBench, suggesting a discrepancy between performance evaluations.
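For readers curious what "function calling and JSON mode with image inputs" looks like in practice, here is a minimal sketch using the OpenAI Python SDK's chat completions interface. The model name, image URL, and prompt are illustrative assumptions; consult OpenAI's API docs for current values. The request-building helper is a hypothetical convenience, not part of the SDK.

```python
# Hedged sketch: combining an image input with JSON mode in a single
# chat.completions request, as enabled by GPT-4 Turbo with Vision.

def build_vision_request(image_url: str, prompt: str) -> dict:
    """Assemble kwargs for a chat.completions.create call that sends an
    image alongside text and asks for a JSON-object response."""
    return {
        "model": "gpt-4-turbo",  # assumed GA model name; verify in the docs
        "response_format": {"type": "json_object"},  # JSON mode
        "messages": [
            {
                "role": "user",
                # Multimodal content: a list mixing text and image parts.
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# Actual call (requires an API key in OPENAI_API_KEY):
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(**build_vision_request(
#       "https://example.com/chart.png",
#       "Describe this chart as JSON with keys 'title' and 'trend'."))
#   print(resp.choices[0].message.content)
```

Note that JSON mode requires the prompt itself to mention JSON, which the example prompt does; the model then constrains its output to a valid JSON object.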

our latest GPT-4-Turbo loves math and reasoning 🧠 https://t.co/2wV3vYEzIf
Seeing some negative takes on the new GPT-4... IMO, we can all keep shaming OpenAI for not being open or for being too grandiose! However, it's also good to give credit where credit is due. OpenAI deserves praise for this "majorly improved" model 👏👏 Let's measure correctly so we…
New GPT-4-Turbo, who dis? Better or worse at coding?