The release of OpenAI's GPT-4.5 has sparked discussion about the current state of large language models (LLMs) and the effectiveness of scaling pre-training. Analysts suggest that the performance of GPT-4.5, reported to be 10 times larger than its predecessor GPT-4, has fallen short of expectations, pointing to a potential plateau in pre-training scaling. Critics argue that the benchmarks used to evaluate the model may not adequately reflect its capabilities, particularly in critical thinking and ideation tasks. OpenAI's Chief Research Officer, Mark Chen, noted that the company is exploring new training paradigms, including reasoning-based approaches. Despite the challenges, some experts believe that advances in inference scaling and algorithmic improvements could pave the way for future progress in AI. Overall, the sentiment among industry observers is that while pre-training scaling may be reaching its limits, there remains potential for innovation in other areas of AI development.
Tbh I'm happily using GPT-4.5. Thanks, OpenAI, for not being too eval-obsessed.
While GPT-4.5 is a nice upgrade, it's clear, as Ilya said, that we've hit the wall with pre-training. That doesn't mean AI progress will stop. It just means we need to build the infrastructure and paradigms to train on real-world feedback, not just unsupervised text...
Although this picture is entirely accurate with respect to GPT-4.5, I still believe it misses the point. Yes, it is true that pre-training is reaching its limits, and that inference scaling is the new promising paradigm. But with GPT-4.5, the focus is not so much on… https://t.co/4TqUIUqqli https://t.co/jMxjbF4V0z