OpenAI’s latest model, GPT-5, is drawing sharply different reactions two weeks after its debut. Chief executive Sam Altman told audiences the system was designed around feedback from Indian users, calling India the company’s second-largest and fastest-growing market and predicting it could soon overtake the United States. The model adds broader multilingual support and lower pricing aimed at price-sensitive regions.

Inside companies, adoption has accelerated. Industry newsletters and enterprise developers report GPT-5 is being used twice as often for coding and eight times as often for complex reasoning as its predecessor. Tests shared by early users put a 30 million-token run at US$4.75 and estimate the service can be up to 7.5 times cheaper than Anthropic’s Claude Opus 4.1, figures that have helped drive what several trade publications describe as a “significant surge” in enterprise demand.

Consumer and research circles are less enthusiastic. The Financial Times and Washington Post note the model’s mixed benchmark scores and complaints about reduced conversational quality, fuelling debate over whether large-language-model scaling is hitting diminishing returns. Altman has acknowledged the launch was mishandled and says his team is rolling out updates to address tone and reliability issues. Looking ahead, the OpenAI chief says the company will need “trillions of dollars” in additional data-centre investment to sustain future iterations, underscoring both the capital intensity of frontier AI and the stakes riding on GPT-5’s commercial performance.
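As a rough illustration of what the pricing figures quoted above imply, the sketch below backs out a blended per-million-token rate from the reported US$4.75 cost of a 30 million-token run and shows what the claimed 7.5x cost advantage would mean for the comparison model under the same token mix. The numbers are taken at face value from the reports cited here, not from any official price list, and the assumption of a single uniform per-token rate is mine.

```python
# Back-of-the-envelope check of the pricing figures reported for GPT-5.
# Assumptions: the US$4.75 / 30M-token test run reflects one blended
# per-token rate, and the "7.5x cheaper" claim applies to that same mix.

run_cost_usd = 4.75          # reported cost of the early-user test run
run_tokens = 30_000_000      # reported token count of that run
claimed_ratio = 7.5          # reported cost advantage vs. Claude Opus 4.1

gpt5_per_million = run_cost_usd / (run_tokens / 1_000_000)
implied_comparison_per_million = gpt5_per_million * claimed_ratio

print(f"Implied GPT-5 blended rate: ${gpt5_per_million:.3f} per 1M tokens")
print(f"Implied comparison rate at 7.5x: ${implied_comparison_per_million:.2f} per 1M tokens")
```

Run as written, this works out to roughly $0.158 per million tokens for GPT-5 and about $1.19 per million tokens for the comparison model, but both figures depend entirely on the input/output token mix of the test, which the reports do not specify.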
Increasingly surprised they would even put "GPT-5" in the name of this new model, given the expectations people had for it. Just for regular non-coding queries, I've noticed virtually zero improvement and, seemingly, some regression. Like it often seems to lose context of our conversation.
AI Art & Ethics: The Unseen Battles Shaping Our Future 🚀 Innovation in AI Art AI art tools like Stable Diffusion 3.5 are evolving rapidly, with 1.8x faster generation speeds and enterprise-level precision. Yet, as @ClaireSilver12 points out, accessibility is dwindling.
GPT-5 was about a lot of things. But (despite 2+ years of hype) … it had nothing to do with AGI. https://t.co/sLPoB3H84s