OpenAI Chief Executive Officer Sam Altman said on Aug. 12 that the company will roughly double its compute fleet over the next five months to meet surging demand sparked by the launch of GPT-5. The expanded capacity will first ensure that paying ChatGPT subscribers receive more total usage than they did before the latest model’s debut. Once subscriber needs are met, OpenAI will allocate remaining resources to its application-programming-interface customers; the company is also diverting some compute previously reserved for research toward product operations. The move is intended to ease recent service throttling and could translate into additional orders for high-performance chip suppliers such as Nvidia.
The new ChatGPT Go plan has been added to the pricing page in the ChatGPT web app ("Only available in certain regions") at ₹399 per month. It includes everything in Free, plus expanded messaging and uploads, expanded image creation, limited deep research, and longer memory and context. https://t.co/KmjsqWJYZG
OpenAI is optimizing to be a billion-user consumer product over being a developer platform. https://t.co/gfBQknGWfX
OH MY BULLISH $NVDA SAM ALTMAN: "We are ~doubling our compute fleet over the next 5 months (!) so this situation should get better." goodness... https://t.co/g5VVcjF7bL