OpenAI (@OpenAI) introduced the o1-Pro model, designed for advanced reasoning. Pricing:
- Input: $150/million tokens
- Output: $600/million tokens
That's 10x the price of o1 and over 100x o3-mini, meant for specialized use. o1-Pro is slower, costlier, and more deliberate. https://t.co/EQtw6c6EfK
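To make the quoted rates concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes only the two per-million-token prices from the post above; the token counts in the example are hypothetical.

```python
# o1-Pro prices as quoted in the post (USD per 1M tokens).
INPUT_PER_M = 150.0    # $150 per 1M input tokens
OUTPUT_PER_M = 600.0   # $600 per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at o1-Pro rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# Hypothetical workload: a 2,000-token prompt with a 1,000-token answer.
print(round(request_cost(2_000, 1_000), 2))  # roughly $0.90 for one request
```

At these rates even modest prompts add up quickly, which is why the post frames o1-Pro as a specialized, not general-purpose, model.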
DeepSeek Helps Large Cloud Customers Slow Down AI Spending AI spending is shifting! 🔄 Some large enterprises are slowing their AI spending as cheaper models arrive, with Palo Alto Networks reporting 95% cost savings using DeepSeek. More: https://t.co/ojDDegHCfx
A Hugging Face AI researcher claims DeepSeek's open-source model could democratize AI by enabling training on less advanced Nvidia chips, challenging the concentration of power in AI development. https://t.co/o3B2fVy54N
A new Chinese AI model has emerged, reportedly outperforming OpenAI's latest models, including o3, in reasoning capabilities while costing significantly less: between $0.14 and $0.55 per million tokens. By contrast, OpenAI's o3 is projected to be far more expensive, with estimates suggesting around $30,000 per complex task, a substantial increase from the initial estimate of $3,000.

This disparity highlights the growing competition in the AI sector, particularly as Chinese startups like DeepSeek cut costs by training models with less data and fewer iterations. Some enterprises are already slowing their AI spending in light of these cheaper alternatives; reports indicate that companies like Palo Alto Networks have achieved up to 95% cost savings using DeepSeek's models. The ongoing price war raises questions about the future of AI spending and the accessibility of advanced AI technologies.