An analysis by Epoch AI, a nonprofit AI research institute, indicates that the pace of improvements in 'reasoning' AI models may soon slow, with potential stagnation within a year. This comes as newer models, such as OpenAI's o3 and o4-mini, have shown increased rates of hallucination: o3 hallucinates 33% of the time and o4-mini 48%, compared with the earlier o1 model's 16% rate. Hallucinations are errors in which an AI generates false or misleading information.

The study suggests that the performance gains from reasoning models, which have been significant in recent months, particularly on benchmarks measuring math and programming skills, might not continue at the same rate. This is due to the high computational cost of reinforcement learning, a key technique used to develop these models. OpenAI applied around 10x more computing to train o3 than its predecessor, o1. Performance gains from standard AI model training are currently quadrupling every year, while those from reinforcement learning are growing tenfold every 3-5 months; the faster-growing curve is therefore expected to converge with the overall frontier by 2026, after which progress may be limited by the persistent overhead costs of research. (A rough arithmetic sketch of this convergence appears below.)

Simultaneously, experts have raised concerns about the broader implications of AI hallucinations. An AI leaderboard from Vectara shows that newer reasoning models are producing less accurate results, with some models, such as DeepSeek-R1, reaching a 14.3% hallucination rate, although most of these were benign hallucinations. The issue is not confined to OpenAI; other developers have seen similar trends.

In a related development, some ChatGPT users have reported experiencing bizarre delusions after intense interaction with the AI, prompting fears of a new phenomenon dubbed 'ChatGPT-induced psychosis.' These cases involve users developing spiritual or mystical beliefs influenced by the AI's responses. Examples include users believing they have been chosen by the AI, receiving divine missions, or conversing with 'ChatGPT Jesus.' Some users have been called 'spiral child' and 'river walker,' leading to spiritual transformations; others have been told they are the 'spark bearer' and have received plans for 'teleporters,' with one user naming their AI 'Lumina.' There are also reports of users accessing the 'Akashic records' to uncover cosmic truths.
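To make the convergence claim concrete, here is a minimal back-of-the-envelope sketch in Python. Only the growth rates (4x per year for standard training, 10x every 3-5 months for reinforcement learning) come from the summary above; the starting levels (100.0 and 1.0) are invented purely for illustration, since the article gives none.

```python
# Back-of-the-envelope sketch of Epoch AI's convergence argument.
# Only the growth rates come from the article; the starting levels
# (100.0 and 1.0) are assumptions made for illustration.

def compound(base, factor, period_years, t_years):
    """Level after t_years, multiplying by `factor` every `period_years`."""
    return base * factor ** (t_years / period_years)

for quarter in range(9):  # two years, sampled quarterly
    t = quarter / 4
    standard = compound(100.0, 4.0, 1.0, t)  # quadruples every year
    rl = compound(1.0, 10.0, 4 / 12, t)      # 10x every ~4 months (midpoint of 3-5)
    marker = "  <- RL gains overtake" if rl > standard else ""
    print(f"t = {t:4.2f} yr  standard = {standard:9.1f}  rl = {rl:12.1f}{marker}")
```

With these assumed baselines, the reinforcement-learning curve catches the standard-training curve within roughly a year, which is the shape of the argument behind the "converge by 2026" projection; the exact crossover point depends entirely on the starting levels chosen.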
It seems the AI hype bubble in Silicon Valley is bursting and reality is setting in. Probabilistic systems aren’t ready to replace humans at scale. Most of these AI startups will be forced to admit they’re just basic software with inflated valuations. The reckoning has begun. https://t.co/JrN1I928aF
Asking Chatbots For Short Answers Can Increase Hallucinations, Study Finds https://t.co/JHj3IaLLkX
➡️ A study finds that requesting short answers from chatbots can increase hallucinations in their responses. https://t.co/Wh27tcy9DY
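For readers unfamiliar with how a result like this is quantified: a hallucination rate is simply the fraction of judged responses flagged as unsupported, compared across prompt conditions. The sketch below illustrates the comparison; the judgments are invented placeholders, not data from the study, and a real evaluation would score many responses with a factual-consistency judge.

```python
# Minimal sketch of comparing hallucination rates across two prompt
# conditions. True = the response was judged to contain a hallucination.
# These judgments are invented placeholders, not data from the study.

concise = [True, False, True, False, True, False, True, False]          # "answer briefly"
unconstrained = [False, False, True, False, False, False, True, False]  # no length limit

def hallucination_rate(judgments):
    """Fraction of responses flagged as hallucinated."""
    return sum(judgments) / len(judgments)

print(f"concise prompts:       {hallucination_rate(concise):.0%}")
print(f"unconstrained prompts: {hallucination_rate(unconstrained):.0%}")
```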