Experts in artificial intelligence have expressed skepticism about the ability of large language models (LLMs) to achieve artificial general intelligence (AGI). While LLMs have shown rapid progress, researchers such as Gary Marcus and Richard Sutton regard them as reaching a point of diminishing returns and as a temporary focus in AI development. Sutton emphasized that future breakthroughs are more likely to come from scaling computational power than from mimicking human cognitive processes. Predictions on the timeline for AGI vary, ranging from roughly 5 to 8 years on optimistic estimates to 10 to 15 years on more conservative ones, but the skeptics agree that major breakthroughs are needed before AGI can be realized. The conversation highlights a need for diversified approaches in AI research beyond LLMs, which remain far from true general intelligence.
I find it hard to understand both the stance of those who think that what LLMs offer today is very close to Artificial General Intelligence, and that of those who, especially after reading the Apple paper, say "I told you so" and dismiss LLMs as useless.
How is it even news that LLMs won't get us to AGI, reasoning or not? I thought we'd settled this last year.
Starting to think that most doomers are poseurs.
• Few seem to have any intellectual interest in the increasingly likely possibility that LLMs might not yield AGI.
• Even fewer seem to consider the fact that their hyping of LLMs only https://t.co/x61OaQ2Uj0