Yann LeCun, Meta's chief AI scientist, has stated that human-level artificial intelligence (AI) will not be achieved merely by scaling up large language models (LLMs). He emphasized that while scaling LLMs can enhance their memory and retrieval capabilities, it will not produce systems with human-like intelligence. LeCun described the notion of a 'country of geniuses' inside a data center as unrealistic, asserting that LLMs function primarily as advanced autocomplete tools rather than true creators. This perspective aligns with recent critiques from other AI researchers, who have also expressed skepticism about the logical reasoning capabilities of LLMs, suggesting that their performance may not stem from genuine reasoning processes. The ongoing discourse highlights a growing consensus that while LLMs have significant utility, they are not the pathway to artificial general intelligence (AGI).
LeCun: "we will not get to human-level AI by just scaling up LLMs... it's not going to happen" It's funny how they all suddenly either shut up about AGI or started to say that LLMs were a dead end on this path to AGI. LeCun is probably the most reasonable of them, but even he,
It's funny how they all suddenly either shut up about AGI or started to say that LLMs were a dead end on this path to AGI. LeCun is probably the most reasonable of them, but even he, when needed for Llama promotion, filtered what to say to large audiences. Today, when Llama 4
Everyone builds the thing they want with AI, but they won't become great companies; they're just things anyone else could easily replace. It's getting heated in today's @MoreorLessPod episode with @lessin, @brit, @Jessicalessin, and @davemorin https://t.co/W1VAHzlpaO