AI LLMs Learn Like Us, But Without Abstract Thought. A new study reveals that large language models (LLMs) like GPT-J form new words using analogy rather than grammatical rules, echoing how humans process language. When presented with unfamiliar adjectives, the LLM chose noun https://t.co/qDQ7DE3g3u
Large language models use analogies and memories, not rules, to create language like humans do, study finds. https://t.co/xqXp3VcrN3
The promise of Generative AI in customer service is undeniable, yet the challenge of hallucinations in Large Language Models (LLMs) continues to hinder safe and scalable adoption. Despite advances in chatbot frameworks, many still rely on brittle flows and manual prompt tuning, https://t.co/QbH0pFVW0B
Recent research from the University of Oxford and the École Polytechnique Fédérale de Lausanne (EPFL) has revealed that large language models (LLMs), such as those powering AI chatbots like ChatGPT, generate language by drawing analogies to stored examples rather than by applying explicit grammatical rules. This approach mirrors human reasoning in language processing. EPFL researchers identified specific units within these AI models that play a critical role in language tasks, akin to the brain's language system. Disabling these units significantly impaired the models' language performance. The findings highlight that LLMs generalize language patterns in a human-like manner, relying on analogy and memory rather than abstract grammatical rules. Despite advances in generative AI, challenges such as hallucinations in LLMs continue to affect their safe and scalable deployment in applications like customer service.
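The analogy-versus-rules distinction above can be illustrated with a toy sketch. This is not the study's actual methodology (the researchers probed GPT-J and other LLMs directly); it is a hypothetical illustration of what "generalizing by analogy to stored examples" means: given a made-up adjective, pick a noun suffix by similarity to remembered words rather than by applying an explicit grammatical rule. The lexicon, the nonce words, and the similarity measure here are all invented for illustration.

```python
from difflib import SequenceMatcher

# Hypothetical stored adjective -> noun pairs standing in for "memory".
# A rule-based account would derive "-ity" vs "-ness" from abstract rules;
# an analogy-based account compares the new word to these stored examples.
LEXICON = {
    "curious": "curiosity",
    "sensitive": "sensitivity",
    "productive": "productivity",
    "happy": "happiness",
    "kind": "kindness",
    "dark": "darkness",
}

def similarity(a: str, b: str) -> float:
    """Surface-form similarity between two words (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

def nominalize_by_analogy(adjective: str) -> str:
    """Choose a suffix by copying the most similar stored example,
    not by applying an explicit grammatical rule."""
    nearest = max(LEXICON, key=lambda known: similarity(adjective, known))
    if LEXICON[nearest].endswith("ity"):
        # Mirror the "-ive" -> "-ivity" pattern of the nearest neighbor.
        if adjective.endswith("e"):
            return adjective[:-1] + "ity"
        return adjective + "ity"
    return adjective + "ness"

# Wug-test-style nonce adjectives: the chosen suffix tracks whichever
# stored word the nonce form most resembles.
print(nominalize_by_analogy("selfive"))  # resembles "sensitive", so "-ity"
print(nominalize_by_analogy("blark"))    # resembles "dark", so "-ness"
```

The point of the sketch is that no suffix rule is ever stated: behavior on unseen words falls out of proximity to stored exemplars, which is the kind of generalization the Oxford/EPFL study attributes to LLMs.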