
Recent research highlights advances in large language models (LLMs) driven by metacognition and in-context learning. Two papers show that metacognition (self-awareness and self-regulation of one's own reasoning) improves both LLM performance and human learning; in mathematical problem-solving, LLMs can label fine-grained mathematical skills and cluster them into broader categories. A study of in-context learning (ICL) finds that LLMs solve regression problems by combining learning from in-context examples with retrieval of internal knowledge. Work from Google DeepMind and Michigan State University explores these capabilities further, including Reinforcement Learning from Prediction Feedback (RLPF) for user summarization.
[CL] Learning vs Retrieval: The Role of In-Context Examples in Regression with LLMs A Nafar, K B Venable, P Kordjamshidi [Michigan State University & Florida Institute for Human and Machine Cognition] (2024) https://t.co/2arW8W5PNM https://t.co/Cdk9elklLX
In-context learning (ICL) in LLMs, while powerful, is not fully understood. This new paper explicitly studies ICL on regression problems and argues that it relies on a combination of learning from in-context examples and retrieving internal knowledge. They are… https://t.co/1aIiDWrp2D
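To make the setup concrete, here is a minimal sketch of the kind of few-shot regression prompt the paper studies: the LLM is shown (x, y) example pairs and asked to predict y for a new x, blending pattern induction from the examples with whatever prior knowledge it holds about the underlying function. The function name and prompt wording below are illustrative assumptions, not the authors' actual code.

```python
def build_icl_regression_prompt(examples, query_x):
    """Format (x, y) pairs as few-shot demonstrations for an LLM.

    Illustrative only: the exact prompt template is an assumption,
    not taken from the paper.
    """
    lines = ["Predict the output y for the given input x."]
    for x, y in examples:
        lines.append(f"x = {x}, y = {y}")
    # Leave the final y blank for the model to complete.
    lines.append(f"x = {query_x}, y =")
    return "\n".join(lines)


# Noisy samples from y = 2x + 1; per the paper's framing, adding more
# in-context examples tends to shift the model from retrieving a known
# function toward learning the mapping from the examples themselves.
examples = [(1, 3.1), (2, 4.9), (3, 7.0)]
prompt = build_icl_regression_prompt(examples, 4)
print(prompt)
```

Sending this prompt to any instruction-tuned LLM and parsing the completion as a number turns the model into a crude regressor, which is the behavior the paper probes.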
[CL] RLPF: Reinforcement Learning from Prediction Feedback for User Summarization with LLMs J Wu, L Ning, L Liu, H Lee… [Google DeepMind] (2024) https://t.co/RqeygrunSQ https://t.co/EhXw8mL4Ru
