
Recent research by Amii and collaborating AI researchers highlights a critical issue in deep learning known as 'loss of plasticity': deep learning models gradually lose the ability to learn new tasks after extended training on previous ones. The research, published in Nature, distinguishes this from the better-known problem of catastrophic forgetting, in which models fail to retain previously learned information; loss of plasticity instead blocks new learning altogether. The study proposes addressing the problem by periodically 'waking up' dormant neurons, an approach the authors call continual backpropagation, which reinitializes a small fraction of the network's least-used units to restore its capacity to learn.
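The 'waking up' idea can be sketched in code. Below is a minimal, illustrative PyTorch version of selective reinitialization in the spirit of continual backpropagation: track a utility score per hidden unit and periodically re-randomize the least-used ones. The `ReinitLinear` class name, the running-activation utility proxy, and the reinitialization fraction are assumptions for illustration, not the authors' exact algorithm.

```python
# Illustrative sketch: periodically reinitialize low-utility hidden units so a
# network keeps its ability to learn. Utility measure and hyperparameters here
# are assumptions, not the exact method from the Nature paper.
import torch
import torch.nn as nn

class ReinitLinear(nn.Module):
    """Linear+ReLU layer that tracks a running activity score per output unit
    and can reinitialize its least-used ('dormant') units."""

    def __init__(self, in_features, out_features, decay=0.99):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.decay = decay
        # Running utility estimate, one score per hidden unit.
        self.register_buffer("utility", torch.zeros(out_features))

    def forward(self, x):
        out = torch.relu(self.linear(x))
        if self.training:
            # Exponential moving average of mean absolute activation per unit,
            # used here as a simple proxy for how 'useful' each unit is.
            with torch.no_grad():
                self.utility.mul_(self.decay).add_(
                    (1 - self.decay) * out.abs().mean(dim=0)
                )
        return out

    def reinit_least_used(self, fraction=0.01):
        """'Wake up' the lowest-utility units by re-randomizing their weights."""
        k = max(1, int(fraction * self.linear.out_features))
        idx = torch.topk(self.utility, k, largest=False).indices
        with torch.no_grad():
            new_w = torch.empty(k, self.linear.in_features)
            nn.init.kaiming_uniform_(new_w)
            self.linear.weight[idx] = new_w
            self.linear.bias[idx] = 0.0
            # Give fresh units an average score so they aren't reset immediately.
            self.utility[idx] = self.utility.mean()
```

In a continual-learning loop, one would call `layer.reinit_least_used()` every few hundred gradient steps. The key design choice is reinitializing only a tiny fraction of units at a time, so fresh learning capacity is restored without erasing what the network has already learned.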
It’s a deep learning problem that was ‘hidden in plain sight’: a new Nature paper by Amii researchers explores why continual learning models can suddenly stop working, and what to do about it: https://t.co/WJ4Rgb8qoe #AI #ContinualLearning #MachineLearning
This week on the Nature Podcast: AIs based on deep learning struggle to keep learning new things, but ‘waking up’ their ‘neurons’ could help overcome this https://t.co/go207N1JxR
When deep learning #AI goes shallow... https://t.co/RIUfZLx5Uw @RichardSSutton @s_dohare @Nature https://t.co/xvAyJ3R6YY @clarelyle @NatureNV Loss of plasticity and catastrophic forgetting can occur with task switching, leaving models unable to learn anything new.