Cool idea in this paper from @Apple researchers. Claims that AdamW requires 95% more training tokens (i.e., 1.95x as many gradient updates) than their proposed optimizer to reach the same loss. 🤯 1.3B parameter AdEMAMix LLM trained on 101B tokens performs comparably to AdamW… https://t.co/pJm4o6Qj2E
The AdEMAMix Optimizer: Better, Faster, Older. https://t.co/c4PHuLvGXd
The AdEMAMix Optimizer: Better, Faster, Older https://t.co/uDZX5heuhg
Apple researchers have introduced the AdEMAMix optimizer, a novel Adam-based optimizer that leverages very old gradients to reach better solutions. It has been tested on Transformer language model (LM), Mamba LM, and Vision Transformer (ViT) training. Notably, a 1.3 billion parameter AdEMAMix Transformer LM trained on 101 billion tokens performs comparably to an AdamW model that needs roughly 95% more training tokens (about 1.95x) to reach the same loss.
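To make the "very old gradients" idea concrete, here is a minimal NumPy sketch of an AdEMAMix-style update: it keeps the usual Adam moments and adds a second, much slower gradient EMA, mixing the two in the step. The function name ademamix_step, the state layout, and all hyperparameter values (beta3, alpha, learning rate) are illustrative assumptions for this sketch, not the paper's reference implementation; the paper also schedules some of these quantities, which this sketch omits.

```python
import numpy as np

def ademamix_step(theta, grad, state, lr=1e-3,
                  beta1=0.9, beta2=0.999, beta3=0.9999,
                  alpha=5.0, eps=1e-8, weight_decay=0.0):
    """One AdEMAMix-style update (illustrative sketch, not the reference code)."""
    state["t"] += 1
    t = state["t"]

    # Fast EMA of gradients (as in Adam) plus a second, much slower EMA
    # (beta3 close to 1) that remembers very old gradients.
    state["m1"] = beta1 * state["m1"] + (1 - beta1) * grad
    state["m2"] = beta3 * state["m2"] + (1 - beta3) * grad
    # Second-moment EMA, as in Adam/AdamW.
    state["nu"] = beta2 * state["nu"] + (1 - beta2) * grad ** 2

    # Bias-correct the fast EMA and the second moment.
    m1_hat = state["m1"] / (1 - beta1 ** t)
    nu_hat = state["nu"] / (1 - beta2 ** t)

    # Mix fast and slow momentum; alpha weights the contribution of old gradients.
    update = (m1_hat + alpha * state["m2"]) / (np.sqrt(nu_hat) + eps)
    return theta - lr * (update + weight_decay * theta)

# Toy usage: minimize f(x) = sum(x^2) from a random start.
rng = np.random.default_rng(0)
theta = rng.standard_normal(4)
state = {"m1": np.zeros_like(theta), "m2": np.zeros_like(theta),
         "nu": np.zeros_like(theta), "t": 0}
print("initial loss:", np.sum(theta ** 2))
for _ in range(2000):
    grad = 2 * theta                      # gradient of sum(x^2)
    theta = ademamix_step(theta, grad, state, lr=0.01)
print("final loss:", np.sum(theta ** 2))  # should be much smaller
```

The only difference from a plain Adam/AdamW step in this sketch is the extra slow EMA m2 and the alpha * m2 term added to the numerator of the update.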