
ReFT (Representation Finetuning for Language Models) is a new finetuning method that adapts a model's hidden representations rather than modifying its weights, making it 10x-50x more parameter-efficient than prior state-of-the-art PEFT methods. It can instruction-tune a large language model in under 20 minutes on a single GPU and produces model artifacts of less than 1 MB.
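
The core idea is a learned low-rank intervention applied to the frozen model's hidden states at chosen layers and token positions. Below is a minimal PyTorch sketch of a LoReFT-style edit of the form h + Rᵀ(Wh + b - Rh), where R has orthonormal rows; the class name, default rank, and standalone wiring (no hooks into an actual LM) are illustrative assumptions here, not the authors' released implementation.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal


class LowRankReprIntervention(nn.Module):
    """Sketch of a LoReFT-style intervention: h' = h + R^T (W h + b - R h).

    R projects the hidden state into a rank-r subspace (rows kept orthonormal
    by the parametrization); W and b produce the target values for that
    subspace. Only these few parameters are trained; the base LM stays frozen.
    """

    def __init__(self, hidden_size: int, rank: int = 4):
        super().__init__()
        # R: rank-r projection with orthonormal rows.
        self.R = orthogonal(nn.Linear(hidden_size, rank, bias=False))
        # W, b: learned map from the hidden state to the edited subspace values.
        self.W = nn.Linear(hidden_size, rank)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (..., hidden_size) hidden states at the chosen layer / token positions.
        delta = self.W(h) - self.R(h)      # (..., rank): target minus current subspace values
        return h + delta @ self.R.weight   # map the rank-r edit back to hidden_size


if __name__ == "__main__":
    hidden_size, rank = 768, 4
    reft = LowRankReprIntervention(hidden_size, rank)
    h = torch.randn(2, 16, hidden_size)               # (batch, seq, hidden)
    print(reft(h).shape)                               # torch.Size([2, 16, 768])
    print(sum(p.numel() for p in reft.parameters()))   # a few thousand parameters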
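```

Because only R, W, and b are trained (roughly 2·d·r + r values per intervention), the saved checkpoint is tiny compared with the base model, which is what makes sub-megabyte artifacts plausible.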
MASSIVE Paper: "ReFT: Representation Finetuning for Language Models" 🔥 📌 10x-50x more parameter-efficient than prior state-of-the-art PEFT methods. 📌 A hallmark of current state-of-the-art PEFTs is that they modify weights rather than representations. However, much prior… https://t.co/N6GZ8I73l8
[CL] ReFT: Representation Finetuning for Language Models Z Wu, A Arora, Z Wang, A Geiger… [Stanford University] (2024) https://t.co/UedrQYSDOP - Pretrained language models (LMs) are commonly finetuned to adapt them to new domains or tasks. However, this is computationally… https://t.co/Ln4DFwvC60
ReFT Representation Finetuning for Language Models Parameter-efficient fine-tuning (PEFT) methods seek to adapt large models via updates to a small number of weights. However, much prior interpretability work has shown that representations encode rich semantic information, https://t.co/NOEy0ejmQ6


