Pretrained language models (LMs) are usually fine-tuned to adapt them to new domains or tasks. While fine-tuning enables adaptation with small amounts of in-domain data, it can be prohibitively expensive for large models.
Stanford researchers propose ReFT (Representation Finetuning), a family of methods that keep the base model frozen and instead learn task-specific interventions on its hidden representations.
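The paper's flagship variant, LoReFT, edits a hidden state h as h + Rᵀ(Wh + b − Rh): the component of h lying in a learned low-rank subspace R is replaced by a learned linear target Wh + b. Below is a minimal PyTorch sketch of that idea; the class name, dimensions, and hook-based wiring are illustrative assumptions, not the authors' pyreft API.

```python
import torch
import torch.nn as nn

class LoReFTIntervention(nn.Module):
    """Sketch of a LoReFT-style intervention: phi(h) = h + R^T (W h + b - R h).

    Only R, W, and b are trained; the base LM stays frozen.
    """
    def __init__(self, hidden_dim: int, rank: int):
        super().__init__()
        # R projects hidden states into a rank-r subspace (the paper keeps
        # its rows orthonormal; a plain orthogonal init is used here for brevity)
        self.R = nn.Parameter(torch.empty(rank, hidden_dim))
        nn.init.orthogonal_(self.R)
        self.proj = nn.Linear(hidden_dim, rank)  # computes W h + b

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Swap the subspace component R h for the learned target W h + b,
        # mapped back to hidden space via R^T
        return h + (self.proj(h) - h @ self.R.T) @ self.R

# Hypothetical usage: freeze the base model, then apply the intervention
# to one layer's hidden states (layer choice here is arbitrary):
#   for p in base_model.parameters():
#       p.requires_grad_(False)
#   intervention = LoReFTIntervention(hidden_dim=768, rank=4)
#   edited = intervention(hidden_states_from_layer_8)
```

Because only R, W, and b carry gradients, the trainable parameter count is tiny relative to the frozen model, which is the source of the efficiency claim.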