Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal
arXiv link: http://arxiv.org/abs/2403.01244v2 - 2024-05-25 12:17:29
Abstract
Large language models (LLMs) suffer from catastrophic forgetting during continual learning. Conventional rehearsal-based methods rely on previous training data to retain the model's abilities, which may not be feasible in real-world applications: when continual learning starts from a publicly released LLM checkpoint, the original training data is often simply unavailable. To address this challenge, we propose Self-Synthesized Rehearsal (SSR), a framework that uses the LLM itself to generate synthetic instances for rehearsal. Concretely, we first employ the base LLM with in-context learning to generate synthetic instances. We then use the latest LLM to refine the outputs of these instances based on their synthetic inputs, preserving the abilities it has acquired. Finally, we select diverse, high-quality synthetic instances for rehearsal in future training stages. Experimental results demonstrate that SSR achieves performance superior or comparable to conventional rehearsal-based approaches while being more data-efficient. Moreover, SSR effectively preserves the generalization capabilities of LLMs in general domains.
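To make the three-stage pipeline concrete, the sketch below outlines how a rehearsal set could be assembled in Python. It is an illustration based only on the abstract, not the authors' released implementation: the callables `base_llm_generate` and `latest_llm_answer` stand in for actual LLM calls, and the de-duplication-plus-sampling step is a placeholder for the paper's diversity and quality selection.

```python
"""Minimal sketch of Self-Synthesized Rehearsal (SSR).

Illustrative only: LLM calls are abstracted as plain callables, and the
selection heuristic is a stand-in for the paper's actual criterion.
"""

import random
from typing import Callable, List, Tuple

Instance = Tuple[str, str]  # (input text, output text)


def build_rehearsal_set(
    base_llm_generate: Callable[[List[Instance]], Instance],
    latest_llm_answer: Callable[[str], str],
    demo_pool: List[Instance],
    n_synthetic: int = 200,
    n_keep: int = 50,
) -> List[Instance]:
    # Stage 1: the *base* LLM synthesizes new instances via in-context
    # learning, conditioned on a few demonstration examples.
    synthetic = [
        base_llm_generate(random.sample(demo_pool, k=min(3, len(demo_pool))))
        for _ in range(n_synthetic)
    ]

    # Stage 2: the *latest* (continually trained) LLM refines the outputs
    # for the synthetic inputs, so the rehearsal targets reflect the
    # abilities the model has already acquired.
    refined = [(inp, latest_llm_answer(inp)) for inp, _ in synthetic]

    # Stage 3: keep a diverse, high-quality subset for rehearsal in later
    # stages (approximated here by input de-duplication plus sampling).
    unique = list({inp: (inp, out) for inp, out in refined}.values())
    return random.sample(unique, k=min(n_keep, len(unique)))
```

In a full continual-learning run, the returned instances would be mixed with each new stage's training data before fine-tuning the latest checkpoint, so earlier abilities are rehearsed without any access to the original training data.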
Socials
| X |
|---|
| 🚀 Exciting News in AI Research 🚀 Large language models (LLMs) face a significant challenge in continual learning due to catastrophic forgetting. Conventional methods rely on previous training data for rehearsal, which may not be practical in real-world scenarios. Our latest research introduces a new framework, Self-Synthesized Rehearsal (SSR), to tackle this issue. 🔍 SSR leverages LLMs to generate synthetic instances for rehearsal, enabling continual learning without access to the original training data. The base LLM creates synthetic instances via in-context learning, the latest LLM refines their outputs to preserve the model's acquired abilities, and diverse, high-quality synthetic instances are then selected for future rehearsal, keeping the approach data-efficient. 📊 Experiments show that SSR matches or outperforms traditional rehearsal-based methods while preserving the generalization capabilities of LLMs in general domains. Learn more in our research paper: http://arxiv.org/abs/2403.01244v2 #AI #LLM #ContinualLearning #Research #TechInnovation #NLP #SSR #ArtificialIntelligence #DataEfficiency |
| 🚀 Exciting research alert! Addressing catastrophic forgetting in Large Language Models (LLMs) during continual learning, a new framework called Self-Synthesized Rehearsal (SSR) has been proposed. SSR generates synthetic instances for rehearsal, achieving superior or comparable performance while being more data-efficient. Check out the results here: http://arxiv.org/abs/2403.01244v2 #AI #NLP #LLM #Research #TechInnovation 🤖📚 |