
Federated Domain-Specific Knowledge Transfer on Large Language Models Using Synthetic Data

arXiv Link - 2024-05-23 06:14:35

Abstract

As large language models (LLMs) demonstrate unparalleled performance and generalization ability, they are widely used and integrated into various applications. In sensitive domains, as commonly described in federated learning scenarios, directly using external LLMs on private data is strictly prohibited by stringent data security and privacy regulations. For local clients, characterized by limited computational resources and domain-specific data, using LLMs to improve domain-specific small language models (SLMs) has attracted considerable research attention. Observing that LLMs can empower domain-specific SLMs, existing methods predominantly concentrate on leveraging public data or LLMs to generate additional data for transferring knowledge from LLMs to SLMs. However, due to the discrepancy between LLM-generated data and clients' domain-specific data, these methods cannot yield substantial improvements on domain-specific tasks. In this paper, we introduce a Federated Domain-specific Knowledge Transfer (FDKT) framework, which enables domain-specific knowledge transfer from LLMs to SLMs while preserving clients' data privacy. The core insight is to leverage the LLM to augment data based on domain-specific few-shot demonstrations, which are synthesized from private domain data using differential privacy. Such synthetic samples share a similar data distribution with clients' private data and allow the server LLM to generate particular knowledge to improve clients' SLMs. Extensive experimental results demonstrate that the proposed FDKT framework consistently and substantially improves SLMs' task performance, by around 5% with a privacy budget of less than 10, compared to local training on private data.
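The abstract does not spell out the differentially private synthesis step, so the following is only an illustrative sketch of one standard building block such a pipeline could use: releasing a per-topic count histogram from a client's private corpus under pure ε-DP via the Laplace mechanism, then using the noisy counts to weight which topics appear in the few-shot demonstrations sent to the server LLM. All names and the histogram-based design here are assumptions, not FDKT's actual method.

```python
import numpy as np

def laplace_private_histogram(counts, epsilon, sensitivity=1.0, seed=None):
    """Release a count histogram under epsilon-differential privacy
    using the Laplace mechanism (noise scale = sensitivity / epsilon).

    If adding or removing one private record changes each count by at
    most `sensitivity`, the noisy release satisfies epsilon-DP.
    """
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    noisy = np.asarray(counts, dtype=float) + rng.laplace(0.0, scale, size=len(counts))
    # Clipping negatives is post-processing, which never weakens the DP guarantee.
    return np.clip(noisy, 0.0, None)

# Hypothetical usage: privatize per-topic document counts from a client's
# private corpus; the noisy histogram then guides which domain topics the
# synthetic few-shot prompts for the server LLM should emphasize.
private_counts = [120, 45, 8, 0]  # illustrative per-topic counts
released = laplace_private_histogram(private_counts, epsilon=8.0, seed=0)
```

With ε = 8 (within the paper's stated budget of less than 10), the noise scale is only 1/8, so the released histogram stays close to the true counts while carrying a formal privacy guarantee; smaller ε would trade utility for stronger privacy.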

Socials

LinkedIn

🚀 Exciting developments in AI and privacy protection! 🛡️ Our latest research introduces the Federated Domain-specific Knowledge Transfer (FDKT) framework, enabling secure knowledge transfer from large language models (LLMs) to small language models (SLMs) while safeguarding clients' data privacy. By leveraging LLMs to augment data based on domain-specific few-shot demonstrations synthesized from private domain data using differential privacy, FDKT significantly boosts SLMs' task performance by approximately 5% with a privacy budget of less than 10. Learn more about this cutting-edge approach here: http://arxiv.org/abs/2405.14212v1 #AI #PrivacyProtection #FDKT #LLMs #SLMs #Innovation 🌟

X

🚀 Exciting new research on the Federated Domain-specific Knowledge Transfer (FDKT) framework for enhancing small language models (SLMs) using large language models (LLMs) while ensuring data privacy. The FDKT framework yields significant performance improvements in domain-specific tasks. Check out the full paper here: http://arxiv.org/abs/2405.14212v1 #AI #NLP #LLMs #PrivacyPreservation
