Federated Domain-Specific Knowledge Transfer on Large Language Models Using Synthetic Data

arXiv Link - 2024-05-23 06:14:35

Abstract

As large language models (LLMs) demonstrate unparalleled performance and generalization ability, they are widely used and integrated into various applications. In sensitive domains, as commonly described in federated learning scenarios, directly using external LLMs on private data is strictly prohibited by stringent data security and privacy regulations. For local clients, characterized by limited computational resources and domain-specific data, using LLMs to improve domain-specific small language models (SLMs) has attracted considerable research attention. Observing that LLMs can empower domain-specific SLMs, existing methods predominantly concentrate on leveraging public data or LLMs to generate more data for transferring knowledge from LLMs to SLMs. However, due to the discrepancies between LLM-generated data and clients' domain-specific data, these methods cannot yield substantial improvements on domain-specific tasks. In this paper, we introduce a Federated Domain-specific Knowledge Transfer (FDKT) framework, which enables domain-specific knowledge transfer from LLMs to SLMs while preserving clients' data privacy. The core insight is to leverage the LLM to augment data based on domain-specific few-shot demonstrations, which are synthesized from private domain data using differential privacy. Such synthetic samples share a similar data distribution with clients' private data and allow the server LLM to generate particular knowledge to improve clients' SLMs. Extensive experimental results demonstrate that the proposed FDKT framework consistently and substantially improves SLMs' task performance, by around 5% with a privacy budget of less than 10, compared to local training on private data.
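The abstract describes one FDKT round: the client synthesizes differentially private few-shot demonstrations from its domain data, the server LLM augments data from those demonstrations, and the client trains its SLM on the result. The toy sketch below illustrates only this data flow; all function names (`dp_synthesize`, `server_augment`) and the Laplace-noise release are illustrative assumptions, not the paper's actual mechanism or code.

```python
import math
import random

random.seed(0)

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) variate (stdlib only).
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_synthesize(private_records, epsilon, k=3):
    """Toy stand-in for the client's DP synthesis step: release k
    demonstrations whose numeric label is perturbed with Laplace noise
    calibrated to sensitivity / epsilon (hypothetical mechanism)."""
    sensitivity = 1.0
    scale = sensitivity / epsilon
    demos = random.sample(private_records, k)
    return [(text, label + laplace_noise(scale)) for text, label in demos]

def server_augment(demonstrations, n_new=5):
    """Placeholder for the server LLM's augmentation: here it simply emits
    paraphrase-like variants of the DP demonstrations."""
    out = []
    for i in range(n_new):
        text, label = demonstrations[i % len(demonstrations)]
        out.append((f"[aug {i}] {text}", label))
    return out

# Client-side private (text, label) pairs -- never sent to the server.
private_data = [(f"clinical note {j}", float(j % 2)) for j in range(10)]

epsilon = 8.0  # total privacy budget below 10, matching the abstract
demos = dp_synthesize(private_data, epsilon)      # leaves the client
augmented = server_augment(demos)                 # produced on the server
# The client would now fine-tune its SLM on `augmented` plus local data.
print(len(demos), len(augmented))
```

Only the DP demonstrations cross the trust boundary; the raw private records stay on the client, which is the privacy property the framework relies on.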

Socials

LinkedIn
🚀 Exciting news in the field of AI and privacy protection! 🛡️ Our latest research introduces the Federated Domain-specific Knowledge Transfer (FDKT) framework, enabling the transfer of domain-specific knowledge from large language models (LLMs) to small language models (SLMs) while safeguarding clients' data privacy.

📈 The FDKT framework leverages LLMs to augment data based on domain-specific few-shot demonstrations synthesized from private domain data using differential privacy. This approach ensures that the synthetic samples share a similar data distribution with clients' private data, leading to significant improvements in SLMs' task performance by approximately 5% with a privacy budget of less than 10, compared to local training on private data.

🔍 Dive deeper into the results and methodology by checking out the full research paper at: http://arxiv.org/abs/2405.14212v1

#AI #PrivacyProtection #LLMs #FDKT #Research #DataPrivacy #TechInnovation
X

🚀 Exciting research in enhancing domain-specific small language models (SLMs) using the Federated Domain-specific Knowledge Transfer (FDKT) framework! 🤖📚 Learn how to transfer knowledge from LLMs to SLMs while preserving data privacy: http://arxiv.org/abs/2405.14212v1 #AI #NLP #LLMs #DataPrivacy #Research #FDKT 📊🔒