
Differentially Private Synthetic Data via Foundation Model APIs 2: Text

arXiv: http://arxiv.org/abs/2403.01749v1 (2024-03-04)

Abstract

Text data has become extremely valuable due to the emergence of machine learning algorithms that learn from it. A lot of high-quality text data generated in the real world is private and therefore cannot be shared or used freely due to privacy concerns. Generating synthetic replicas of private text data with a formal privacy guarantee, i.e., differential privacy (DP), offers a promising and scalable solution. However, existing methods necessitate DP finetuning of large language models (LLMs) on private data to generate DP synthetic data. This approach is not viable for proprietary LLMs (e.g., GPT-3.5) and also demands considerable computational resources for open-source LLMs. Lin et al. (2024) recently introduced the Private Evolution (PE) algorithm to generate DP synthetic images with only API access to diffusion models. In this work, we propose an augmented PE algorithm, named Aug-PE, that applies to the complex setting of text. We use API access to an LLM and generate DP synthetic text without any model training. We conduct comprehensive experiments on three benchmark datasets. Our results demonstrate that Aug-PE produces DP synthetic text that yields competitive utility with the SOTA DP finetuning baselines. This underscores the feasibility of relying solely on API access of LLMs to produce high-quality DP synthetic texts, thereby facilitating more accessible routes to privacy-preserving LLM applications. Our code and data are available at https://github.com/AI-secure/aug-pe.
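At a high level, Private Evolution iterates between generating candidate samples via API calls and having the private data vote, under differential privacy, on which candidates to keep and mutate. The toy sketch below illustrates that loop; `mock_llm_api`, the Jaccard `similarity` stand-in for embedding distance, and all parameter values are illustrative assumptions, not the paper's actual implementation (which uses a real LLM API, embedding models, and calibrated noise).

```python
import random

def similarity(a, b):
    # Toy stand-in for embedding similarity: token-set Jaccard overlap.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def mock_llm_api(prompt_texts, n):
    # Stand-in for the RANDOM_API / VARIATION_API calls to a real LLM:
    # returns n rough "variations" of the given prompts.
    pool = prompt_texts or ["synthetic seed text"]
    return [random.choice(pool) + " variant" for _ in range(n)]

def dp_nearest_neighbor_histogram(private_texts, candidates, sigma):
    # Each private sample votes for its most similar candidate; Gaussian
    # noise on the vote histogram is what provides the DP guarantee.
    votes = [0.0] * len(candidates)
    for text in private_texts:
        best = max(range(len(candidates)),
                   key=lambda i: similarity(text, candidates[i]))
        votes[best] += 1.0
    return [v + random.gauss(0.0, sigma) for v in votes]

def private_evolution(private_texts, n_synthetic=4, iterations=3, sigma=1.0):
    # Start from random API samples; no model training anywhere.
    candidates = mock_llm_api([], n_synthetic)
    for _ in range(iterations):
        hist = dp_nearest_neighbor_histogram(private_texts, candidates, sigma)
        # Keep the candidates the (noisy) private votes favor...
        ranked = sorted(range(len(candidates)), key=lambda i: -hist[i])
        survivors = [candidates[i] for i in ranked[: max(1, n_synthetic // 2)]]
        # ...and ask the API for fresh variations of the survivors.
        candidates = survivors + mock_llm_api(
            survivors, n_synthetic - len(survivors))
    return candidates

random.seed(0)
private = ["patients reported mild headache", "patient notes mention headache"]
synthetic = private_evolution(private)
print(len(synthetic))  # n_synthetic texts, produced with API access only
```

Aug-PE augments this basic loop for text (e.g., richer variation prompts and candidate pools), but the structure — API generation, DP nearest-neighbor voting, selection, variation — is the core idea.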

Socials

LinkedIn
🚀 Exciting developments in privacy-preserving text data generation! Researchers have introduced the Aug-PE algorithm, a novel approach that leverages API access to large language models (LLMs) to create differentially private (DP) synthetic text without any model training. This offers a scalable way to generate high-quality synthetic text data with formal privacy guarantees.

Building on the Private Evolution (PE) algorithm that Lin et al. (2024) introduced for images, the study presents compelling results, showing that Aug-PE produces DP synthetic text whose utility rivals state-of-the-art DP finetuning methods. By relying only on API access to LLMs, the algorithm paves the way for more accessible and efficient privacy-preserving LLM applications.

For those interested in delving deeper into the research and exploring the code and data, check out the full paper at: http://arxiv.org/abs/2403.01749v1

#AI #NLP #LLMs #PrivacyPreservation #TextGeneration #TechInnovation
X

🚀 Exciting innovation in privacy-preserving text generation! Check out the Aug-PE algorithm, which generates DP synthetic text without model training. Results show competitive utility with SOTA methods. Learn more at: http://arxiv.org/abs/2403.01749v1 #AI #NLP #LLMs #PrivacyPreservation
