Background
Type: Article

ADAM-DPGAN: a differential private mechanism for generative adversarial network

Journal: Applied Intelligence (ISSN 0924-669X)
Year: May 2023
Volume: 53
Issue:
Pages: 11142 - 11161
DOI: 10.1007/s10489-022-03902-9
Language: English

Abstract

Privacy-preserving data release is a major concern in many data mining applications. Using Generative Adversarial Networks (GANs) to generate an unlimited number of synthetic samples is a popular alternative to sharing raw data. However, GAN models are known to implicitly memorize details of the sensitive data used for training. To address this, this paper proposes ADAM-DPGAN, which guarantees differential privacy of the training data for GAN models. ADAM-DPGAN bounds the maximum effect of each sensitive training record on the model parameters at each step of the learning procedure when the Adam optimizer is used, and adds appropriately calibrated noise to the parameters during training. ADAM-DPGAN leverages the Rényi differential privacy accountant to track the privacy budget spent. In contrast to prior work, by accurately determining the effect of each training record, this method perturbs the parameters more precisely and generates higher-quality outputs, while provably preserving the convergence properties of the non-private GAN counterparts without privacy leakage. Through experimental evaluations on different image datasets, ADAM-DPGAN is compared to previous methods and its superiority is demonstrated in terms of visual quality, realism and diversity of generated samples, convergence of training, and resistance to membership inference attacks. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
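To make the abstract's recipe concrete, the following is a minimal sketch of one differentially private optimizer step in the general DP-SGD style (bound each record's influence, add Gaussian noise, then apply an Adam update). It is an illustration under assumed hyperparameters, not the paper's exact mechanism: ADAM-DPGAN derives a tighter per-record effect bound on the parameters themselves, whereas this sketch uses simple per-sample gradient clipping; the function name and all constants are hypothetical.

```python
import numpy as np

def dp_adam_step(per_sample_grads, m, v, params, t,
                 clip_norm=1.0, noise_mult=1.1,
                 lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One DP optimizer step in the DP-SGD style, followed by Adam.

    Illustrative sketch only: ADAM-DPGAN itself bounds each record's
    effect on the parameters more precisely than plain clipping.
    """
    # 1. Bound each training record's influence by clipping its gradient norm.
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    g_bar = np.mean(clipped, axis=0)

    # 2. Add Gaussian noise calibrated to the clipping bound (the sensitivity).
    sigma = noise_mult * clip_norm / len(per_sample_grads)
    g_noisy = g_bar + np.random.normal(0.0, sigma, size=g_bar.shape)

    # 3. Standard Adam moment updates with bias correction (t is the step count).
    m = beta1 * m + (1 - beta1) * g_noisy
    v = beta2 * v + (1 - beta2) * g_noisy ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v
```

In a DP-GAN, a step like this would be applied to the discriminator (which touches the sensitive data), with the cumulative privacy loss over all steps tracked by a Rényi differential privacy accountant.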