Background
Type: Book Chapter

Privacy in Generative Models: Attacks and Defense Mechanisms

Date: 1 January 2024
Pages: 65–89
DOI: 10.1007/978-3-031-46238-2_4
Language: English

Abstract

The ability of generative models to generate synthetic samples whose distribution closely matches that of real data brings benefits in many applications. One of the key factors in the success of generative models, however, is the data used to train them, and preserving the privacy of this data is essential. Various studies have shown that the high capacity of generative models leads them to memorize details of the training data, and a range of attacks against generative models have been demonstrated that infer information about the training data from the trained model. In response, many privacy-preserving mechanisms have been proposed to defend against these attacks. In this chapter, after introducing the topic, the privacy attacks against generative models and the relevant defense mechanisms are discussed; in particular, the attacks and the related privacy-preserving methods are categorized and examined. Finally, some challenges and future research directions are outlined.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.