Abstract
Generative adversarial networks (GANs) are among the most significant advances in AI in recent years. Because they can directly learn the probability distribution of data and then sample realistic synthetic data, many applications have emerged that use GANs to solve classical problems in machine learning, such as data augmentation, class imbalance, and fair representation learning. In this paper, we analyze and highlight the fairness concerns of GANs. Specifically, we show empirically that GAN models may inherently favor certain groups during training and are therefore unable to generate data homogeneously across groups at test time. Furthermore, we propose solutions to this issue: conditioning the GAN model on samples' group labels, or using an ensemble method (boosting) that allows the GAN model to exploit the distributed structure of the data during training and generate all groups at an equal rate at test time.
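To illustrate the first remedy mentioned in the abstract (conditioning the GAN on samples' groups), the following is a minimal sketch in PyTorch, not the authors' implementation: the number of groups, the data and noise dimensionalities, and the network sizes are all assumed purely for illustration. Both the generator and discriminator receive a group-label embedding, so at test time group labels can be drawn uniformly to produce every group at an equal rate.

```python
# Minimal sketch of a group-conditional GAN (illustrative assumptions throughout:
# NUM_GROUPS, LATENT_DIM, DATA_DIM, and layer sizes are placeholders, not the paper's values).
import torch
import torch.nn as nn

NUM_GROUPS = 2      # assumed number of protected groups
LATENT_DIM = 64     # assumed noise dimensionality
DATA_DIM = 10       # assumed (tabular) feature dimensionality


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.group_emb = nn.Embedding(NUM_GROUPS, NUM_GROUPS)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_GROUPS, 128), nn.ReLU(),
            nn.Linear(128, DATA_DIM),
        )

    def forward(self, z, group):
        # Concatenate noise with the group embedding so samples are group-conditional.
        return self.net(torch.cat([z, self.group_emb(group)], dim=1))


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.group_emb = nn.Embedding(NUM_GROUPS, NUM_GROUPS)
        self.net = nn.Sequential(
            nn.Linear(DATA_DIM + NUM_GROUPS, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x, group):
        # The critic also sees the group label, so it judges real/fake per group.
        return self.net(torch.cat([x, self.group_emb(group)], dim=1))


# At test time, draw group labels uniformly to generate all groups at an equal rate.
gen, disc = Generator(), Discriminator()
groups = torch.randint(0, NUM_GROUPS, (32,))   # uniform over groups
z = torch.randn(32, LATENT_DIM)
fake = gen(z, groups)                          # balanced synthetic batch
scores = disc(fake, groups)                    # group-conditional critic scores
```

The boosting-based alternative described in the abstract would instead train an ensemble of generators, reweighting toward underrepresented groups; it is not sketched here since the abstract gives no further detail.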
| Original language | English |
| --- | --- |
| Title of host publication | 2021 International Conference "Nonlinearity, Information and Robotics", NIR 2021 |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Number of pages | 8 |
| ISBN (Electronic) | 9781665424066 |
| DOIs | |
| Publication status | Unpublished - 1 Mar 2021 |
| Event | 2021 International Conference "Nonlinearity, Information and Robotics", NIR 2021 - Innopolis, Russian Federation; Duration: 26 Aug 2021 → 29 Aug 2021 |
Publication series
| Name | 2021 International Conference "Nonlinearity, Information and Robotics", NIR 2021 |
| --- | --- |
Conference
| Conference | 2021 International Conference "Nonlinearity, Information and Robotics", NIR 2021 |
| --- | --- |
| Country/Territory | Russian Federation |
| City | Innopolis |
| Period | 26/08/21 → 29/08/21 |
Bibliographical note
Publisher Copyright: © 2021 IEEE.
Keywords
- Fairness
- Generative Adversarial Networks
- Group Imbalance
- Representation Bias