Digestive neural networks: A novel defense strategy against inference attacks in federated learning

Hongkyu Lee, Jeehyeong Kim, Seyoung Ahn, Rasheed Hussain, Sunghyun Cho, Junggab Son*

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

41 Citations (Scopus)
136 Downloads (Pure)

Abstract

Federated Learning (FL) is an efficient and secure machine learning technique designed for decentralized computing systems such as fog and edge computing. Its learning process relies on frequent communication: the participating local devices send updates, either gradients or parameters of their models, to a central server that aggregates them and redistributes new weights to the devices. Because private data never leaves the individual local devices, FL is regarded as a robust solution for privacy preservation. However, recently introduced membership inference attacks pose a critical threat to this privacy guarantee. By eavesdropping only on the updates sent to the central server, these attacks can recover the private data of a local device. A prevalent defense against such attacks is differential privacy, which adds a sufficient amount of noise to each update to hinder the recovery process; however, it significantly sacrifices the classification accuracy of the FL model. To effectively alleviate this problem, this paper proposes the Digestive Neural Network (DNN), an independent neural network attached to the FL model. The private data owned by each device first passes through the DNN, and the transformed output is then used to train the FL model. The DNN modifies the input data, and thereby distorts the updates, so as to maximize the classification accuracy of the FL model while minimizing the accuracy of inference attacks. Our simulation results show that the proposed DNN performs well on both gradient sharing- and weight sharing-based FL mechanisms. For gradient sharing, the DNN achieved 16.17% higher classification accuracy and 9% lower attack accuracy than the existing differential privacy schemes. For the weight sharing FL scheme, the DNN achieved an attack success rate up to 46.68% lower with 3% higher classification accuracy.
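
To make the abstract's core idea concrete, below is a minimal PyTorch sketch of a client-private "digestive" network g placed in front of the shared FL model f: each client trains on g(x) rather than on the raw x, so the gradients and weights communicated to the server are computed on transformed data. The class name DigestiveNet, the two-layer architecture, the weighting factor alpha, and the negative-MSE distortion term are all illustrative assumptions for exposition; the paper's exact network and loss formulation are not given in this record.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DigestiveNet(nn.Module):
    """Client-private input transformer (hypothetical architecture)."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, in_dim), nn.ReLU(),
            nn.Linear(in_dim, in_dim),
        )

    def forward(self, x):
        return self.net(x)

def local_step(digestive, shared_model, x, y, opt, alpha=1.0):
    """One client update: classify well on digested inputs while pushing
    the digested inputs away from the raw ones (assumed distortion term)."""
    opt.zero_grad()
    digested = digestive(x)
    logits = shared_model(digested)
    task_loss = F.cross_entropy(logits, y)
    # Encourage g(x) to differ from x so that eavesdropped updates,
    # which are computed on g(x), reveal less about the raw data.
    distortion = -F.mse_loss(digested, x)
    loss = task_loss + alpha * distortion
    loss.backward()
    opt.step()
    # Only the shared model's update would be communicated in FL;
    # the digestive network stays on the device.
    return {name: p.grad.clone() for name, p in shared_model.named_parameters()}

# Example usage on synthetic data:
x = torch.randn(32, 64)
y = torch.randint(0, 10, (32,))
g = DigestiveNet(64)
f_shared = nn.Linear(64, 10)
opt = torch.optim.SGD(list(g.parameters()) + list(f_shared.parameters()), lr=0.01)
grads = local_step(g, f_shared, x, y, opt)
```

Unlike differential privacy, which perturbs the update itself with noise, this scheme distorts the training inputs, which is why the abstract reports that classification accuracy can be preserved while attack accuracy drops.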

Original language: English
Article number: 102378
Journal: Computers and Security
Volume: 109
Early online date: 23 Jun 2021
DOIs
Publication status: Published - Oct 2021

Bibliographical note

Funding Information:
This work was supported by the MSIT (Ministry of Science, ICT), Korea, under the High-Potential Individuals Global Training Program (2019-0-01601) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).

Publisher Copyright:
© 2021

Keywords

  • AI Security
  • Digestive neural networks
  • Federated learning (FL)
  • Federated learning security
  • Inference attack
  • ML Security
  • t-SNE analysis
  • White-box assumption
