Journal of Computer and Communications

Volume 9, Issue 5 (May 2021)

ISSN Print: 2327-5219   ISSN Online: 2327-5227

Google-based Impact Factor: 1.12

Defense against Membership Inference Attack Applying Domain Adaptation with Addictive Noise

PP. 92-108
DOI: 10.4236/jcc.2021.95007
Author(s): Huang, H.

ABSTRACT

Deep learning trains models on datasets to solve tasks. Although deep learning has attracted much interest owing to its excellent performance, security issues have gradually been exposed. In particular, deep learning models may be vulnerable to membership inference attacks, in which an attacker determines whether a given sample was part of the training dataset. In this paper, we propose a new defense mechanism against membership inference: NoiseDA. In our proposal, the model is not trained directly on the sensitive dataset; instead, domain adaptation is leveraged to alleviate the threat of membership inference. In addition, a module called Feature Crafter is designed to reduce the number of required training datasets from two to one by creating features for domain-adaptation training through an additive-noise mechanism. Our experiments show that, with noise properly added by Feature Crafter, our proposal can reduce the success rate of membership inference attacks with a controllable utility loss.
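The core idea sketched in the abstract is that a noisy copy of the real feature set can stand in for the second domain that domain-adaptation training normally requires. The snippet below is a minimal, hypothetical illustration of such an additive-noise step; the function name `craft_noisy_features`, the choice of Gaussian noise, and the `noise_scale` parameter are assumptions for illustration, not the paper's actual Feature Crafter implementation.

```python
import numpy as np

def craft_noisy_features(features, noise_scale=0.1, seed=None):
    """Return a copy of `features` with zero-mean Gaussian noise added.

    Hypothetical sketch of an additive-noise step: the noisy copy acts
    as a synthetic second "domain" for domain-adaptation training, so
    only one real (sensitive) dataset is needed. Larger `noise_scale`
    hides membership signals more strongly but costs more utility.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=0.0, scale=noise_scale, size=features.shape)
    return features + noise

# Example: craft a synthetic domain from one batch of feature vectors.
x = np.ones((4, 8))                        # 4 samples, 8-dim features
x_noisy = craft_noisy_features(x, noise_scale=0.05, seed=0)
```

In this sketch, `noise_scale` plays the role of the tunable knob the abstract alludes to: it trades attack success rate against model utility.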

Share and Cite:

Huang, H. (2021) Defense against Membership Inference Attack Applying Domain Adaptation with Addictive Noise. Journal of Computer and Communications, 9, 92-108. doi: 10.4236/jcc.2021.95007.

Cited by

[1] Membership Feature Disentanglement Network
Proceedings of the 2022 ACM on Asia …, 2022

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.