Active learning holds the promise of significantly reducing data annotation costs while maintaining reasonable model performance. However, it requires sending data to annotators for labeling, which presents a potential privacy leak when the training set includes sensitive user data. In this paper, we describe an approach for carrying out privacy-preserving active learning with quantifiable guarantees. We evaluate our approach by showing the trade-off between privacy, utility, and annotation budget on a binary classification task in an active learning setting.
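The abstract does not spell out the mechanism behind the quantifiable guarantees. As a rough illustration only, the sketch below shows one way a privacy-aware, pool-based active learning loop might look: instead of deterministically picking the most uncertain point to send to annotators, the query is sampled via an exponential-mechanism-style randomized selection. All names, parameters, and the choice of mechanism here are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: pool-based active learning with a randomized
# (exponential-mechanism-style) query selection step. This is NOT the
# paper's algorithm; data, epsilon, and budget values are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic unlabeled pool and a small labeled seed set (placeholders).
X_pool = rng.normal(size=(1000, 20))
labels = (X_pool[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)
labeled_idx = list(rng.choice(1000, size=20, replace=False))

annotation_budget = 100   # number of points we may send to annotators
epsilon_per_query = 0.5   # assumed per-query privacy parameter

model = LogisticRegression(max_iter=1000)

for _ in range(annotation_budget):
    model.fit(X_pool[labeled_idx], labels[labeled_idx])

    # Uncertainty score: how close the predicted probability is to 0.5.
    probs = model.predict_proba(X_pool)[:, 1]
    uncertainty = 1.0 - 2.0 * np.abs(probs - 0.5)

    # Randomized selection: sample the next query with probability
    # proportional to exp(eps * score / 2) instead of a hard argmax,
    # so which user's example leaves the system is not deterministic.
    scores = uncertainty.copy()
    scores[labeled_idx] = -np.inf            # never re-query labeled points
    weights = np.exp(epsilon_per_query * scores / 2.0)
    weights /= weights.sum()
    query = rng.choice(len(X_pool), p=weights)

    labeled_idx.append(query)                # "send to annotator", record label

print("labeled examples after the loop:", len(labeled_idx))
```

In practice, a formal privacy accounting would also have to bound the sensitivity of the scoring function and track the cumulative budget over all queries; the loop above only illustrates the overall shape of such a procedure.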
| Title of host publication | Proceedings of the PAL: Privacy-Enhancing Artificial Intelligence and Language Technologies |
| Subtitle of host publication | As Part of the AAAI Spring Symposium Series (AAAI-SSS 2019) |
| Publisher | CEUR Workshop Proceedings |
| Publication status | Published - 26 Mar 2019 |
| Series name | CEUR Workshop Proceedings |
To appear at PAL: Privacy-Enhancing Artificial Intelligence and Language Technologies as part of the AAAI Spring Symposium Series (AAAI-SSS 2019)