Recent advances in neural-network-based language models have led to successful
deployments of such models, improving user experience in various applications.
It has been demonstrated that the strong performance of language models comes
with the ability to memorize rare training samples, which poses serious privacy
threats when the model is trained on confidential user content. In this work,
we introduce a methodology for identifying the user content in the training
data that could be leaked under a strong and realistic threat model. We propose
two metrics to quantify user-level data leakage by measuring a model's ability
to produce unique sentence fragments from its training data.
Our metrics further enable comparing different models trained on the same data
in terms of privacy. We demonstrate our approach through extensive numerical
studies on both RNN- and Transformer-based models. We further illustrate how the
proposed metrics can be utilized to investigate the efficacy of mitigations
like differentially private training or API hardening.
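
To make the leakage measurement concrete, the sketch below estimates a user-level
leakage rate as the fraction of a user's unique training fragments that a model
reproduces verbatim when prompted with their prefixes. It is a minimal illustration
assuming a Hugging Face-style causal LM ("gpt2" as a stand-in); the greedy-decoding
attack, the exact-match criterion, and the example fragments are illustrative
choices for this sketch, not the metrics defined in the paper.

```python
# Hypothetical sketch: estimate user-level leakage as the fraction of a user's
# unique training fragments that the model completes verbatim from a prefix.
# Model name, split point, and exact-match criterion are illustrative, not the
# paper's exact metrics.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()


def completes_fragment(prefix: str, continuation: str, max_new_tokens: int = 20) -> bool:
    """Return True if greedy decoding from `prefix` reproduces `continuation`."""
    inputs = tokenizer(prefix, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy decoding: a deterministic, strong adversary
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens after the prefix.
    generated = tokenizer.decode(
        output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return generated.strip().startswith(continuation.strip())


def user_leakage(unique_fragments: list[tuple[str, str]]) -> float:
    """Fraction of a user's unique (prefix, continuation) pairs the model leaks."""
    if not unique_fragments:
        return 0.0
    leaked = sum(completes_fragment(p, c) for p, c in unique_fragments)
    return leaked / len(unique_fragments)


# Hypothetical fragments assumed to appear only in one user's training data.
fragments = [
    ("My bank account number is", " 12345678"),
    ("Please ship the package to", " 42 Elm Street, Springfield"),
]
print(f"Estimated leakage for this user: {user_leakage(fragments):.2f}")
```

Aggregating such per-user rates across all users gives one way to compare, under
the same prompts, how much a model trained with and without mitigations such as
differentially private training exposes user-specific content.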

Authors: Huseyin A. Inan, Osman Ramadan, Lukas Wutschitz, Daniel Jones, Victor Rühle, James Withers, Robert Sim
