Nowadays, with the rapid growth of user data, more companies train business-specific machine learning models, especially neural networks, to improve service quality. Nevertheless, current machine learning applications are still a one-way trip for user data: once users contribute their data, there is no way to retract the contribution. Such an irreversible setting carries two risks: 1) from a legislative point of view, many national regulations emphasize that users should have the right to remove their personal data; 2) from a security point of view, the unintended memorization of a neural network makes it easier for an adversary to extract users' sensitive information. Consequently, memorization elimination for machine learning models has become a popular research topic.

Considering that there is no uniform indicator for evaluating memorization
elimination methods, we explore the concept of membership inference and define a
novel indicator, called the forgetting rate. It describes the rate at which the
eliminated data transform from “memorized” to “unknown” after memorization
elimination is performed. Furthermore, we propose Forsaken, a method that
allows users to eliminate the unintended memorization of their private data
from a trained neural network. The unintended memorization here is formed by
out-of-distribution but sensitive data inadvertently uploaded by users.
Compared to prior work, our method avoids retraining, achieves a higher
forgetting rate, and causes less accuracy loss, thanks to a trainable dummy
gradient generator.
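One plausible reading of the forgetting rate described above is the fraction of eliminated samples that a membership-inference attack labels “member” before elimination and “non-member” after it. The sketch below illustrates that reading; the helper name and exact formula are our illustration, not the paper's formal definition.

```python
def forgetting_rate(member_before, member_after):
    """Fraction of eliminated samples that flip from 'memorized'
    (membership inference says member) to 'unknown' (non-member)
    after memorization elimination.

    member_before / member_after: lists of booleans, one per
    eliminated sample, giving the membership-inference verdict
    before and after elimination.
    """
    memorized = [i for i, m in enumerate(member_before) if m]
    if not memorized:
        return 0.0  # nothing was memorized, so nothing to forget
    forgotten = sum(1 for i in memorized if not member_after[i])
    return forgotten / len(memorized)
```

For example, if four of five eliminated samples were judged members before elimination and three of those four are judged non-members afterwards, the forgetting rate under this reading is 0.75.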

Authors: Yang Liu, Zhuo Ma, Ximeng Liu, Jian Liu, Zhongyuan Jiang, Jianfeng Ma, Philip Yu, Kui Ren
