Deep neural networks have achieved unprecedented success in face recognition,
to the point that any individual can crawl others' images from the Internet
without their explicit permission and train a high-precision face recognition
model on them, a serious violation of privacy. Recently, a well-known system
named Fawkes (published at USENIX Security 2020) claimed that this privacy
threat can be neutralized by having users upload cloaked images instead of
their originals. In this paper, we present Oriole, a system that combines the
advantages of data poisoning attacks and evasion attacks to thwart the
protection offered by Fawkes, by training the attacker's face recognition
model on multi-cloaked images generated by Oriole. Consequently, the
recognition accuracy of the attack model is maintained and the weaknesses of
Fawkes are revealed. Experimental results show that our proposed Oriole
system effectively interferes with the performance of the Fawkes system and
achieves promising attack results. Our ablation study highlights multiple
principal factors that affect the performance of Oriole, including the DSSIM
perturbation budget, the ratio of leaked clean user images, and the number of
multi-cloaks per uncloaked image. We also identify and discuss at length the
vulnerabilities of Fawkes. We hope that the new methodology presented in this
paper will alert the security community to the need to design more robust
privacy-preserving deep learning models.
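As an illustration only, the sketch below shows how the DSSIM perturbation
budget mentioned above is commonly computed (DSSIM = (1 - SSIM) / 2), and how
an attacker's training set might mix multi-cloaked images with a given ratio
of leaked clean images. The function names, the budget value, and the set
composition are our own assumptions for exposition, not the Oriole or Fawkes
implementation.

```python
# Minimal sketch (assumed names and values, not the authors' code).
import numpy as np
from skimage.metrics import structural_similarity


def dssim(original: np.ndarray, cloaked: np.ndarray) -> float:
    """Structural dissimilarity between two images: DSSIM = (1 - SSIM) / 2."""
    ssim = structural_similarity(original, cloaked, channel_axis=-1, data_range=255)
    return (1.0 - ssim) / 2.0


def within_budget(original: np.ndarray, cloaked: np.ndarray, budget: float = 0.007) -> bool:
    # 0.007 is only an example perturbation budget, not a value from the paper.
    return dssim(original, cloaked) <= budget


def build_training_set(cloaked_sets, leaked_clean, leak_ratio=0.1, cloaks_per_image=4, rng=None):
    """Assemble a hypothetical attacker training set: several multi-cloaks per
    uncloaked image plus a fraction (leak_ratio) of leaked clean images."""
    rng = rng or np.random.default_rng(0)
    data = []
    for cloaks in cloaked_sets:                 # one list of cloaked variants per image
        data.extend(cloaks[:cloaks_per_image])  # keep a fixed number of multi-cloaks
    n_clean = min(int(leak_ratio * len(data)), len(leaked_clean))
    idx = rng.choice(len(leaked_clean), size=n_clean, replace=False)
    data.extend(leaked_clean[i] for i in idx)
    return data
```

Under this reading, the three ablation factors map directly onto `budget`,
`leak_ratio`, and `cloaks_per_image` in the sketch.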

Authors: Liuqiao Chen, Hu Wang, Benjamin Zi Hao Zhao, Minhui Xue, Haifeng Qian
