Over the past six years, deep generative models have achieved a qualitatively
new level of performance. Generated data has become difficult, if not
impossible, to distinguish from real data. While there are plenty of use
cases that benefit from this technology, there are also strong concerns about
how it can be misused to spoof sensors, generate deepfakes, and enable
misinformation at scale. Unfortunately, current deepfake detection methods are
not sustainable, as the gap between real and fake continues to close. In
contrast, our work enables responsible disclosure of such state-of-the-art
generative models by allowing researchers and companies to fingerprint their
models, so that generated samples containing a fingerprint can be accurately
detected and attributed to their source. Our technique achieves this through
the efficient and scalable ad-hoc generation of a large population of models
with distinct fingerprints. Our recommended operating point uses a 128-bit
fingerprint, which in principle yields more than $10^{36}$ identifiable
models. Experiments show that our method fulfills key properties of a
fingerprinting mechanism and is effective for deepfake detection and
attribution.
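The "more than $10^{36}$ identifiable models" figure follows directly from the fingerprint length: each of the 128 bits varies independently, so the number of distinct fingerprints is $2^{128}$. A minimal sketch of that arithmetic (an illustration only, not code from the paper):

```python
# Capacity of a 128-bit fingerprint: each bit is independent,
# so the number of distinct fingerprints (and thus models that
# can be told apart) is 2**128.
FINGERPRINT_BITS = 128

capacity = 2 ** FINGERPRINT_BITS
print(capacity)             # 340282366920938463463374607431768211456 (~3.4e38)
print(capacity > 10 ** 36)  # True -- consistent with "more than 10^36 models"
```

In practice the usable population is smaller than this upper bound, since fingerprints must remain distinguishable after generation and detection noise; the figure is a capacity in principle.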

Authors of this post: <a href="http://arxiv.org/find/cs/1/au:+Yu_N/0/1/0/all/0/1">Ning Yu</a>, <a href="http://arxiv.org/find/cs/1/au:+Skripniuk_V/0/1/0/all/0/1">Vladislav Skripniuk</a>, <a href="http://arxiv.org/find/cs/1/au:+Chen_D/0/1/0/all/0/1">Dingfan Chen</a>, <a href="http://arxiv.org/find/cs/1/au:+Davis_L/0/1/0/all/0/1">Larry Davis</a>, <a href="http://arxiv.org/find/cs/1/au:+Fritz_M/0/1/0/all/0/1">Mario Fritz</a>
