Object detection based on deep learning has recently been shown to be vulnerable to adversarial patch attacks. An attacker holding a specially crafted patch can hide from state-of-the-art person detectors, e.g., YOLO, even in the physical world. Such attacks pose serious security threats, such as evading surveillance cameras. In this paper, we explore in depth the problem of detecting adversarial patch attacks against object detection. First, we identify an exploitable signature of existing adversarial patches from the perspective of visualization explanation. A fast signature-based defense method is proposed and demonstrated to be effective.
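The abstract does not spell out what the signature is. As a purely hypothetical illustration, the sketch below assumes it manifests as unusually concentrated model attention in an explanation map (e.g., a Grad-CAM-style saliency map); the functions `saliency_concentration` and `looks_patched` and all thresholds are invented here for illustration and are not the paper's method.

```python
# Hypothetical sketch of a signature-style check: flag inputs whose
# explanation (saliency) mass is packed into a small, patch-like region.
import numpy as np

def saliency_concentration(saliency: np.ndarray, mass: float = 0.5) -> float:
    """Fraction of pixels needed to hold `mass` of the total saliency.

    Small values mean the explanation is concentrated in a tiny region,
    which this sketch treats as a suspicious, patch-like signature.
    """
    weights = np.sort(saliency.ravel())[::-1]        # descending saliency
    weights = weights / (weights.sum() + 1e-9)       # normalize to sum to 1
    needed = np.searchsorted(np.cumsum(weights), mass) + 1
    return needed / weights.size

def looks_patched(saliency: np.ndarray, thresh: float = 0.02) -> bool:
    # Flag the input if half of the saliency mass fits in < 2% of the pixels.
    # Both numbers are illustrative choices, not values from the paper.
    return saliency_concentration(saliency) < thresh
```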
Second, we design an improved patch generation algorithm to reveal the risk that the signature-based approach may be bypassed by emerging techniques in the future. The newly generated adversarial patches successfully evade the proposed signature-based defense. Finally, we present a novel signature-independent detection method based on internal content semantic consistency rather than any attack-specific prior knowledge. The fundamental intuition is that an adversarially patched object can appear locally but disappear globally in an input image: an object the detector misses in the whole image may still be detected in a cropped region. Experiments demonstrate that the signature-independent method effectively detects both existing and improved attacks. It also proves to be a general method, detecting unforeseen and even other types of attacks without any attack-specific prior knowledge. The two proposed detection methods suit different scenarios, and we believe that combining them can offer comprehensive protection.
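To make the "appear locally, disappear globally" intuition concrete, here is a minimal sketch of a local-vs-global consistency check. The `run_detector` callable, the sliding-window parameters, and the IoU threshold are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: objects detected in local crops but absent from the
# global detection pass are treated as candidate adversarially hidden objects.
from typing import Callable, List, Tuple
import numpy as np

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2), full-image coords

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def inconsistent_regions(
    image: np.ndarray,
    run_detector: Callable[[np.ndarray], List[Box]],  # e.g. a YOLO wrapper
    window: int = 416,
    stride: int = 208,
    iou_thresh: float = 0.5,
) -> List[Box]:
    """Return boxes found in local crops but missing from the global pass."""
    global_boxes = run_detector(image)
    suspicious: List[Box] = []
    h, w = image.shape[:2]
    for y in range(0, max(1, h - window + 1), stride):
        for x in range(0, max(1, w - window + 1), stride):
            crop = image[y:y + window, x:x + window]
            for bx1, by1, bx2, by2 in run_detector(crop):
                # Map the crop-local box back to full-image coordinates.
                box = (bx1 + x, by1 + y, bx2 + x, by2 + y)
                # Visible locally but matching nothing globally: suspicious.
                if all(iou(box, g) < iou_thresh for g in global_boxes):
                    suspicious.append(box)
    return suspicious
```

Any non-empty result signals a local/global semantic inconsistency worth flagging for further inspection.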

Authors: Bin Liang, Jiachun Li, Jianjun Huang
