Single image dehazing has been widely explored. However, to the best of our
knowledge, no comprehensive study has been conducted on the robustness of
well-trained dehazing models, so there is no evidence that dehazing networks
can resist malicious attacks. In this paper, we design a group of attack
methods based on first-order gradients to verify the robustness of existing
dehazing algorithms. By analyzing the general goal of the image dehazing task,
we propose five attack methods: the prediction, noise, mask, ground-truth, and
input attacks. The corresponding experiments are conducted on six datasets of
different scales. Furthermore, a defense strategy based on adversarial
training is adopted to reduce the negative effects caused by malicious
attacks. In summary, this paper defines a new challenging problem for the
image dehazing area, which we call adversarial attack on dehazing networks
(AADN). Code is available at https://github.com/guijiejie/AADN.
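
As an illustration of the first-order gradient attacks described above, the
following is a minimal PGD-style sketch of the prediction attack, which
perturbs the hazy input so that the network's output drifts away from its own
clean prediction. All names here (dehaze_net, hazy, epsilon, alpha, steps) are
hypothetical and not taken from the AADN code; the other attacks differ mainly
in the reference the loss is computed against (e.g., a ground-truth attack
would measure the distance to the clean target instead of the clean
prediction).

import torch
import torch.nn.functional as F

def prediction_attack(dehaze_net, hazy, epsilon=8/255, alpha=2/255, steps=10):
    # Perturb the hazy input so the dehazed output drifts away from the
    # network's own clean prediction, using the sign of the first-order gradient.
    dehaze_net.eval()
    with torch.no_grad():
        clean_pred = dehaze_net(hazy)  # prediction on the unattacked input
    adv = hazy.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.mse_loss(dehaze_net(adv), clean_pred)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                     # gradient ascent step
            adv = hazy + (adv - hazy).clamp(-epsilon, epsilon)  # project onto L_inf ball
            adv = adv.clamp(0.0, 1.0)                           # keep a valid image
    return adv.detach()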
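
The adversarial training defense can likewise be summarized in a short
sketch: each training batch is attacked on the fly, and the network is fitted
to the ground truth on the perturbed inputs. This reuses the hypothetical
prediction_attack above and is a generic adversarial training loop under
assumed names (optimizer, hazy, gt), not the paper's exact training recipe.

import torch.nn.functional as F

def adversarial_training_step(dehaze_net, optimizer, hazy, gt,
                              epsilon=8/255, alpha=2/255, steps=5):
    # Generate adversarial hazy images with the attack defined above,
    # then update the network to map them back to the clean ground truth.
    adv = prediction_attack(dehaze_net, hazy, epsilon, alpha, steps)
    dehaze_net.train()
    optimizer.zero_grad()
    loss = F.mse_loss(dehaze_net(adv), gt)  # supervised loss on attacked inputs
    loss.backward()
    optimizer.step()
    return loss.item()

In practice, clean and adversarial batches are often mixed to balance
robustness against performance on unattacked images.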