Deep Learning (DL) is vulnerable to out-of-distribution and adversarial
examples, which can result in incorrect outputs. To make DL more robust,
several post-hoc (or runtime) anomaly detection techniques that detect (and
discard) such anomalous samples have been proposed in recent years. This
survey provides a structured and comprehensive overview of research on
anomaly detection for DL-based applications. We propose a taxonomy that
organizes existing techniques by their underlying assumptions and adopted
approaches, discuss the techniques in each category, and compare their
relative strengths and weaknesses. Our goal in this survey is to give readers
an easier yet deeper understanding of the techniques in each category of
research on this topic. Finally, we highlight the unsolved research challenges
in applying anomaly detection techniques to DL systems and present some
high-impact future research directions.

Authors: Saikiran Bulusu, Bhavya Kailkhura, Bo Li, Pramod K. Varshney, Dawn Song
