Machine Learning (ML) now automates decision making in many of our day-to-day
domains, such as education, employment and autonomous driving. The
continued success of ML largely depends on our ability to trust the models we
use. Recently, a new class of attacks, called backdoor attacks, has been
developed. These attacks undermine the user's trust in ML models.
In this work, we present NEO, a model-agnostic framework to detect and mitigate
such backdoor attacks in image classification ML models. For a given image
classification model, our approach analyses the inputs it receives and
determines whether the model is backdoored. Beyond detection, NEO also
mitigates these attacks by recovering the correct predictions for the poisoned
images. An appealing feature of NEO is that it can, for the first time, isolate
and reconstruct the backdoor trigger. To the best of our knowledge, NEO is also
the first defence methodology that is completely black-box.
We have implemented NEO and evaluated it against three state-of-the-art
poisoned models. These models cover highly critical applications such as
traffic sign detection (USTS) and facial detection. In our evaluation, we show
that NEO detects $\approx 88\%$ of the poisoned inputs on average and runs in
as little as 4.4 ms per input image. We also reconstruct the poisoned inputs so
that users can effectively test their systems.
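To make the black-box setting concrete, the following is a minimal,
hypothetical sketch of how an input-screening defence in this spirit could wrap
an opaque classifier: it occludes small regions of each incoming image and
flags the input as potentially poisoned if the prediction flips. The function
`classify`, the patch size, and the use of the image's dominant colour as the
occluder are illustrative assumptions, not necessarily NEO's exact algorithm.

\begin{verbatim}
# Hypothetical black-box input screening sketch (not NEO's published algorithm).
import numpy as np

def dominant_colour(img: np.ndarray) -> np.ndarray:
    """Return the most frequent colour in an HxWx3 uint8 image."""
    pixels = img.reshape(-1, img.shape[-1])
    colours, counts = np.unique(pixels, axis=0, return_counts=True)
    return colours[counts.argmax()]

def screen_input(img: np.ndarray, classify, patch=8, stride=8):
    """Flag `img` as suspicious if occluding any patch changes the prediction.

    `classify` is assumed to map an HxWx3 uint8 image to a class label.
    Returns (is_suspicious, repaired_label).
    """
    original = classify(img)
    filler = dominant_colour(img)
    h, w = img.shape[:2]
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            blocked = img.copy()
            blocked[y:y + patch, x:x + patch] = filler
            label = classify(blocked)
            if label != original:
                # Treat the occluded prediction as the "repaired" label.
                return True, label
    return False, original
\end{verbatim}

Because the loop only queries the classifier on perturbed copies of the input,
it needs no access to the model's weights, gradients, or training data, which
is what the black-box property above refers to.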