Machine learning algorithms are known to be susceptible to data poisoning
attacks, where an adversary manipulates the training data to degrade the
performance of the resulting classifier. In this work, we present a unifying
view of randomized smoothing over arbitrary functions, and we leverage this
novel characterization to propose a new strategy for building classifiers that
are pointwise-certifiably robust to general data poisoning attacks. As a
specific instantiation, we utilize our framework to build linear classifiers
that are robust to a strong variant of label flipping, where each test example
is targeted independently. In other words, for each test point, our classifier
includes a certification that its prediction would be the same had some number
of training labels been changed adversarially. Randomized smoothing has
previously been used to guarantee—with high probability—test-time
robustness to adversarial manipulation of the input to a classifier; we derive
a variant which provides a deterministic, analytical bound, sidestepping the
probabilistic certificates that traditionally result from the sampling
subprocedure. Further, we obtain these certified bounds with minimal additional
runtime complexity over standard classification and no assumptions on the train
or test distributions. We generalize our results to the multi-class case,
providing the first multi-class classification algorithm that is certifiably
robust to label-flipping attacks.
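
The pointwise certificate described above can be illustrated with a toy deterministic stand-in (this is *not* the paper's randomized-smoothing construction, just an analogous worst-case argument): for a simple kernel-vote classifier f(x) = sign(Σᵢ yᵢ k(x, xᵢ)), flipping one training label yᵢ changes the score by at most 2·k(x, xᵢ), so comparing the prediction margin against the largest per-label influences yields a certified number of adversarial label flips for each test point.

```python
import numpy as np

def certified_flips(train_X, train_y, x, bandwidth=1.0):
    """Toy pointwise certificate against label flips for a kernel-vote
    classifier f(x) = sign(sum_i y_i * k(x, x_i)).

    Illustrative stand-in only -- not the paper's randomized-smoothing
    certificate. Flipping training label y_i changes the score by
    2*k(x, x_i), so the prediction is provably unchanged as long as the
    total influence of the flipped labels stays below the margin.
    """
    # RBF kernel weight of each training point for this test point.
    d2 = ((train_X - x) ** 2).sum(axis=1)
    k = np.exp(-d2 / (2 * bandwidth ** 2))
    score = float((train_y * k).sum())   # signed vote
    pred = 1 if score >= 0 else -1
    # Worst-case adversary flips the most influential labels first.
    influence = np.sort(2 * k)[::-1]
    flips, margin = 0, abs(score)
    for w in influence:
        if margin - w <= 0:              # this flip could change the sign
            break
        margin -= w
        flips += 1
    return pred, flips                   # prediction + certified flip budget
```

For a test point surrounded by three nearby positive examples and one distant negative one, the certificate guarantees stability under one adversarial flip: removing any single supporting label leaves the vote positive, but removing two could not be certified.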

Authors: Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter