We consider adversarial training of deep neural networks through the lens of
Bayesian learning, and present a principled framework for adversarial training
of Bayesian Neural Networks (BNNs) with certifiable guarantees. We rely on
techniques from constraint relaxation of non-convex optimisation problems and
modify the standard cross-entropy error model to enforce posterior robustness
to worst-case perturbations in $\epsilon$-balls around input points. We
illustrate how the resulting framework can be combined with methods commonly
employed for approximate inference of BNNs. In an empirical investigation, we
demonstrate that the presented approach enables training of certifiably robust
models on MNIST, FashionMNIST and CIFAR-10 and can also be beneficial for
uncertainty calibration. Our method is the first to directly train certifiable
BNNs, thus facilitating their deployment in safety-critical applications.
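To make the robust training objective concrete, below is a minimal sketch of one standard way such an objective can be realised: interval bound propagation (IBP) through a fully connected ReLU network, with cross-entropy applied to the worst-case logits over the $\epsilon$-ball. This is an illustrative assumption, not the authors' implementation; the helper names (`ibp_linear`, `worst_case_logits`, `robust_cross_entropy`) and the PyTorch setting are hypothetical.

```python
# Hedged sketch: illustrates a robust cross-entropy objective via interval
# bound propagation (IBP), one common constraint-relaxation technique.
# Not the paper's released code; architecture and names are assumptions.
import torch
import torch.nn.functional as F

def ibp_linear(lo, hi, weight, bias):
    """Propagate interval bounds [lo, hi] through an affine layer."""
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    new_mid = mid @ weight.t() + bias
    new_rad = rad @ weight.abs().t()
    return new_mid - new_rad, new_mid + new_rad

def worst_case_logits(x, eps, layers):
    """Lower/upper logit bounds over all inputs in the eps-ball around x.

    `layers` is a list of (weight, bias) tensor pairs for a ReLU network.
    """
    lo, hi = x - eps, x + eps
    for i, (w, b) in enumerate(layers):
        lo, hi = ibp_linear(lo, hi, w, b)
        if i < len(layers) - 1:  # ReLU is monotone, so it maps bounds to bounds
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)
    return lo, hi

def robust_cross_entropy(x, y, eps, layers):
    """Cross-entropy on the adversarially worst logit configuration:
    the true-class logit at its lower bound, all others at their upper bound."""
    lo, hi = worst_case_logits(x, eps, layers)
    onehot = F.one_hot(y, hi.shape[-1]).bool()
    worst = torch.where(onehot, lo, hi)
    return F.cross_entropy(worst, y)
```

In the Bayesian setting, the `(weight, bias)` pairs in `layers` would be drawn from the approximate posterior at each training step (e.g., via a variational scheme), and a robust term of this kind would stand in for the standard cross-entropy likelihood inside the chosen inference objective.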

Authors: Matthew Wicker, Luca Laurenti, Andrea Patane, Zhoutong Chen, Zheng Zhang, Marta Kwiatkowska