Machine learning models present a risk of adversarial attack when deployed in
production. Quantifying the contributing factors and uncertainties with
empirical measures could help industry assess the risk of downloading and
deploying common model types. This work proposes modifying the
traditional Drake Equation’s formalism to estimate the number of potentially
successful adversarial attacks on a deployed model. The Drake Equation,
originally formulated to estimate the number of radio-capable
extra-terrestrial civilizations, is famous for parameterizing uncertainties
and has since been applied in many research fields beyond its original
intent. While previous work has outlined
methods for discovering vulnerabilities in public model architectures, the
proposed equation seeks to provide a semi-quantitative benchmark for evaluating
and estimating the potential risk factors for adversarial attacks.
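The abstract describes a Drake-style estimate, i.e. a product of chained factors yielding an expected count. The sketch below illustrates that multiplicative structure only; the factor names and numbers are assumptions for illustration, not the paper's actual parameterization.

```python
from dataclasses import dataclass


@dataclass
class AttackDrakeFactors:
    """Hypothetical factors for a Drake-style adversarial-risk estimate.

    All names and values are illustrative assumptions, not the
    parameterization proposed in the paper.
    """
    n_deployments: float     # number of deployed model instances
    f_public_arch: float     # fraction built on a public architecture
    f_accessible: float      # fraction exposed to attacker queries
    f_vulnerable: float      # fraction with an exploitable weakness
    p_attack_success: float  # probability an attempted attack succeeds


def expected_successful_attacks(f: AttackDrakeFactors) -> float:
    """Drake-style formalism: multiply the chained factors together."""
    return (f.n_deployments * f.f_public_arch * f.f_accessible
            * f.f_vulnerable * f.p_attack_success)


# Example with made-up numbers: 10000 * 0.6 * 0.5 * 0.2 * 0.1 = 60.0
est = expected_successful_attacks(
    AttackDrakeFactors(10_000, 0.6, 0.5, 0.2, 0.1))
print(est)  # 60.0
```

As with the original Drake Equation, the point is less the final number than making each uncertain factor explicit so it can be estimated or bounded empirically.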

Authors: Josh Kalin, David Noever, Matthew Ciolino
