Despite the large body of academic work on machine learning security, little
is known about the occurrence of attacks on machine learning systems in the
wild. In this paper, we report on a quantitative study with 139 industrial
practitioners. We analyze attack occurrence and concern, and evaluate
statistical hypotheses on factors influencing threat perception and exposure.
Our results shed light on real-world attacks on deployed machine learning. On
the organizational level, while we find no predictors for threat exposure in
our sample, the number of implemented defenses depends on exposure to threats
or on the expected likelihood of becoming a target. We also provide a detailed
analysis of
practitioners’ replies regarding the relevance of individual machine learning
attacks, unveiling complex concerns such as unreliable decision making, leakage
of business information, and the introduction of bias into models. Finally, we
find that
on the individual level, prior knowledge about machine learning security
influences threat perception. Our work paves the way for further research on
adversarial machine learning in practice, but also yields insights for
regulation and auditing.