Convolutional Neural Network (CNN) models are among the most frequently
used deep learning networks, and are extensively employed in both academia and
industry. Recent studies have demonstrated that adversarial attacks against such
models can retain their effectiveness even when applied to models other than the
one targeted by the attacker. This property, known as transferability,
makes CNNs ill-suited for security applications. In this paper, we provide
the first comprehensive study assessing the robustness of CNN-based models
for computer networks against adversarial transferability. In particular, we
investigate whether the transferability property holds in computer
network applications. In our experiments, we first consider five different
attacks: the Iterative Fast Gradient Sign Method (I-FGSM), the Jacobian-based
Saliency Map Attack (JSMA), the Limited-memory Broyden-Fletcher-Goldfarb-Shanno
(L-BFGS) attack, Projected Gradient Descent (PGD), and the DeepFool attack. Then,
we perform these attacks against three well-known datasets: the Network-based
Detection of IoT (N-BaIoT) dataset, the Domain Generation Algorithms (DGA)
dataset, and the RIPE Atlas dataset. Our experimental results clearly show that
transferability occurs in specific use cases for the I-FGSM, the JSMA,
and the L-BFGS attacks. In such scenarios, the attack success rate on the target
network ranges from 63.00% to 100%. Finally, we suggest two shielding strategies
to hinder attack transferability: considering the Most Powerful Attacks
(MPAs), and using a mismatched LSTM architecture.
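
For illustration, a minimal sketch of an I-FGSM perturbation loop is given below. This is a hedged example assuming a PyTorch classifier trained with cross-entropy loss; the function name, epsilon, step size, and iteration count are illustrative assumptions and do not correspond to the paper's exact attack configuration.

    import torch
    import torch.nn.functional as F

    def i_fgsm(model, x, y, eps=0.03, alpha=0.005, steps=10):
        # Iteratively step in the sign of the loss gradient, then project the
        # adversarial example back into the eps-ball around the original input x.
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = (x_adv + alpha * grad.sign()).detach()
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
        return x_adv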