
Sequence Squeezing: A Defense Method Against Adversarial Examples for API Call-Based RNN Variants

Ishai Rosenberg, Asaf Shabtai, Yuval Elovici, Lior Rokach

2021 International Joint Conference on Neural Networks (IJCNN), 1-10, 2021

Adversarial examples are known to mislead deep learning models into classifying them incorrectly, even in domains where such models have achieved state-of-the-art performance. Until recently, research on both adversarial attack and defense methods focused on computer vision, primarily using convolutional neural networks (CNNs). In recent years, adversarial example generation methods for recurrent neural networks (RNNs) have been published, demonstrating that RNN classifiers are also vulnerable to such attacks. In this paper, we present a novel defense method, referred to as sequence squeezing, aimed at making RNN variant (e.g., LSTM) classifiers more robust against such attacks. Our method differs from existing defense methods, which were designed only for non-sequence-based models. We also implement three additional defense methods inspired by recently published CNN …
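
To give a sense of the general idea, the sketch below illustrates a squeezing-style detection wrapper for API call sequences, by analogy with feature squeezing for images (comparing a classifier's output on the original input and on a reduced-fidelity version, and flagging large disagreement as adversarial). This is only an illustrative sketch, not the authors' exact algorithm; the squeezing function, the `vocab_map` grouping, the classifier interface, and the `threshold` value are all assumptions.

```python
# Illustrative sketch of a sequence-squeezing-style detector for API call
# sequences. NOT the paper's exact method; the squeezing operations,
# grouping map, and threshold are hypothetical choices for illustration.

from typing import Callable, Dict, List, Sequence


def squeeze_api_sequence(api_calls: Sequence[str],
                         vocab_map: Dict[str, str]) -> List[str]:
    """Reduce the input's degrees of freedom: map each API call to a coarser
    group (e.g., collapsing semantically similar calls) and drop immediate
    repetitions. Both reductions are assumed for this sketch."""
    grouped = [vocab_map.get(call, call) for call in api_calls]
    squeezed: List[str] = grouped[:1]
    for call in grouped[1:]:
        if call != squeezed[-1]:  # remove consecutive duplicates
            squeezed.append(call)
    return squeezed


def detect_adversarial(classify: Callable[[Sequence[str]], float],
                       api_calls: Sequence[str],
                       vocab_map: Dict[str, str],
                       threshold: float = 0.3) -> bool:
    """Flag the input as adversarial if the classifier's score on the
    original and squeezed sequences diverges by more than `threshold`
    (a hypothetical value)."""
    original_score = classify(api_calls)
    squeezed_score = classify(squeeze_api_sequence(api_calls, vocab_map))
    return abs(original_score - squeezed_score) > threshold
```

In this sketch, `classify` stands in for any trained RNN/LSTM malware classifier returning a maliciousness score; the intuition carried over from feature squeezing is that adversarial perturbations (e.g., inserted API calls) tend not to survive the squeezing step, so the two predictions diverge.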