Subverting Network Intrusion Detection: Crafting Adversarial Examples Accounting for Domain-Specific Constraints
Deep Learning (DL) algorithms are being applied to network intrusion detection, as they can outperform other methods in terms of computational efficiency and accuracy. However, these algorithms have recently been found to be vulnerable to adversarial examples – inputs that are crafted with the intent of causing a Deep Neural Network (DNN) to misclassify with high confidence. Although a significant amount of work has been done to find robust defence techniques against adversarial examples, they still pose a potential risk. The majority of the proposed attack and defence strategies are tailored to the computer vision domain, in which adversarial examples were first found. In this paper, we consider this issue in the Network Intrusion Detection System (NIDS) domain and extend existing adversarial example crafting algorithms to account for the domain-specific constraints in the feature space. We propose to incorporate information about the difficulty of feature manipulation directly in the optimization function. Additionally, we define a novel measure for attack cost and include it in the assessment of the robustness of DL algorithms. We validate our approach on two benchmark datasets and demonstrate successful attacks against state-of-the-art DL network intrusion detection algorithms.
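The abstract outlines two ideas: weighting the difficulty of feature manipulation inside the attack's optimization step, and measuring attack cost over the resulting perturbation. The sketch below is an illustrative approximation only, not the authors' published algorithm; it assumes an FGSM-style gradient attack, a per-feature difficulty vector (1 = easiest, larger values = harder), a mask marking which flow features an attacker can alter, and min-max scaled inputs. The function and variable names are hypothetical.

import torch
import torch.nn.functional as F

def constrained_fgsm(model, x, y, eps, difficulty, mutable_mask):
    # x: batch of flow-feature records, min-max scaled to [0, 1]
    # difficulty: per-feature weights >= 1 (larger = harder to manipulate)
    # mutable_mask: 1.0 for attacker-controllable features, 0.0 otherwise
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()

    # Shrink the step for hard-to-manipulate features and freeze
    # features the attacker cannot change (e.g. dependent counters).
    step = eps * x_adv.grad.sign() * mutable_mask / difficulty
    x_adv = (x_adv + step).clamp(0.0, 1.0).detach()

    # Attack cost: difficulty-weighted L1 distance from the original record.
    cost = (difficulty * (x_adv - x).abs()).sum(dim=1)
    return x_adv, cost

Under these assumptions, scaling the step inversely to difficulty lets easily modified features carry most of the perturbation, while the weighted L1 cost makes attacks that touch hard-to-manipulate features count as more expensive, in the spirit of the attack-cost measure described in the abstract.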
Authors
- Teuffenbach, Martin
- Piatkowska, Ewa
- Smith, Paul
Category: Paper in Conference Proceedings or in Workshop Proceedings (Paper)
Event Title: CD-MAKE International Cross-Domain Conference for Machine Learning and Knowledge Extraction
Divisions: Data Mining and Machine Learning
Event Location: Dublin, Ireland
Event Type: Conference
Event Dates: 25-28 August 2020
Series Name: Machine Learning and Knowledge Extraction
ISBN: 978-3-030-57320-1
Page Range: pp. 301-320
Date: 25 August 2020