Adaptive Security & Policy Control (ASPC) utilizes Generative Adversarial Networks (GANs) to protect Machine Learning (ML) models from adversarial attacks through a process known as adversarial training. This technique involves generating sophisticated "adversarial examples" (AEs)—malicious inputs designed to fool the detection system—and using them to retrain and harden the ML models against such threats.
The core strategy relies on the fact that Deep Neural Networks (DNNs) are often vulnerable to small, imperceptible perturbations in input data that can cause them to make incorrect predictions (e.g., classifying malicious traffic as benign). ASPC uses GANs to automate the generation of these attacks and thereby strengthen the system.
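The vulnerability can be illustrated with a toy linear detector and an FGSM-style (fast gradient sign method) perturbation. Everything below is illustrative and not taken from ASPC: the weights, the input, and the epsilon bound are invented for the sketch.

```python
import numpy as np

# Toy linear "detector": score = w.x + b, flagged malicious if score > 0.
# Weights and bias are invented for illustration only.
w = np.array([2.0, -1.0, 1.5])
b = -0.5

def predict(x):
    """True = input flagged as malicious."""
    return float(np.dot(w, x) + b) > 0

x = np.array([1.0, 0.2, 0.4])   # score = 2.0 - 0.2 + 0.6 - 0.5 = 1.9 > 0
assert predict(x)               # correctly flagged as malicious

# FGSM-style evasion: step each feature against the sign of the score's
# gradient (for a linear model, the gradient is just w), bounded by epsilon,
# so the score drops while the per-feature change stays small.
epsilon = 0.7
x_adv = x - epsilon * np.sign(w)

assert not predict(x_adv)       # same input, slightly perturbed, now evades
```

A bounded per-feature change (here at most `epsilon`) is enough to flip the decision, which is exactly the weakness adversarial training targets.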
ASPC implements a specific GAN architecture based on MalGAN to perform this task in a "black-box" setting, meaning the attack generator needs no access to the target model's internal parameters or gradients; it can only query the model and observe its outputs.
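A greatly simplified sketch of the black-box evasion idea that MalGAN automates: treat the detector as an opaque oracle, and mutate a malware sample's binary feature vector by only *adding* features (never removing any, so malicious functionality is preserved) until the oracle stops flagging it. MalGAN trains a generator network to propose these additions; the random search below, and the toy detector it queries, are stand-ins invented for illustration.

```python
import random

# Opaque stand-in for the black-box detector: flags a sample when it shows
# more "suspicious" features than "benign-looking" ones. The attacker code
# below only calls it, never inspects it.
SUSPICIOUS = {0, 1, 2}
def target_detects(features):
    sus = sum(1 for i in SUSPICIOUS if features[i])
    ben = sum(1 for i in range(3, 8) if features[i])
    return sus > ben

# Malware sample as a binary feature vector (e.g., API-call presence bits).
malware = [1, 1, 1, 0, 0, 0, 0, 0]
assert target_detects(malware)

# MalGAN-style constraint: only ADD features (flip 0 -> 1), never remove,
# so the sample keeps its malicious behaviour. Here a random search finds
# the additions; MalGAN learns a generator to do this efficiently.
random.seed(0)
adv = list(malware)
while target_detects(adv):
    adv[random.randrange(len(adv))] = 1  # add a benign-looking feature

assert not target_detects(adv)  # evades the black-box detector
```

The key property mirrored here is that evasion succeeds using only query access, which is what makes the black-box setting realistic for an external attacker.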
Experimental evaluations demonstrated that retraining the ML model with these high-quality, GAN-generated adversarial examples significantly improved its resilience. In one test scenario involving crypto mining detection, the accuracy of the model in detecting new adversarial attacks increased to 99%, reducing the evasion ratio (successful attacks) from 48% to 1%.
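The retraining loop itself can be sketched with synthetic data: fit a detector, generate adversarial examples that evade it, then refit with those examples labelled as malicious and measure how the evasion ratio falls. The nearest-centroid detector, the 2-D feature data, and the resulting percentages below are all invented for the sketch; only the loop mirrors the procedure described above.

```python
import numpy as np

rng = np.random.default_rng(1)
benign  = rng.normal(0.0, 0.2, size=(50, 2))   # synthetic benign traffic features
malware = rng.normal(3.0, 0.2, size=(50, 2))   # synthetic mining traffic features

def fit(ben, mal):
    """Nearest-centroid detector: True = classified malicious."""
    cb, cm = ben.mean(axis=0), mal.mean(axis=0)
    return lambda x: np.linalg.norm(x - cm) < np.linalg.norm(x - cb)

detect = fit(benign, malware)

# Adversarial examples: malicious samples shifted toward benign feature
# space (a stand-in for GAN-generated AEs).
adv = malware + 0.55 * (benign.mean(axis=0) - malware.mean(axis=0))
evasion_before = np.mean([not detect(x) for x in adv])

# Adversarial training: refit with the AEs added back, labelled malicious.
hardened = fit(benign, np.vstack([malware, adv]))
evasion_after = np.mean([not hardened(x) for x in adv])

print(f"evasion before: {evasion_before:.0%}, after: {evasion_after:.0%}")
```

With this toy setup the evasion ratio drops from well over half to a few percent after retraining; the direction of the effect, not the exact numbers, is what corresponds to the reported result.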