The ASPC framework uses GAN-based hardening to bolster the resilience of supervised neural networks against sophisticated evasion tactics and adversarial perturbations. While this method excels at identifying unseen variations of known threats, achieving a detection accuracy of 99%, it is not intended to uncover fundamentally new categories of network breaches.
To address completely novel attack types, ASPC relies on unsupervised learning algorithms such as DBSCAN, which identify anomalies without prior knowledge of specific signatures. Ultimately, the sources describe a layered security strategy in which GAN hardening secures existing models against manipulation, while complementary techniques handle the broader challenge of unknown intrusions.
The GAN-hardened models in ASPC are designed primarily to detect unknown variations of known attacks (specifically "adversarial examples" designed to evade detection) rather than fundamentally new, unknown categories of network attacks.
Here is a detailed breakdown of their capabilities regarding unknown evasion:
1. Resilience Against Unseen Evasion Attempts
The primary goal of the GAN-based hardening (Adversarial Training) is to protect the supervised Deep Neural Networks (DNNs) used in the Centralized Attack Detector (CAD) from sophisticated evasion techniques that attempt to fool the model using small perturbations.
- Testing against new evasions: In the evaluation, the retrained (hardened) model was tested against a reserved dataset of malign data that was not used during the GAN training. This demonstrated the model's ability to detect new adversarial examples (synthetic evasions) that it had never seen before.
- Performance: The hardening process significantly improved the model's resilience against these unknown evasions. Accuracy in detecting new, unseen adversarial examples increased to 99%, cutting the evasion ratio (the share of successful attacks) from an initial 48% down to 1%.
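The hardening loop described above can be sketched in miniature. This is a toy illustration only: a logistic-regression "detector" on synthetic 2-D traffic features stands in for the CAD's DNN, and gradient-sign (FGSM-style) perturbations stand in for the GAN-generated adversarial examples; none of the data, model, or parameter choices come from the ASPC sources.

```python
# Toy sketch of adversarial-training-style hardening (assumed setup, not
# the ASPC implementation): craft evasions against a detector, retrain on
# them, then evaluate against *fresh* evasions never seen in training.
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, epochs=500, lr=0.5):
    """Batch gradient descent for logistic regression."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid scores
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)       # 1 = classified malign

# Synthetic features: benign traffic near 0, malign traffic near 2.
X = np.vstack([rng.normal(0, 0.5, (200, 2)), rng.normal(2, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
w, b = train(X, y)

# Evasion attempt: shift malign samples against the decision gradient
# (gradient-sign perturbation as a stand-in for GAN-crafted evasions).
eps = 0.8
X_mal = X[y == 1]
X_adv = X_mal - eps * np.sign(w)
evasion_before = 1 - predict(X_adv, w, b).mean()

# Hardening: retrain with the adversarial examples labelled as malign.
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, np.ones(len(X_adv))])
w2, b2 = train(X_aug, y_aug)

# Evaluate against fresh, slightly larger evasions unseen during hardening.
X_adv_new = X_mal - (eps * 1.1) * np.sign(w2)
evasion_after = 1 - predict(X_adv_new, w2, b2).mean()
print(evasion_before, evasion_after)
```

Even in this toy setting the hardened model's evasion ratio drops sharply, mirroring (at much smaller scale) the 48%-to-1% improvement the sources report.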
2. Limitations Regarding "Unknown" Attack Types
While the GAN-hardened models excel at detecting new evasion methods for the specific attacks they are trained on (e.g., cryptomining), the sources draw a distinction between detecting evasions and detecting unknown attack types:
- Supervised vs. Unsupervised: The GAN-hardened models described are supervised learning models (e.g., DNNs). Sources note that supervised models rely on a priori knowledge and labeled datasets, making them highly accurate for known threats but less effective at detecting fundamentally new types of breaches.
- Role of Unsupervised Learning: For detecting completely "previously unseen attacks" or "novel types of breaches," TeraFlow employs unsupervised learning (specifically the DBSCAN algorithm) rather than GAN-hardened supervised models. This approach clusters data to identify anomalies (outliers) without requiring prior knowledge of the attack signature.
- Attack Requirements: The project has defined a requirement to detect "previously unseen attacks with 90% or higher accuracy," but the documentation associates meeting this requirement specifically with the unsupervised learning methods (Window-based Attack Detection) applied to the optical layer, rather than the GAN-hardened models at the IP layer.
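To make the density-based idea concrete, here is a minimal, self-contained DBSCAN sketch. The 2-D feature vectors, `eps`, and `min_pts` values are illustrative assumptions, not taken from the TeraFlow/ASPC implementation; the point is that outliers are flagged as noise with no labelled attack signatures at all.

```python
# Minimal pure-Python DBSCAN (illustrative parameters, toy data).
import math

def dbscan(points, eps, min_pts):
    NOISE, UNSEEN = -1, None
    labels = [UNSEEN] * len(points)

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not UNSEEN:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:          # not a core point: mark as noise
            labels[i] = NOISE
            continue
        labels[i] = cluster              # start a new cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == NOISE:       # border point reclaimed by cluster
                labels[j] = cluster
            if labels[j] is not UNSEEN:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:   # core point: keep expanding
                seeds.extend(j_nbrs)
        cluster += 1
    return labels

# Dense grid of "normal" traffic features plus two far-away anomalies
# standing in for previously unseen attacks.
normal = [(x * 0.1, y * 0.1) for x in range(10) for y in range(10)]
anomalies = [(5.0, 5.0), (-4.0, 6.0)]
labels = dbscan(normal + anomalies, eps=0.15, min_pts=4)
print(labels[-2:])   # → [-1, -1]: both anomalies flagged as noise
```

The normal traffic forms one dense cluster, while both anomalous points fall below the density threshold and are labelled `-1` (noise), which is exactly the property that lets an unsupervised detector surface attack types it was never trained on.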
3. Theoretical Limits
The documentation acknowledges that while GAN hardening significantly increases the difficulty for attackers, it does not guarantee perfect immunity. It is "still possible for an attacker to generate new AEs that can cause the model to behave improperly," though producing effective evasions becomes significantly harder and requires larger, more detectable perturbations.