INV-20057
Background
Due to the sheer number of IoT devices being deployed worldwide, practical spectrum knowledge extraction techniques are needed to understand the wireless environment in real time. With this understanding, more reactive, intelligent, and secure wireless protocols, systems, and architectures can be designed.
Recent advances in wireless deep learning have demonstrated its great potential. However, neural networks have been shown to be vulnerable to “hacking” through carefully crafted small-scale perturbations of their inputs, a field of study known as adversarial machine learning (AML). Intuitively, the ease with which malicious wireless agents can find such adversarial examples directly limits the applicability of neural networks to problems in the wireless domain.
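To make the notion of a “small-scale perturbation” concrete, here is a minimal, hypothetical sketch using the fast gradient sign method (FGSM) against a toy logistic-regression classifier. This is standard AML background, not the inventors’ method; all weights, inputs, and constants below are illustrative stand-ins.

```python
# Illustrative FGSM sketch on a toy classifier -- NOT the patented method.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=16)          # fixed, known classifier weights (hypothetical)
x = rng.normal(size=16)          # a clean input "signal"
y = 1.0                          # true label (1 = device recognized)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Classifier confidence that the input belongs to class 1."""
    return sigmoid(w @ x)

# Gradient of the cross-entropy loss with respect to the input x
grad_x = (predict(x) - y) * w

# FGSM: one step along the sign of the gradient, bounded by epsilon,
# so the perturbation stays small in every sample
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

print(predict(x), predict(x_adv))  # confidence drops on the perturbed input
```

Even though each sample of `x_adv` differs from `x` by at most `eps`, the classifier’s confidence in the true label measurably decreases.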
Adversaries attempt to fool or mislead machine learning classifiers into misidentifying a device based on its signal or transmission. The ultimate goal of this work is to develop countermeasures so that IoT devices cannot be hacked in this way.
Technology Overview
Deep learning techniques can classify spectrum phenomena (e.g., waveform modulation) with accuracy levels that were once thought impossible. Despite many recent advances in this field, extensive work in computer vision has demonstrated that an adversary can “crack” a classifier by designing inputs that “steer” it away from the ground truth.
In this technology, Northeastern University researchers postulated a series of adversarial attacks and mathematically formulated a Generalized Wireless Adversarial Machine Learning Problem (GWAP) to analyze the combined effect of the wireless channel and the adversarial waveform on the efficacy of the attacks.
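The GWAP formulation itself is not reproduced here. The toy example below only illustrates the core complication it formalizes: the adversarial waveform is distorted by the wireless channel before the classifier sees it, so the attack must be optimized through the channel under a transmit-power constraint. The flat-fading channel model, the linear classifier, and every constant are hypothetical stand-ins, not the inventors’ model.

```python
# Hedged sketch of optimizing a perturbation *through* a channel.
import numpy as np

rng = np.random.default_rng(1)

n = 8
h = 0.7                                  # flat-fading channel gain (assumed known)
w = rng.normal(size=n)                   # toy linear classifier score weights
x = rng.normal(size=n)                   # legitimate waveform samples

def score(x_tx):
    """Classifier score at the receiver; noise omitted for clarity."""
    return w @ (h * x_tx)

budget = 0.1                             # transmit-power constraint on the attack

# Naive attack: a random perturbation scaled to the power budget
rand_delta = rng.normal(size=n)
rand_delta *= budget / np.linalg.norm(rand_delta)

# Channel-aware attack: for this linear toy model, the score-minimizing
# perturbation under the budget points against h * w
opt_delta = -np.sign(h) * budget * w / np.linalg.norm(w)

print(score(x), score(x + rand_delta), score(x + opt_delta))
```

Under the same power budget, the channel-aware perturbation moves the classifier score further than the random one, which is why the channel must appear inside the attack optimization rather than be ignored.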
The presence of non-stationarity makes wireless adversarial machine learning significantly more challenging. NU researchers are therefore developing an algorithm to solve the GWAP in two settings:
- a “white box” setting where the adversary has access to the deep learning model
- a “black box” setting where the deep learning model is not available
For the latter, the researchers developed a new neural network architecture, WaveNet, which combines concepts from deep learning and signal processing to “hack” a classifier based only on its output.
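The WaveNet architecture itself is not reproduced here. The sketch below only illustrates the black-box setting it operates in: the adversary can query the classifier’s output but cannot see its weights. A classic way to attack in that setting (used here purely for illustration) is zeroth-order optimization, estimating the input gradient from output queries via finite differences; all names and constants are hypothetical.

```python
# Black-box setting sketch -- NOT the WaveNet architecture.
import numpy as np

rng = np.random.default_rng(2)

n = 16
w_hidden = rng.normal(size=n)            # classifier weights, hidden from the attacker

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def query(x):
    """Black-box oracle: returns only the classifier's output confidence."""
    return sigmoid(w_hidden @ x)

def estimate_gradient(x, mu=1e-4):
    """Coordinate-wise finite differences built from output queries alone."""
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = mu
        g[i] = (query(x + e) - query(x - e)) / (2 * mu)
    return g

x = rng.normal(size=n)
g = estimate_gradient(x)                 # no access to w_hidden needed

eps = 0.3
x_adv = x - eps * np.sign(g)             # push the confidence down, FGSM-style

print(query(x), query(x_adv))
```

The attack drives the classifier’s confidence down using nothing but 2·n output queries per gradient estimate, which is what makes the black-box setting both practical for an adversary and important to defend against.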
Key Benefits
- Demonstrated feasibility of wireless adversarial machine learning algorithms on real-world datasets and models
- Enables countermeasures that prevent neural networks from being hacked
- Decreases classifier accuracy by up to 3x while keeping waveform distortion to a minimum
Commercial Applications
- More robust machine learning models in the wireless domain
- Tactical applications (cracking radio fingerprinting, modulation recognition)