Description:

INV-21107

 

Background

The explosive growth of wireless technology and the need to share scarce spectrum have resulted in many scenarios where different protocols and devices must coexist in the same frequency bands. Given such heterogeneity in operation, gaining a holistic awareness of ongoing activity in a spectrum band poses complex challenges and typically relies on signal analysis. However, signal analysis is difficult to model with analytical approaches, which motivates the use of machine learning methods. The machine learning methods proposed for such purposes are based on a special class of architectures called convolutional neural networks (CNNs), which require considerable power and compute resources for training and inference, and incur additional latency in delivering data to a remote mobile edge computing center. These limitations raise new challenges for real-time learning in wireless applications, as it is difficult to ensure that inference decisions are relayed efficiently and in a timely manner between the field sensor and the computing center. Thus, there is a need to enable fast inference on trained machine learning architectures at the sensor location, without specialized computing hardware.

 

Technology Overview

Researchers at Northeastern have developed a first-of-its-kind over-the-air convolution, demonstrated as the key processing step for inference tasks in a CNN. The architecture, called “AirNN”, engineers the ambient wireless propagation environment through reconfigurable intelligent surfaces (RIS). Unlike classical communication, where the receiver must react to the channel-induced transformation, generally represented as a finite impulse response (FIR) filter, AirNN proactively creates signal reflections that emulate specific FIR filters through the RIS. This approach to processor-free inference has been experimentally demonstrated for an example modulation classification task in a testbed of custom-designed RIS. The architecture does not (i) require signal storage (which can easily reach several GB for seconds of IQ samples), (ii) incur data forwarding latency to an edge computing server, or (iii) consume power in a dedicated processor/GPU.
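The core idea above, that a multipath channel acts as an FIR filter, so a suitably configured RIS can perform a CNN layer's convolution "over the air", can be illustrated with a minimal numerical sketch. This is an assumption-laden toy model, not the AirNN implementation: the kernel values, signal, and `fir_channel` helper are hypothetical, and the real system operates on RF reflections rather than stored samples.

```python
import numpy as np

def fir_channel(x, taps):
    """Hypothetical stand-in for the RIS-shaped wireless channel: its
    output is the transmitted samples x convolved with the channel
    impulse response (the FIR taps)."""
    return np.convolve(x, taps, mode="full")

# Hypothetical 1-D convolution kernel, e.g. learned by a trained CNN.
# In the AirNN concept, the RIS would be configured so the channel's
# impulse response matches these taps.
kernel = np.array([0.5, -0.25, 0.1])

# Toy real-valued stand-in for a short run of IQ samples.
x = np.array([1.0, 2.0, 3.0, 4.0])

# Passing the signal through the emulated channel yields the same
# result as computing the CNN layer's convolution in a processor.
y_air = fir_channel(x, kernel)
y_cnn = np.convolve(x, kernel, mode="full")
print(np.allclose(y_air, y_cnn))  # True
```

The equivalence shown here is what lets AirNN replace the stored-signal, processor-bound convolution step with the propagation environment itself.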

 

Benefits

  • Enables processor-free inference for neural networks for use in a range of applications
  • Does not require signal storage
  • Does not incur data forwarding latency to an edge computing server
  • Does not consume power in a dedicated processor/GPU

 

Applications

  • Next generation of wireless communications
  • Sustainable and intelligent infrastructures
  • Public safety and military applications

 

Opportunity

  • Commercial partner
  • Research collaboration
  • Licensing

 

Patent Information:
Category(s):
-Networks
For Information, Contact:
Mark Saulich
Associate Director of Commercialization
Northeastern University
m.saulich@northeastern.edu
Inventors:
Kaushik Chowdhury
Yousof Naderi
Ufuk Muncuk
Keywords: