Research: Neuromorphic Computing and Chip Design

In our one-of-a-kind lab, we conduct joint research in the hardware and software domains of Neuromorphic Computing, ranging from low-level memristors to high-level algorithms for Spiking Neural Networks, with applications in the Wireless Communication domain and more!

This particular research setting enables us not only to contribute theoretically to Neuromorphic Computing, but also to tightly couple and evaluate the algorithms on our specialized neuromorphic hardware. This greatly aids in bringing brain-inspired AI technologies closer to real-world deployment, with applications in a variety of domains. Thank you for taking interest in our research; take a look below at what, why, and how we do research in our lab, and if enthusiastically intrigued, feel free to contact Dr. Yang (Cindy) Yi.

Research on spike-encoding schemes began decades ago, focusing on Rate Encoding and Temporal Encoding. Rate Encoding represents an input simply by its spike count, which is robust but offers low data density. Temporal Encoding carries information in spike timing and includes the Time-to-First-Spike (TTFS) and Interspike-Interval (ISI) methods. Multiplexing Encoding combines schemes for higher data capacity, offering stability and efficiency in noisy environments; our ASIC designs for ISI and multiplexing encoders exemplify this approach.
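As a rough illustration of how these schemes differ, the sketch below encodes normalized samples with rate, TTFS, and ISI encoding in plain Python. The window length and interval bounds are arbitrary illustrative choices, not parameters of our ASIC encoders.

import numpy as np

def rate_encode(value, window=100, rng=np.random.default_rng(0)):
    # Rate encoding: a normalized value (0..1) sets the spike probability,
    # so information is carried by spike count -- simple but low data density.
    return (rng.random(window) < value).astype(int)

def ttfs_encode(value, window=100):
    # Time-to-First-Spike: a larger value fires earlier, so information
    # is carried by the latency of a single spike.
    train = np.zeros(window, dtype=int)
    train[int(round((1.0 - value) * (window - 1)))] = 1
    return train

def isi_encode(values, gap_min=2, gap_max=20):
    # Interspike-Interval: each value sets the gap between consecutive
    # spikes, packing several values into one spike train.
    gaps = [int(round(gap_min + v * (gap_max - gap_min))) for v in values]
    train = np.zeros(sum(gaps) + 1, dtype=int)
    t = 0
    train[t] = 1
    for g in gaps:
        t += g
        train[t] = 1
    return train

# Encode the same normalized sample three ways.
x = 0.7
print(rate_encode(x).sum(), "spikes (rate)")
print(np.argmax(ttfs_encode(x)), "= first-spike time (TTFS)")
print(np.flatnonzero(isi_encode([0.7, 0.2, 0.9])), "= spike times (ISI)")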

Memristor crossbars address inefficiencies in AI applications by performing vector-matrix operations in-memory, reducing the energy consumption and latency inherent in traditional von Neumann architectures. Memristors, an emerging eNVM technology, emulate neural functions and enable neuromorphic hardware such as spiking neural networks and reservoir computing, advancing energy-efficient computing architectures.
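The in-memory computation rests on Ohm's law and Kirchhoff's current law: each cross-point stores a conductance, and applying row voltages yields column currents that are exactly a vector-matrix product. The behavioral sketch below assumes an idealized linear device and a differential pair of columns for signed weights; the conductance range is illustrative, not a measured device parameter.

import numpy as np

g_min, g_max = 1e-6, 1e-4          # assumed conductance range (siemens)

def weights_to_conductances(W):
    # Map signed weights onto two crossbars (positive/negative columns).
    w_max = np.abs(W).max() or 1.0
    scale = (g_max - g_min) / w_max
    G_pos = g_min + scale * np.clip(W, 0, None)
    G_neg = g_min + scale * np.clip(-W, 0, None)
    return G_pos, G_neg

def crossbar_vmm(V, G_pos, G_neg):
    # Column currents I = V @ G (Ohm + Kirchhoff); the differential pair
    # recovers signed products in a single in-memory step.
    return V @ G_pos - V @ G_neg

W = np.array([[0.5, -0.3], [0.2, 0.8], [-0.6, 0.1]])  # 3 inputs, 2 outputs
V = np.array([0.1, 0.2, -0.1])                        # input voltages
G_pos, G_neg = weights_to_conductances(W)
print(crossbar_vmm(V, G_pos, G_neg))                  # proportional to V @ W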

Spiking Neural Networks (SNNs) use spikes for information processing, in contrast with traditional Artificial Neural Networks (ANNs) that use continuous real values. SNNs offer potential advantages in robustness and power efficiency, particularly on specialized neuromorphic hardware such as Intel’s Loihi, a promising low-power AI platform suited to edge and battery-powered systems and a focus of our lab’s research.
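For readers new to SNNs, the common building block is the leaky integrate-and-fire (LIF) neuron: membrane potential integrates input, leaks over time, and emits a binary spike on crossing a threshold. The minimal sketch below uses illustrative time constants and drive currents, not Loihi parameters.

import numpy as np

def lif_neuron(input_current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    v, spikes = 0.0, []
    for i in input_current:
        v += dt * (-v / tau + i)        # leaky integration of input current
        if v >= v_th:                   # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset                 # reset membrane potential after firing
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(1)
current = 0.08 + 0.05 * rng.random(200)     # noisy constant drive
out = lif_neuron(current)
print(out.sum(), "spikes in 200 steps")     # sparse, event-driven binary output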

Brain-inspired Reservoir Computing, such as Echo State Networks (ESNs), offers efficient, data-driven algorithm design for communication systems. It addresses challenges in 5G/6G networks, such as scalability with massive MIMO and real-world robustness, demonstrating superior performance in SDR-based MIMO-OFDM symbol detection with enhanced power efficiency and resilience.
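The efficiency comes from the fact that only a linear readout is trained while the random recurrent reservoir stays fixed. The toy sketch below recovers symbols sent over an assumed two-tap channel with BPSK signaling; it illustrates the ESN mechanics only and is not our SDR-based MIMO-OFDM setup.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))          # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))            # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # spectral radius < 1

def run_reservoir(u):
    # Drive the fixed reservoir with the received samples; collect states.
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: recover transmitted BPSK symbols from a noisy two-tap channel.
symbols = rng.choice([-1.0, 1.0], 2000)
received = symbols + 0.4 * np.roll(symbols, 1) + 0.1 * rng.standard_normal(2000)

X = run_reservoir(received)
ridge = 1e-6                                          # ridge-regression readout
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ symbols)
detected = np.sign(X @ W_out)
print("symbol error rate:", np.mean(detected != symbols))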

By departing from the von Neumann architecture, Neuromorphic Computing overcomes the memory bottleneck to enable low-power, low-cost, and low-latency designs. FPGA-based implementations optimize neuromorphic systems, improving efficiency on temporal tasks compared with traditional RNNs. Our experimental architectural adaptations, such as ESN and Delayed Feedback Reservoir (DFR) implementations on FPGAs, explore novel circuit designs for future applications.
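As a conceptual sketch of why a DFR maps well onto compact hardware, the code below emulates the usual structure of a delay-based reservoir: a single shared nonlinear node plus a delay line of "virtual" neurons replaces a large random network. The mask, gains, and sizes are illustrative assumptions, not our FPGA design.

import numpy as np

rng = np.random.default_rng(0)
n_virtual = 50                                 # virtual nodes along the delay line
mask = rng.uniform(-1.0, 1.0, n_virtual)       # fixed input mask

def dfr_states(u, feedback_gain=0.8, input_gain=0.5):
    delay_line = np.zeros(n_virtual)
    states = []
    for u_t in u:
        for k in range(n_virtual):
            # One shared nonlinearity mixes the masked input with the value
            # returning from the previous trip around the delay loop.
            delay_line[k] = np.tanh(input_gain * mask[k] * u_t
                                    + feedback_gain * delay_line[k])
        states.append(delay_line.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 8 * np.pi, 400))     # toy temporal input
X = dfr_states(u)                              # (time steps, virtual nodes)
print(X.shape)                                 # a linear readout would train on X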