Research

Vision

My research focus is computational analog systems that embed intelligence in the physical environment to increase the spectral efficiency of communications and the resolution of sensing. Today, the physical environment is the fundamental barrier to wireless systems: radars cannot see through objects or around corners, and communication links degrade the moment direct signal paths are blocked by walls or other obstacles. For decades, researchers have improved endpoints like phones and radars with more antennas, more power, and more complex digital processing. However, endpoints can only shape their own transmissions, not the propagation environment.

Instead of treating the environment as an adversary, my work transforms it into an active, intelligent part of the system. Strategically deployed on buildings and roadsides, my embedded hardware maintains connectivity through walls, reflects signals around corners, and detects hidden objects by sensing scattered reflections. My devices move digital processing workloads into the analog domain, performing computation directly on physical signals and manipulating them almost instantaneously with minimal power.

Dual-Mode Metamaterial Surface for mmWave Networks

High-frequency millimeter-wave (mmWave) signals (24-71 GHz) deliver multi-Gbit/s data rates thanks to their massive bandwidth. However, they cannot penetrate walls and are easily blocked by obstacles, creating dead zones both indoors and outdoors. I developed programmable surfaces that mount on walls and vehicles to control how mmWave signals propagate. Think of them as smart walls that, on demand, can make themselves transparent like glass or reflective like a mirror to radio waves. They consist of over 4,000 tiny programmable metamaterial elements, artificially engineered structures that manipulate radio waves. Acting like a microscopic array of adjustable mirrors, these elements control how signals bend almost instantaneously. By electronically configuring them, a single surface can create new paths through itself, reflect signals around obstacles, and shape beams into complex patterns.

mmWall (NSDI ‘23, HotMobile ‘21) is the first steerable metamaterial surface that can refract mmWave signals through itself or reflect them around obstacles. mmWall dynamically switches between two modes: (1) glass mode, steering outdoor signals through the surface to reach indoor users, and (2) mirror mode, reflecting signals around obstacles to reach blind spots. The core innovation is a novel, see-through 3D structure: horizontally stacked ribs manipulate signals as they propagate through the structure itself. Unlike repeaters that receive, decode, and retransmit entire packets, mmWall simply redirects passing waves, bypassing the complexity and latency of digital processing. I also co-designed a link-layer protocol that leaves existing cellular and Wi-Fi systems unchanged. The tablet-sized (10×20 cm) prototype, which I designed and custom-built over three years, steers over a 320° range with 86% maximum efficiency while consuming hundreds of microwatts of power. It achieves 29-30 dB maximum gain, ensuring >20 dB signal strength across an entire 10×8 m room and eliminating the dead zones caused by wall blockage. This work was highlighted in media outlets including Princeton News and TechXplore. Demo videos are available at https://youtu.be/vEQYQPOq1qw.
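To make the beam-steering idea concrete, the sketch below applies the standard phased-array phase-gradient rule: each element adds a progressive phase shift so the aggregate wavefront tilts toward a target angle, in either glass (refract) or mirror (reflect) mode. The element count, spacing, frequency, and controller interface here are illustrative assumptions, not mmWall's actual hardware design, which uses custom tunable meta-atoms.

```python
import numpy as np

def steering_phases(n_elements, spacing_m, freq_hz, angle_deg):
    """Per-element phase shifts (radians) that tilt an outgoing
    wavefront toward angle_deg, via the phased-array phase-gradient rule."""
    c = 3e8                       # speed of light (m/s)
    wavelength = c / freq_hz
    k = 2 * np.pi / wavelength    # wavenumber
    idx = np.arange(n_elements)
    # Progressive phase across the aperture, wrapped to [0, 2*pi)
    return np.mod(-k * spacing_m * idx * np.sin(np.deg2rad(angle_deg)),
                  2 * np.pi)

def configure(surface_mode, angle_deg):
    """Toy controller: pick 'glass' (refract through) or 'mirror'
    (reflect) mode, then compute the phase profile for the target angle.
    Dimensions and frequency are illustrative values only."""
    assert surface_mode in ("glass", "mirror")
    phases = steering_phases(n_elements=64, spacing_m=0.002,
                             freq_hz=28e9, angle_deg=angle_deg)
    return surface_mode, phases

mode, phases = configure("mirror", angle_deg=30.0)
```

Switching between the two modes then amounts to reprogramming element states rather than decoding and retransmitting packets, which is why the surface avoids digital-processing latency.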


Dual-Beam Metamaterial Surface for Vehicular Networks

In high-mobility scenarios, mmWave communications suffer frequent disruptions due to physical obstructions and rapid cell transitions. To address this challenge, I developed Wall-Street (MobiCom ’26, MobiCom ’24 demo), the first metamaterial surface that simultaneously controls two independent mmWave beams for reliable roadside networking. Wall-Street works in two modes: (1) dual-link mode, in which the surface combines signals from two cells, maintaining connectivity even if one link fails, and (2) search mode, in which the surface scans for stronger cells on behalf of in-vehicle users, ensuring reliable cell transitions. I co-designed a cross-layer protocol that performs a single, batched cell switch for all in-vehicle users, reducing user overhead and energy consumption. Implemented on the Rutgers COSMOS testbed, my system achieves a 78% throughput improvement and a 64% reduction in connection outages. The prototype was demonstrated live at MobiCom ’24 and 6G-XCEL, and demo videos are available at https://youtu.be/35iiBFYVkZY.
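The dual-link and batched-handover logic above can be sketched as a toy policy: keep both beams while both links are healthy, fall back to the survivor when one fails, and retarget all in-vehicle users with one surface-side switch. The field names, threshold, and decision rule are hypothetical stand-ins for illustration, not Wall-Street's actual protocol state machine.

```python
from dataclasses import dataclass

@dataclass
class Link:
    cell_id: str
    snr_db: float
    alive: bool

def dual_link_decision(primary: Link, secondary: Link,
                       outage_thresh_db: float = 5.0):
    """Toy dual-link policy: combine two healthy links, fall back to
    whichever one survives, and report an outage only if both fail."""
    ok = [l for l in (primary, secondary)
          if l.alive and l.snr_db >= outage_thresh_db]
    if len(ok) == 2:
        return "combine", [primary.cell_id, secondary.cell_id]
    if len(ok) == 1:
        return "fallback", [ok[0].cell_id]
    return "outage", []

def batched_handover(users, new_cell):
    """One surface-side switch retargets every in-vehicle user at once,
    instead of per-user handovers."""
    return {u: new_cell for u in users}
```

The batched switch is what amortizes handover cost: the surface transitions once, and every user behind it follows, rather than each user signaling the network individually.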

Multi-Frequency Metamaterial Surface for Dynamic Networks

I have built metamaterial surfaces that operate simultaneously or dynamically across multiple frequencies to address more complex spectral environments. First, satellite networks like Starlink use different frequencies for sending (uplink) and receiving (downlink) data. I built Wall-E (HotNets ’23), the first surface that steers two distinct frequency bands (10 and 15 GHz) with a single control. Simultaneously steering the uplink and downlink beams simplifies the control overhead for satellite networks. Second, private shared-spectrum networks dynamically change their operating frequencies. I developed WaveFlex (CoNEXT ’24), the first adaptive surface that tunes its operating frequency in real time to maintain a reliable link as the network hops between frequencies. WaveFlex integrates a customized 5G channel monitor to sniff network conditions and reconfigure autonomously, eliminating the need to exchange control messages. Tested with commercial cells and the Google Spectrum Access System, WaveFlex provides an 8.58 dB SNR improvement and a 10.77 Mbps throughput gain. This work was covered by Hackster.io and Arduino.
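WaveFlex's sniff-and-retune behavior can be illustrated as a simple tracking loop: if the monitored channel center drifts outside the surface's current passband, the surface retunes to follow the hop; otherwise it stays put. The function name, tolerance, and monitor interface are assumptions for illustration, not WaveFlex's actual control pipeline.

```python
def follow_hops(observed_center_hz, surface_center_hz, tol_hz=5e6):
    """Toy frequency-tracking step: retune the surface only when the
    sniffed channel center leaves the current passband (tol_hz wide).
    Returns the surface's new center frequency."""
    if abs(observed_center_hz - surface_center_hz) > tol_hz:
        return observed_center_hz  # network hopped: retune to follow
    return surface_center_hz       # still in band: no reconfiguration

center = 3.55e9                    # illustrative CBRS-range starting point
for sniffed in (3.55e9, 3.61e9, 3.61e9, 3.58e9):
    center = follow_hops(sniffed, center)
```

Because the decision is driven entirely by passive channel monitoring, the surface adapts without exchanging control messages with the network, which is the point of the design.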

AI-Assisted Massive IoT Networks

An intelligent environment needs a brain to process wireless data into actionable intelligence. Inspired by computer vision, I developed a multi-view learning algorithm that merges wireless channels from distributed sensors to model the invisible propagation environment. This approach addresses a fundamental challenge in massive Internet-of-Things (IoT) networks: access points need channel information from each device to allocate resources efficiently, but collecting this information incurs overhead that scales with the number of sensors. I developed CLCP (Best Paper at MobiHoc ’23, ToN ’25), which predicts channels across sensors to minimize network overhead. I made two key contributions: (1) I adapted multi-view learning from computer vision to wireless communication. Like reconstructing a 3D scene from photos, CLCP treats each transmission as an RF snapshot, combining sparse observations from a subset of devices into a joint representation that predicts channels for adjacent devices. (2) I developed an adaptive view combiner that merges multiple snapshots reliably under fluctuating traffic patterns and a varying number of reporting devices.
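The encode-combine-predict pipeline can be sketched as follows: each reporting device's channel is projected into a shared latent space, the views are merged into a joint representation, and that representation is decoded into a channel estimate for a device that did not report. The random projections, mean-based combiner, and dimensions below are placeholders for CLCP's learned networks, shown only to make the data flow explicit.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(channel_vec, W):
    """Per-device encoder: project an observed channel into a shared
    latent space (a stand-in for CLCP's learned encoder)."""
    return np.tanh(W @ channel_vec)

def combine(latents):
    """View combiner, sketched as a mean over however many devices
    reported this round; CLCP instead learns an adaptive merge."""
    return np.mean(latents, axis=0)

def predict(joint, V):
    """Decode the joint representation into a channel estimate for a
    device that stayed silent."""
    return V @ joint

# Illustrative dimensions: 8-dim channel vectors, 16-dim latent space.
W = rng.standard_normal((16, 8))
V = rng.standard_normal((8, 16))
observed = [rng.standard_normal(8) for _ in range(3)]  # 3 reporting devices
joint = combine([encode(h, W) for h in observed])
estimate = predict(joint, V)   # channel guess for a non-reporting device
```

Because the combiner is an order-invariant reduction over however many views arrive, the same pipeline works whether three devices report or thirty, which is what lets the overhead stay flat as the network grows.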