My research focuses on computational analog systems that embed intelligence in the physical environment to increase the spectral efficiency of communications and the resolution of sensing. Today, the physical environment is the fundamental barrier to wireless systems: radars cannot see through objects or around corners, and communication links degrade the moment direct signal paths are blocked by walls or obstacles. For decades, researchers have improved endpoints such as phones and radars with more antennas, more power, and more complex digital processing. However, endpoints can only shape their own transmissions, not the propagation environment.
Instead of treating the environment as an adversary, my work transforms it into an active, intelligent part of the system. Strategically deployed on buildings and roadsides, my embedded hardware maintains connectivity through walls, reflects signals around corners, and detects hidden objects by sensing scattered reflections. My devices shift the digital processing workload into the analog domain, performing computation directly on physical signals and manipulating them almost instantaneously with minimal power.

mmWall (NSDI ‘23, HotMobile ‘21) is the first steerable metamaterial surface that can refract mmWave signals through itself or reflect them around obstacles. mmWall dynamically switches between two modes: (1) glass mode, steering outdoor signals through the surface to reach indoor users, and (2) mirror mode, reflecting signals around obstacles to reach blind spots. The core innovation is a novel see-through 3D structure: horizontally stacked ribs manipulate signals as they propagate through the structure itself. Unlike repeaters that receive, decode, and retransmit entire packets, mmWall simply redirects passing waves, bypassing the complexity and latency of digital processing. I also co-designed a link-layer protocol that leaves existing cellular and Wi-Fi systems unchanged. The tablet-sized (10×20 cm) prototype, which I designed and custom-built over three years, steers over a 320° range with 86% maximum efficiency while consuming hundreds of microwatts of power. It achieves 29–30 dB maximum gain, ensuring >20 dB signal strength across an entire 10×8 m room and eliminating dead zones caused by wall blockage. This work was highlighted in media outlets including Princeton News and TechXplore. Demo videos, including a 3-minute summary of mmWall’s key contributions, are available at https://youtu.be/vEQYQPOq1qw.
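The steering behind both modes can be illustrated with the generalized Snell's law: to bend a wave from an incident angle to a desired outgoing angle, a metasurface imposes a linear phase gradient across its elements. The sketch below is illustrative only; the element count, spacing, and frequency are hypothetical placeholders, not mmWall's actual geometry or control code.

```python
import numpy as np

def element_phases(theta_in_deg, theta_out_deg, n_elems=76,
                   spacing_m=0.002, freq_hz=24e9):
    """Per-element phase profile for anomalous refraction/reflection.

    Generalized Snell's law: the surface must apply a phase gradient
    dphi/dx = k * (sin(theta_out) - sin(theta_in)), where k = 2*pi/lambda.
    All parameters here are illustrative, not mmWall's real design values.
    """
    lam = 3e8 / freq_hz                      # wavelength in meters
    k = 2 * np.pi / lam                      # free-space wavenumber
    grad = k * (np.sin(np.radians(theta_out_deg))
                - np.sin(np.radians(theta_in_deg)))
    x = np.arange(n_elems) * spacing_m       # element positions along the surface
    return np.mod(grad * x, 2 * np.pi)       # wrapped phase per element

# Glass mode example: refract a boresight (0°) outdoor signal toward a
# 30° indoor direction.
phases = element_phases(0, 30)
```

Switching between glass and mirror mode then amounts to reprogramming this phase profile, which is why the redirection is nearly instantaneous compared with decode-and-retransmit repeaters.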

An intelligent environment needs a brain that turns wireless data into actionable intelligence. Inspired by computer vision, I developed a multi-view learning algorithm that merges wireless channels from distributed sensors to model the invisible propagation environment. This approach addresses a fundamental challenge in massive Internet-of-Things (IoT) networks: access points need channel information from each device to allocate resources efficiently, but collecting this information incurs substantial overhead that scales with the number of sensors. I developed CLCP (Best Paper at MobiHoc ’23, ToN ’25), which predicts channels across sensors to minimize this overhead. I made two key contributions: (1) I adapted multi-view learning from computer vision to wireless communication. Just as 3D scenes are reconstructed from photos, CLCP treats each transmission as an RF snapshot, combining sparse observations from a subset of devices into a joint representation and predicting channels for adjacent devices. (2) I developed an adaptive view combiner that merges multiple snapshots reliably under fluctuating traffic patterns and a varying number of reporting devices.
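The view-combining idea can be sketched in a few lines. This is a deliberately simplified stand-in for CLCP's actual (learned, adaptive) combiner: it fuses per-sensor embeddings by masked averaging, so the joint representation stays well defined no matter how many devices happened to report. The function name, dimensions, and toy values are all hypothetical.

```python
import numpy as np

def combine_views(view_embeddings, mask):
    """Fuse per-sensor embeddings into one joint representation.

    Simplified sketch (not CLCP's real model): average only the views
    that actually reported, so a fluctuating number of observations
    still yields a consistent fused embedding.
    """
    w = mask[:, None].astype(float)              # (n_views, 1) report mask
    return (view_embeddings * w).sum(axis=0) / w.sum()

# Toy example: 4 sensors with 3-dim embeddings; only sensors 0 and 2
# reported this round. Missing views (rows 1 and 3) must not leak in.
emb = np.array([[1., 0., 0.],
                [9., 9., 9.],    # absent view, ignored by the mask
                [0., 1., 0.],
                [9., 9., 9.]])   # absent view, ignored by the mask
mask = np.array([1, 0, 1, 0])
joint = combine_views(emb, mask)  # -> [0.5, 0.5, 0.0]
```

A downstream decoder would then map this joint representation to channel predictions for the devices that did not report, which is what lets CLCP cut the per-device feedback overhead.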