WiFi Signals Reveal Human Activities Through Walls by Mapping Body Keypoints


A new open-source edge AI system called RuView is turning ordinary WiFi infrastructure into a through-wall human-sensing platform that detects body pose, vital signs, and movement patterns without a single camera, raising urgent security and surveillance concerns.

Researchers and developers have long theorized that ambient radio signals could be weaponized for passive surveillance. That theory is now production-ready code.

RuView, built by developer Reuven Cohen and available on GitHub, implements WiFi DensePose, a sensing technique originally pioneered by Carnegie Mellon University, as a practical, low-cost edge system that reconstructs full-body human poses through walls using only standard WiFi signals.

How the Attack Surface Works

At its core, the system exploits Channel State Information (CSI) metadata that WiFi hardware already collects to optimize signal transmission.

When a human body moves within a wireless environment, it distorts signal paths across dozens of OFDM subcarriers. RuView's Rust-based signal processing pipeline captures these disturbances at 54,000 frames per second, extracts amplitude and phase variations, and feeds them through a modified DensePose-RCNN deep learning architecture borrowed from computer vision.
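RuView's actual pipeline is written in Rust and is not reproduced here; as a minimal Python/NumPy sketch of the amplitude-and-phase extraction step, one common approach takes the complex CSI value per OFDM subcarrier, computes its magnitude, and unwraps and detrends the phase (removing the linear slope introduced by sampling-time offset, a standard CSI sanitization step). The function name and synthetic frame below are illustrative assumptions, not RuView's API.

```python
import numpy as np

def extract_csi_features(csi_frame):
    """Split a complex CSI vector (one value per OFDM subcarrier)
    into amplitude and calibrated phase features.

    Phase is unwrapped, then a linear trend across subcarriers
    (caused by sampling-time offset between TX and RX) is removed,
    a standard CSI sanitization step. Hypothetical sketch, not
    RuView's actual code.
    """
    amplitude = np.abs(csi_frame)
    phase = np.unwrap(np.angle(csi_frame))
    k = np.arange(len(csi_frame))
    slope, intercept = np.polyfit(k, phase, 1)   # fit the linear ramp
    calibrated_phase = phase - (slope * k + intercept)
    return amplitude, calibrated_phase

# Synthetic 64-subcarrier frame: unit gain plus a linear phase ramp.
k = np.arange(64)
frame = np.exp(1j * (0.3 * k + 0.1))
amp, phase = extract_csi_features(frame)
print(bool(np.allclose(amp, 1.0)))            # True: unit amplitude recovered
print(bool(np.abs(phase).max() < 1e-6))       # True: linear ramp removed
```

In a real deployment the amplitude and calibrated phase streams, not the raw complex values, would be what feeds the downstream pose network.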

The result is a real-time reconstruction of 24 body surface regions (arms, torso, head, and joints) mapped to UV coordinates that mirror what a camera would see, but derived entirely from RF signals.


Vital sign extraction runs in parallel: bandpass filtering at 0.1–0.5 Hz captures breathing (6–30 BPM), while 0.8–2.0 Hz filtering detects heart rate (40–120 BPM).
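The vital-sign extraction described above can be sketched with an ordinary bandpass filter and a peak in the spectrum. The sampling rate, filter order, and synthetic signal below are assumptions for illustration; the source specifies only the two frequency bands.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def dominant_bpm(signal, fs, low_hz, high_hz):
    """Bandpass-filter a CSI amplitude stream and return the
    dominant frequency in the band, expressed in cycles per
    minute (breaths or beats)."""
    b, a = butter(4, [low_hz, high_hz], btype="band", fs=fs)
    filtered = filtfilt(b, a, signal)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1 / fs)
    return freqs[np.argmax(spectrum)] * 60.0

fs = 50.0                      # assumed CSI sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)   # one minute of data
# Synthetic stream: 0.25 Hz breathing + 1.2 Hz heartbeat + noise.
stream = (np.sin(2 * np.pi * 0.25 * t)
          + 0.3 * np.sin(2 * np.pi * 1.2 * t)
          + 0.05 * np.random.default_rng(0).normal(size=t.size))

print(dominant_bpm(stream, fs, 0.1, 0.5))   # ~15 breaths/min
print(dominant_bpm(stream, fs, 0.8, 2.0))   # ~72 beats/min
```

Running both filters over the same stream is what lets breathing and heart rate be read out in parallel from a single CSI channel.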

The most alarming security dimension is the hardware barrier, or rather the near absence of one. RuView deploys on ESP32 microcontroller nodes costing approximately $1 each, forming a multistatic sensor mesh.

Four to six nodes combine 12+ overlapping signal paths for 360-degree room coverage with sub-inch accuracy, operating entirely offline with no cloud dependency.
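The "12+ overlapping signal paths" figure is consistent with counting ordered transmitter-receiver pairs in the mesh, assuming every node both transmits and receives (an assumption on my part; the source does not spell out the counting):

```python
def bistatic_paths(n_nodes):
    """Ordered TX->RX pairs in a mesh where every node both
    transmits and receives (no self-links): n * (n - 1)."""
    return n_nodes * (n_nodes - 1)

for n in (4, 5, 6):
    print(n, bistatic_paths(n))   # 4 -> 12, 5 -> 20, 6 -> 30
```

Four nodes already yield 12 bistatic paths, which is why a handful of $1 devices suffices for full-room coverage.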

Through-wall detection extends to a depth of up to 5 meters using Fresnel zone geometry and multipath modeling. The system learns each room's RF "fingerprint" over time, then subtracts the static environment to isolate human motion; this persistent field model can also detect signal spoofing attempts. Presence detection latency is under 1 millisecond.
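The persistent field model is not documented in detail in the source; one hypothetical way to sketch the idea is a running mean and variance of CSI amplitudes learned while the room is static, with frames that deviate strongly flagged as motion. Class and parameter names below are illustrative.

```python
import numpy as np

class StaticFieldModel:
    """Learn a room's static RF 'fingerprint' as a running
    mean/variance of CSI amplitudes, then flag frames that
    deviate from it. A hypothetical sketch of the
    persistent-field idea, not RuView's implementation."""

    def __init__(self, n_subcarriers, alpha=0.01, threshold=3.0):
        self.mean = np.zeros(n_subcarriers)
        self.var = np.ones(n_subcarriers)
        self.alpha = alpha          # adaptation rate
        self.threshold = threshold  # mean z-score for "motion"

    def update(self, amplitudes):
        z = np.abs(amplitudes - self.mean) / np.sqrt(self.var)
        motion = z.mean() > self.threshold
        if not motion:
            # Only absorb frames into the background model
            # when the room looks static.
            delta = amplitudes - self.mean
            self.mean += self.alpha * delta
            self.var += self.alpha * (delta**2 - self.var)
        return motion

rng = np.random.default_rng(1)
model = StaticFieldModel(64)
# Warm up on an empty room (unit amplitude + small noise).
for _ in range(2000):
    model.update(1.0 + 0.02 * rng.normal(size=64))
print(model.update(1.0 + 0.02 * rng.normal(size=64)))  # False (static)
print(model.update(1.5 + 0.02 * rng.normal(size=64)))  # True (motion)
```

Because a spoofed signal also fails to match the learned fingerprint, the same deviation check is what gives such a model its anti-spoofing side effect.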

Unlike cameras, which are regulated under GDPR, CCPA, and physical installation laws, passive WiFi CSI sensing is invisible and requires no physical access to the target environment.

Legal analysis has noted that “it’s quite difficult to ask pedestrians for permission in advance,” and consent frameworks collapse entirely when sensing is passive.

GDPR already classifies WiFi tracking identifiers as personal data, yet CSI-based body pose extraction exists in a regulatory grey zone with no specific controls.

The attack scenario is straightforward: a threat actor plants a $5 ESP32 node in a building’s common area or near a WiFi access point, deploys RuView via Docker (docker pull ruvnet/wifi-densepose:latest), and begins silently mapping occupants’ movements, routines, and even biometric vitals through walls.

Security teams should treat passive RF sensing as an emerging physical-layer threat vector. Mitigations include RF shielding in sensitive facilities, monitoring for rogue ESP32-class devices on network segments, and advocating for regulatory frameworks that extend surveillance law to cover CSI-based human tracking before the technology outpaces policy entirely.
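One of the mitigations above, monitoring for rogue ESP32-class devices, can be sketched as an OUI (vendor prefix) check against observed MAC addresses. The prefixes below are illustrative examples of Espressif allocations; verify any watch list against the current IEEE OUI registry before relying on it.

```python
# Illustrative Espressif OUI prefixes; confirm against the
# current IEEE registry before using operationally.
SUSPECT_OUIS = {"24:0A:C4", "30:AE:A4", "A4:CF:12"}

def flag_suspect_devices(mac_table):
    """Return MACs whose vendor prefix (first three octets)
    matches a watch-listed microcontroller OUI."""
    flagged = []
    for mac in mac_table:
        oui = mac.upper()[:8]
        if oui in SUSPECT_OUIS:
            flagged.append(mac)
    return flagged

# Hypothetical ARP/DHCP table entries.
seen = ["24:0a:c4:11:22:33", "f0:18:98:aa:bb:cc"]
print(flag_suspect_devices(seen))  # ['24:0a:c4:11:22:33']
```

OUI matching is only a first-pass signal, since MAC addresses can be randomized or spoofed, but it cheaply surfaces unexpected microcontroller-class hardware on a segment.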
