New AI-Powered Wi-Fi Biometrics WhoFi Tracks Humans Behind Walls with 95.5% Accuracy


WhoFi recently surfaced on the public preprint repository arXiv, stunning security teams with a proof-of-concept that turns ordinary 2.4 GHz routers into covert biometric scanners.

Unlike camera-based systems, this neural pipeline fingerprints the unique way a body distorts Wi-Fi channel state information (CSI), letting an attacker identify someone from the opposite side of a plaster wall, in darkness, or through light foliage.

Early reverse-engineering shows it needs only a single-antenna transmitter and a three-antenna receiver—hardware found in many mid-range consumer access points—making large-scale deployment trivially inexpensive.
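Under a setup like the one described, a raw CSI capture can be modeled as a complex tensor of shape (packets, receive antennas, OFDM subcarriers). The sketch below is illustrative only: the dimensions are assumed from the 1-TX / 3-RX description, and the amplitude/phase preprocessing is a common CSI convention, not the authors' published code.

```python
import numpy as np

# Illustrative CSI tensor: 100 packets, 3 RX antennas, 64 OFDM subcarriers
# (dimensions assumed from the 1-TX / 3-RX setup; synthetic data, not real CSI)
rng = np.random.default_rng(0)
csi = rng.normal(size=(100, 3, 64)) + 1j * rng.normal(size=(100, 3, 64))

amplitude = np.abs(csi)                     # body-induced attenuation pattern
phase = np.unwrap(np.angle(csi), axis=-1)   # phase response, unwrapped per subcarrier

# Flatten antennas x (amplitude + phase) into one feature vector per packet
features = np.concatenate([amplitude, phase], axis=-1).reshape(100, -1)
print(features.shape)  # (100, 384)
```

Each of the 100 packets thus yields one 384-dimensional feature vector that an encoder could consume.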

Google News

Researchers initially framed WhoFi as a privacy-preserving alternative to CCTV, but its publication immediately triggered red-team interest.

Within forty-eight hours, underground forums circulated turnkey Docker images embedding the full PyTorch model and a lightweight CSI sniffer powered by the open-source NexMon firmware.

The security analysts (Danilo Avola, Daniele Pannone, Dario Montagnini, and Emad Emam) noted that the repositories already include scripts for automatic target enrollment: a would-be spy merely walks a hallway with a smartphone and captures roughly 100 Wi-Fi packets per person. From those packets, the transformer encoder—reportedly achieving 95.5% Rank-1 accuracy—learns a radio “fingerprint” that remains stable even if the subject changes clothes or carries a backpack.
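The enrollment-then-matching flow described above can be sketched in a few lines. Everything here is a stand-in: `embed` replaces the learned transformer with simple mean-pooling plus L2-normalization, and the per-subject "channel responses" are synthetic; only the gallery/probe cosine-matching structure reflects the described pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(packets):
    # Stand-in for the transformer encoder: mean-pool packet features and
    # L2-normalize. The real WhoFi encoder is learned; this stub only
    # illustrates the enrollment-and-matching flow.
    v = packets.mean(axis=0)
    return v / np.linalg.norm(v)

# Hypothetical per-subject channel signatures (384-dim feature vectors)
means = {name: rng.normal(size=384) for name in ["alice", "bob", "carol"]}

def capture(name, n_packets=100):
    # ~100 noisy CSI feature vectors per walk-by, as described above
    return means[name] + 0.1 * rng.normal(size=(n_packets, 384))

# Enrollment: one fingerprint per subject
gallery = {name: embed(capture(name)) for name in means}

# Re-identification: highest cosine similarity against the enrolled gallery
probe = embed(capture("bob"))
best = max(gallery, key=lambda n: float(gallery[n] @ probe))
print(best)  # bob
```

Because both gallery and probe embeddings are unit-normalized, the dot product is cosine similarity, which is what a Rank-1 re-identification metric scores.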

From a network-intrusion standpoint, the most alarming vector is that the malware never touches the endpoint.

All computation runs on an attacker-controlled box co-located with the access point; packet captures flow over a mirrored port, invisible to host-based EDR.

No JavaScript beacons, no phishing payloads—just passive RF collection. A single hidden SSID named “radar” is broadcast to keep the modulation parameters constant, but victims’ devices need not associate.

Detection-Evasion via In-Batch Negative Learning

Once exfiltrated CSI slices reach the GPU, WhoFi executes a persistence tactic few defenses monitor: model-level re-training. The code continually fine-tunes embeddings using an in-batch negative loss that forces fresh signatures to collapse toward their historical centroid while repelling others.

Because retraining alters only weights inside ~/models/whofi.ckpt, no new binaries hit disk, evading integrity monitors.

# whofi_persist.py — model self-refresh loop
import torch
import torch.nn.functional as F

batch_q, batch_g = sampler.next()            # passive CSI queue (query/gallery)
S_q, S_g = model(batch_q), model(batch_g)    # embed signatures (L2-normalized)
sim = torch.mm(S_q, S_g.T)                   # cosine similarity matrix
loss = F.cross_entropy(sim, torch.arange(sim.size(0)))  # in-batch negatives
optimizer.zero_grad()                        # clear stale gradients
loss.backward(); optimizer.step()            # silent in-place update

Security controls that rely on static hashes or periodic memory snapshots miss this mutation; every epoch subtly reshapes the hypersphere without spawning a new process.

Analysts can instead hunt for anomalous GPU kernels invoked by libtorch_cuda.so on otherwise headless Wi-Fi controllers or watch for persistent 20 MB-per-minute CSI traffic surges on switch mirror ports.
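The mirror-port hunting idea above can be prototyped as a simple sustained-throughput detector. The 20 MB-per-minute threshold comes from the figure in the text; the five-minute window and class design are assumptions for illustration, not tuned detection logic.

```python
from collections import deque

class CSISurgeDetector:
    """Flags sustained raw-capture traffic on a switch mirror port.
    The 20 MB/min threshold is taken from the article's figure; the
    5-minute sustain window is an illustrative assumption."""

    def __init__(self, threshold_mb=20, window_minutes=5):
        self.threshold = threshold_mb * 1024 * 1024
        self.window = deque(maxlen=window_minutes)

    def observe(self, bytes_this_minute):
        self.window.append(bytes_this_minute)
        # Alert only when every minute in a full window exceeds the threshold,
        # filtering out short bursts of legitimate traffic
        return (len(self.window) == self.window.maxlen
                and all(b >= self.threshold for b in self.window))

det = CSISurgeDetector()
for minute_bytes in [5e6, 25e6, 25e6, 25e6, 25e6, 25e6]:
    alert = det.observe(minute_bytes)
print(alert)  # True: five consecutive 25 MB minutes trip the detector
```

Requiring every sample in the window to exceed the threshold targets exactly the "persistent surge" pattern described, rather than momentary spikes.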

Encoder Architecture (Source – arXiv)

The encoder-architecture diagram depicts the lightweight six-head transformer that fuels this stealth.

Until firmware vendors expose CSI access only to signed drivers—and until SOCs learn to flag sustained raw-802.11 captures—WhoFi represents a disquieting leap in non-invasive surveillance, placing radio-frequency biometrics squarely in the attacker’s toolkit.
