As deep neural networks (DNNs) become more prevalent, concerns over their security against backdoor attacks that implant hidden malicious functionalities have grown.
Cybersecurity researchers Wenmin Chen and Xiaowei Xu recently proposed DEBA, an invisible backdoor attack that leverages singular value decomposition (SVD) to embed imperceptible triggers during model training, causing predefined malicious behaviors at inference time.
DEBA replaces the minor visual features of clean images with those of trigger images, preserving the clean images' major features so that poisoned samples remain indistinguishable.
Invisible Backdoor Attack – DEBA
Extensive evaluations show that DEBA achieves high attack success rates while maintaining the perceptual quality of poisoned images.
Furthermore, DEBA robustly evades and resists existing defense measures against backdoor attacks on DNNs.
The work highlights the escalating threat that stealthy backdoor embeddings pose to the trustworthiness of deep learning models.
The earliest backdoor attacks on deep neural networks (DNNs) embedded visible patch-based triggers as a starting point; subsequent implementations have become increasingly stealthy and invisible.
Trigger design has evolved from visible backdoors to adversarial perturbations, label-consistent poisoning, edge-based dynamic triggers, and color shifts intended to look natural.
However, some of these earlier attacks still leave visual traces that expose them, so they are not completely invisible.
Besides this, recent research shows that backdoors can also be extended to face recognition systems used in real-world applications.
Attacks that initially aimed merely to induce inference errors have evolved into covert, resiliently embedded backdoor threats, which are more dangerous for DNNs deployed across different domains because they undermine both credibility and security.
Yet it remains difficult to devise countermeasures against such disguised poisoning attacks.
As these stealthy backdoor attacks on deep neural networks (DNNs) continue to evolve, they have prompted further research into effective defenses.
Such efforts concentrate on three fronts: protecting data inputs, hardening models, and detecting suspicious outputs.
Input defenses analyze saliency maps and artifacts for poisoning-suspected anomalies. Model defenses remove backdoors by pruning neurons, fine-tuning, or distilling models.
Output detection identifies infected models by measuring prediction randomness under input perturbations.
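To make that last idea concrete, here is a minimal sketch of perturbation-entropy screening in the spirit of STRIP; the function name `strip_entropy`, the `model` callable (assumed to return softmax probabilities for a batch of images), and the 50/50 blend ratio are illustrative assumptions rather than code from the DEBA paper or any particular defense toolkit.

```python
import numpy as np

def strip_entropy(model, x, overlay_set, n=16, seed=0):
    """Blend a suspect input with n random clean images and return the
    mean prediction entropy. Backdoored inputs tend to keep predicting
    the attacker's target class under such perturbations, so their
    entropy is abnormally low compared with benign inputs."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(overlay_set), size=n, replace=False)
    blends = 0.5 * x[None, ...] + 0.5 * overlay_set[idx]    # superimposed inputs
    probs = model(blends)                                    # (n, num_classes) softmax outputs
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)   # entropy per blend
    return float(entropy.mean())
```

Benign inputs establish a baseline entropy distribution; suspect inputs whose mean entropy falls well below that baseline are flagged as likely trigger-carrying.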
However, the race between attack and defense continues, and DEBA is one of the new attacks that can bypass existing defenses by embedding invisible triggers during the training process.
Given the escalation of surreptitious model corruption and the need to deploy DNNs reliably and securely, evaluating how well such attacks hold up against the latest defenses is essential.
The proposed attack assumes the attacker can poison a portion of the training data without controlling the model architecture or training process.
During inference, attackers can only manipulate inputs.
DEBA utilizes singular value decomposition (SVD) to decompose images into singular values and vectors capturing structural information.
By replacing the smallest singular values/vectors of clean images with those from trigger images, DEBA embeds imperceptible triggers, retaining the major features of clean images while injecting minor trigger details.
This process generates poisoned images that cause targeted mispredictions during inference while appearing indistinguishable from benign samples.
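Although the authors' implementation is not reproduced here, the mechanism maps naturally onto a few lines of NumPy. In this sketch, the function name `embed_svd_trigger` and the parameter `k` (how many of the smallest singular components get swapped) are illustrative assumptions:

```python
import numpy as np

def embed_svd_trigger(clean, trigger, k):
    """Swap the k smallest singular components of a clean image channel
    with the corresponding components of a trigger image channel.
    Both inputs are 2-D arrays of the same shape."""
    Uc, Sc, Vc = np.linalg.svd(clean.astype(np.float64), full_matrices=False)
    Ut, St, Vt = np.linalg.svd(trigger.astype(np.float64), full_matrices=False)
    # np.linalg.svd returns singular values in descending order, so the
    # last k entries correspond to the minor (least significant) features.
    Uc[:, -k:] = Ut[:, -k:]
    Sc[-k:] = St[-k:]
    Vc[-k:, :] = Vt[-k:, :]
    poisoned = Uc @ np.diag(Sc) @ Vc
    return np.clip(poisoned, 0, 255)  # assumes 8-bit pixel range
```

Because only the smallest singular components are replaced, the bulk of the clean image's energy, and therefore its appearance, is preserved while the trigger information rides along in the minor features.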
The attack is evaluated under the threat model of data poisoning during training but restricted test-time access, demonstrating high attack success and robustness against existing defenses through its covert trigger embedding approach.
DEBA conducts this invisible trigger embedding in the UV color channels for enhanced efficiency and imperceptibility.
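That channel choice can be sketched with OpenCV's color-space conversion, reusing the hypothetical `embed_svd_trigger` helper above; `k=8` is an arbitrary example value, not a setting reported by the authors:

```python
import cv2
import numpy as np

def poison_image(clean_bgr, trigger_bgr, k=8):
    """Embed the SVD trigger only in the U and V chrominance channels,
    leaving luminance (Y) untouched so brightness structure is preserved."""
    clean_yuv = cv2.cvtColor(clean_bgr, cv2.COLOR_BGR2YUV).astype(np.float64)
    trig_yuv = cv2.cvtColor(trigger_bgr, cv2.COLOR_BGR2YUV).astype(np.float64)
    for ch in (1, 2):  # channel 0 is Y; 1 and 2 are U and V
        clean_yuv[:, :, ch] = embed_svd_trigger(clean_yuv[:, :, ch],
                                                trig_yuv[:, :, ch], k)
    return cv2.cvtColor(clean_yuv.clip(0, 255).astype(np.uint8),
                        cv2.COLOR_YUV2BGR)
```

Working in the chrominance channels exploits the human visual system's lower sensitivity to color detail than to luminance, which helps keep the trigger imperceptible.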
Comprehensive experiments demonstrate DEBA’s superior attack success rates and invisibility compared to prior attacks.