
- Technical analysis: Expert forensic review of audio and video content to determine whether the content has been manipulated and to generate forensic proof for stakeholders.
- Legal support: The ability to act once harmful content has been identified, including working with legal experts to coordinate takedown requests and remove malicious or defamatory content from online platforms.
- Clear communication: Public relations and communications support to help organizations craft effective messages for employees, investors and customers during a rapidly evolving incident.
The path forward: Authentication as the end state
In the long term, addressing deepfakes will likely require broad adoption of authentication and watermarking standards, much as web browsers display a lock icon to signal a secure, authenticated connection. For example, organizations may soon embed watermarks in official communications, such as press statements, interviews and earnings calls.
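To make the idea concrete, here is a minimal sketch of authenticating an official statement with a cryptographic tag. This is an illustrative analogy only: real provenance standards such as C2PA Content Credentials embed public-key signatures in the media itself, whereas this example uses a shared-secret HMAC from the Python standard library to stay self-contained, and the key and statement are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key; in practice this would live in an HSM or
# key-management service, and a public-key signature would be used so
# anyone can verify without holding a secret.
SIGNING_KEY = b"example-org-signing-key"

def watermark(statement: bytes) -> str:
    """Return a hex tag that travels alongside the official statement."""
    return hmac.new(SIGNING_KEY, statement, hashlib.sha256).hexdigest()

def verify(statement: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(watermark(statement), tag)

release = b"Q3 earnings call: revenue up 12% year over year."
tag = watermark(release)

print(verify(release, tag))                             # authentic copy
print(verify(b"Q3 earnings call: CEO resigns.", tag))   # altered copy
```

The point of the sketch is the asymmetry it creates: any edit to the statement, however small, invalidates the tag, so recipients can mechanically distinguish the official version from a manipulated one.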
Yet, watermarks will not resolve every challenge. Some authentic content, like revelations from whistleblowers, will inevitably circulate without official marks. Attackers will still be able to fake this kind of content, leaving us in a continual cat-and-mouse game, in which journalists and forensic experts must draw on alternative sources and advanced tools to verify materials. Establishing trust in digital media will remain an ongoing process, as both attackers and defenders adapt.
For business and risk professionals, the takeaway is clear: True resilience no longer depends on heuristics and trusting what we see or hear. It depends on how quickly organizations can verify reality, coordinate a response with expert support and resources, and restore trust before misinformation becomes the dominant narrative.
