CyberWire

Going Deep on Deepfakes (feat. Hany Farid)



Welcome back to The FAIK Files!

In this week’s episode:

  • We sit down with deepfake expert Hany Farid to discuss the real-world harms of synthetic media.
  • Exploring the physics of deepfake detection and why real-time streams might be easier to defend.
  • The dangers of using AI to “enhance” images and hallucinate hidden details.
  • A look at solutions like C2PA, watermarking, and the pressing need for platform accountability.

Check out Hany’s company, Get Real Security, here: https://getrealsecurity.com

Learn more about C2PA at: https://contentcredentials.org

Want to leave us a voicemail? Here’s the magic link to do just that: https://sayhi.chat/FAIK

You can also join our Discord server here: https://faik.to/discord

*** NOTES AND REFERENCES ***

What Keeps Hany Farid Up at Night?

  • The rising harms of non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM) generated by AI.
  • Voice cloning being weaponized for individual fraud and real-time deepfakes used by state-sponsored actors.
  • Why the specific tool (like Sora or face swap) matters less than the overall threat vector and resulting harm.

Deepfake Detection – APIs vs. Physics:

  • Hany’s work at UC Berkeley and his company Get Real Security.
  • Why detecting real-time manipulated video is actually easier than identifying well-crafted, file-based deepfakes online.
  • How physical camera imperfections (noise) differ from the artifacts introduced by AI upsampling and diffusion models.

The Danger of AI “Enhancement”:

  • Why using AI to “remove a ski mask” or enhance low-res footage is not like CSI—it’s just hallucinating statistically consistent pixels.
  • AI lacks a notion of uncertainty, leading to dangerous misidentifications and real-world harm.
  • The “Liar’s Dividend”: When the flood of AI slop makes people doubt the authenticity of real, unedited evidence.

Safeguards, C2PA, and Platform Responsibility:

  • The role of watermarks and C2PA content credentials acting as “nutrition labels” for digital media.
  • Clarifying that C2PA relies on signed credentials and a trust list, not the blockchain.
  • The dire need for social media platforms to enforce semantic guardrails and take responsibility for the content they amplify.
  • Find more of Hany’s work by searching YouTube for his lectures on “Physics-Based Photo Forensics.”

*** THE BOILERPLATE ***

About The FAIK Files:

The FAIK Files is an offshoot project from Perry Carpenter’s most recent book, FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions.

Check out Perry & Mason’s other show, the Digital Folklore Podcast:

Want to connect with us? Here’s how:

Connect with Perry:

Connect with Mason:
