Here’s How Violent Extremists Are Exploiting Generative AI Tools


“We’re going to partner with Microsoft to figure out if there are ways using our archive of material to create a sort of gen AI detection system in order to counter the emerging threat that gen AI will be used for terrorist content at scale,” Hadley says. “We’re confident that gen AI can be used to defend against hostile uses of gen AI.”

The partnership was announced today, on the eve of the Christchurch Call Leaders’ Summit in Paris; the Christchurch Call is an initiative designed to eradicate terrorist and violent extremist content from the internet.

“The use of digital platforms to spread violent extremist content is an urgent issue with real-world consequences,” Brad Smith, vice chair and president at Microsoft, said in a statement. “By combining Tech Against Terrorism’s capabilities with AI, we hope to help create a safer world both online and off.”

While companies like Microsoft, Google, and Facebook all have their own AI research divisions and are likely already deploying their own resources to combat this issue, the new initiative will ultimately aid companies that can’t fight this content on their own.

“This will be particularly important for smaller platforms that don’t have their own AI research centers,” Hadley says. “Even now, with the hashing databases, smaller platforms can just become overwhelmed by this content.”
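To illustrate why even hash matching can fall short, here is a minimal sketch of how a small platform might check uploads against a shared database of known extremist media. The SHA-256 choice, the file format, and the function names are assumptions for illustration only; the article does not describe the specific hashing databases involved, and production systems typically also use perceptual hashes (such as PDQ for images) to catch near-duplicates.

```python
import hashlib

# Hypothetical sketch of exact-hash matching against a shared database
# of known extremist media. File names, formats, and the use of SHA-256
# are illustrative assumptions, not a description of any real system.

def load_known_hashes(path: str) -> set[str]:
    """Load one hex-encoded hash per line into a lookup set."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of an uploaded file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_content(upload_path: str, known_hashes: set[str]) -> bool:
    """Flag an upload whose hash matches the shared database."""
    return sha256_of_file(upload_path) in known_hashes

if __name__ == "__main__":
    known = load_known_hashes("known_hashes.txt")  # hypothetical hash list
    if is_known_content("upload.jpg", known):
        print("Match against hash database: route to moderation.")
    else:
        print("No match: exact hashing misses novel or altered content.")
```

The limitation this sketch makes visible is the one Hadley describes: hash matching only catches content that has already been seen and catalogued, so material generated at scale by AI tools, each item novel, slips past it and overwhelms smaller moderation teams.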

The threat of AI-generated content is not limited to extremist groups. Last month, the Internet Watch Foundation, a UK-based nonprofit that works to eradicate child exploitation content from the internet, published a report that detailed the growing presence of child sexual abuse material (CSAM) created by AI tools on the dark web.

The researchers found more than 20,000 AI-generated images posted to a single dark web CSAM forum over the course of just one month, 11,108 of which they judged most likely to be criminal. As the IWF researchers wrote in their report, “These AI images can be so convincing that they are indistinguishable from real images.”


