Nearly 500 scientists and researchers have signed an open letter warning that the latest version of the EU’s Chat Control Proposal would weaken digital security while failing to deliver meaningful protection for children. The signatories represent 34 countries and include well-known cryptographer Bart Preneel of KU Leuven, along with researchers from leading institutions such as ETH Zurich, Johns Hopkins University, and the Max Planck Institute for Security and Privacy.
The letter responds to a revised draft of the regulation, published on July 24, which narrowed the scope of scanning requirements to images and URLs; earlier drafts also required detection in text and audio communications. While the researchers welcome certain changes, such as provisions for faster reporting and removal of abusive material, they argue that the fundamental flaws remain.
Main concerns with detection technology
The researchers state that the technology required by the proposal cannot reliably detect known or new child sexual abuse material (CSAM) at the scale of hundreds of millions of users. They warn that current systems produce too many false positives and false negatives to be effective. Even minor alterations to an image can bypass state-of-the-art detectors, and evasion tactics are expected to change quickly if scanning becomes mandatory.
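To illustrate why minor alterations can defeat detection, here is a minimal sketch of perceptual "average hashing", one simple member of the robust-hashing family that known-CSAM detectors typically build on. The 4x4 grayscale grids, thresholds, and match logic below are toy assumptions for illustration, not any deployed system: a benign change (uniform brightening) leaves the hash intact, while a small targeted edit shifts enough bits to break the match.

```python
# Toy demonstration of perceptual average hashing and targeted evasion.
# Images are plain 4x4 grayscale grids (values 0-255) for illustration only.

def average_hash(pixels):
    """Hash an image as a bit tuple: 1 where a pixel is above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes (lower = closer match)."""
    return sum(a != b for a, b in zip(h1, h2))

original = [
    [200, 200,  50,  50],
    [200, 200,  50,  50],
    [ 50,  50, 200, 200],
    [ 50,  50, 200, 200],
]

# A uniformly brightened copy: every pixel shifts with the mean,
# so the hash is unchanged and the image still matches.
brightened = [[min(p + 10, 255) for p in row] for row in original]

# A small targeted edit: two pixels pushed across the mean threshold,
# flipping bits in the hash and moving the image out of match range.
evaded = [row[:] for row in original]
evaded[0][2] = 160
evaded[1][2] = 160

h_orig = average_hash(original)
print(hamming(h_orig, average_hash(brightened)))  # 0: benign change survives
print(hamming(h_orig, average_hash(evaded)))      # 2: targeted edit breaks the match
```

Real detectors use far more robust hashes and larger match radii, but the underlying cat-and-mouse dynamic the researchers describe is the same: any fixed similarity threshold can be probed and evaded by an adversary who controls the image.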
The letter also challenges the plan to use machine learning to identify previously unseen CSAM. According to the authors, there is no evidence that AI can distinguish CSAM from other private images, such as consensual photos between teenagers, with the accuracy needed for enforcement. They caution that such systems will be prone to mistakes and easily manipulated by those intent on sharing illegal material.
Risks to encryption and privacy
A central objection is that on-device scanning is incompatible with end-to-end encryption (E2EE). The researchers explain that scanning private data before encryption introduces a single point of failure and gives external parties access to data meant to remain private. They argue that this approach would erode the security of messaging apps like Signal and WhatsApp, which are used by citizens, journalists, politicians, and law enforcement.
Signal has already stated it would withdraw its service from the EU if the regulation requires mandatory on-device scanning. The researchers also raise concerns about function creep, noting that the same technology could later be used to scan for other types of content, such as political messages.
Concerns over age verification
The proposal includes mandatory age verification and age assessment measures. The letter argues that these controls are easily evaded through VPNs or alternative services. More fundamentally, mandatory age verification could undermine online anonymity and freedom of expression, and create dependencies on untested solutions provided by large technology companies.
Suggested path forward
The researchers call for a shift away from what they describe as a “techno-solutionist” approach focused on scanning. Instead, they recommend proven measures such as education on online safety and consent, trauma-sensitive reporting hotlines, and faster takedown of illegal content. They stress that eliminating abuse requires addressing its root causes rather than weakening digital security for everyone.