HackerOne has partnered with security and AI communities to advocate for stronger legal protections for independent researchers. Most recently, HackerOne participated in a workshop hosted by leading institutions to discuss the need for legal safeguards for third-party AI evaluators and address the gaps in current legal frameworks. Despite the strong push for change, the Librarian’s ruling provided some clarity, but ultimately fell short of granting the full legal protection requested for AI safety research.
What is the DMCA and Why Does it Matter?
DMCA Section 1201 makes it illegal to circumvent technological protection measures (TPMs) used to protect copyrighted works. Essentially, if software has security features, it’s against the law to break or otherwise bypass them—even for research purposes.
Every three years the U.S. Copyright Office considers petitions for exceptions to this restriction. In 2015, the security community advocated for and received an exception for good faith security research. This year, HackerOne advocated for broadening this exception.
While security research has legal protections under the law, it is not clear that the same protections extend to AI researchers. AI research, or red teaming, evaluates AI systems for more than just security – including safety, accuracy, discrimination, infringement, and other potentially harmful outputs. The absence of clear legal protections creates a chilling effect that may deter independent AI testing, which is crucial for the long-term resilience of the digital ecosystem—much like independent security research safeguards organizations by identifying vulnerabilities before they can cause harm.
AI platforms, in an effort to safeguard their systems, may block or ban researchers who attempt to find vulnerabilities or algorithmic flaws. In order to continue their work, researchers are sometimes forced to create new accounts or use proxy servers to bypass these access restrictions. While this circumvention is often necessary for identifying unintended behaviors and improving AI systems, in the absence of clarity around the DMCA 1201 exceptions, it comes with potential legal risk.
HackerOne joined the effort to request the Copyright Office to grant clear liability protection for good faith AI research under DMCA Sec. 1201. The process took several months and multiple rounds of comments before the Librarian of Congress issued its decision on October 28, 2024.
What Was the Ruling?
The U.S. Copyright Office considered a proposed exemption to the DMCA that would allow researchers to circumvent TPMs in order to test and improve the trustworthiness of AI systems. This exemption would have enabled independent researchers to probe AI models for biases, harmful outputs, and other issues related to fairness and accountability, without the threat of legal action.
However, the Librarian of Congress ultimately declined to grant this proposed exemption. The decision was based on two determinations:
- Insufficient Evidence: There was not enough evidence to prove that Section 1201 significantly deterred researchers from conducting the necessary red teaming and testing activities on AI models. While many researchers have raised concerns about the legal risks of conducting this type of research, the Copyright Office found that the existing framework of TPM circumvention protections did not present a significant barrier to their work.
- Non-Circumvention of TPMs: Many of the techniques employed by researchers do not actually involve circumventing TPMs in the way Section 1201 was intended to prohibit. According to the ruling, most of the research methods in question do not technically involve bypassing access controls or security measures, which means they do not fall under the DMCA’s anti-circumvention provisions.
The Implications for AI Research
While the rejection of the full exemption for AI trustworthiness research is a setback, it does provide some clarity in certain areas. The decision clearly states that many common testing methods, such as post-ban account creation, rate limiting, jailbreak prompts, and prompt injection, do not violate Section 1201. This clarification is a win for researchers, as it helps to reduce the uncertainty around these techniques and provides more legal confidence to pursue this critical AI research.
However, the ruling ultimately leaves AI researchers operating at times in a legal gray area which may result in an inability or unwillingness to fully test AI systems independently, especially in cases where flaws are deeply embedded in the technology.
As AI continues to evolve and impact all aspects of society, legal frameworks must evolve alongside these technological advancements. The additional clarity provided is welcome, but there is still much to be done to secure stronger, more comprehensive legal protections for good faith AI researchers.