DEF CON’s AI Village will host the first public assessment of large language models (LLMs) at the 31st edition of the hacker convention this August, aimed at finding bugs in and uncovering the potential for misuse of AI models.
The possibilities and the limitations of LLMs
LLMs offer countless ways to assist users’ creativity, but it also presents challenges, particularly in terms of security and privacy.
This event could shed light on the implications of using generative AI, a technology that has many potential applications but also potential repercussions that we are yet to fully comprehend.
During the conference, red teams will put LLMs from some of the leading vendors, such as Anthropic, Google, Hugging Face, NVIDIA, OpenAI, Stability, and Microsoft, to the test. They will do so on an evaluation platform developed by Scale AI.
“Traditionally, companies have solved this problem with specialized red teams. However this work has largely happened in private. The diverse issues with these models will not be resolved until more people know how to red team and assess them,” said Sven Cattell, founder of AI Village.
“Bug bounties, live hacking events, and other standard community engagements in security can be modified for machine learning model based systems. These fill two needs with one deed, addressing the harms and growing the community of researchers that know how to help.”
The aim of this exercise is to uncover both the possibilities and the limitations of LLMs. By testing these models, red teams hope to reveal any potential vulnerabilities and evaluate the extent to which LLMs can be manipulated.
The results of this red teaming exercise will be published, allowing everyone to benefit from the insights gathered.
Support from the White House
The support from the White House, the National Science Foundation’s Computer and Information Science and Engineering (CISE) Directorate, and the Congressional AI Caucus for the upcoming red teaming exercise is a clear indication of the importance they place on the use of LLMs. It also highlights the potential risks associated with such technology.
The Biden-Harris Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework are both vital initiatives aimed at promoting the responsible use of AI technologies. This red teaming exercise aligns with those initiatives.
“This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models. Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation,” the White House stated.