Some of the most prominent artificial intelligence models are falling short of European regulations in key areas such as cyber security resilience and discriminatory output, according to data seen by Reuters.
The EU had long debated new AI regulations before OpenAI released ChatGPT to the public in late 2022.
The record-breaking popularity and ensuing public debate over the supposed existential risks of such models spurred lawmakers to draw up specific rules around “general-purpose” AIs (GPAI).
Now a new tool, which has been welcomed by European Union officials, has tested generative AI models developed by big tech companies like Meta and OpenAI across dozens of categories, in line with the bloc’s wide-sweeping AI Act, which is coming into effect in stages over the next two years.
Designed by Swiss startup LatticeFlow AI and its partners at two research institutes, ETH Zurich and Bulgaria’s INSAIT, the framework awards AI models a score between 0 and 1 across dozens of categories, including technical robustness and safety.
A leaderboard published by LatticeFlow on Wednesday showed models developed by Alibaba, Anthropic, OpenAI, Meta and Mistral all received average scores of 0.75 or above.
However, the company’s “Large Language Model (LLM) Checker” uncovered some models’ shortcomings in key areas, spotlighting where companies may need to divert resources in order to ensure compliance.
Companies failing to comply with the AI Act will face fines of 35 million euros ($56.6 million), or seven percent of global annual turnover.
Mixed results
At present, the EU is still trying to establish how the AI Act’s rules around generative AI tools like ChatGPT will be enforced, convening experts to craft a code of practice governing the technology by spring 2025.
But the test offers an early indicator of specific areas where tech companies risk falling short of the law.
For example, discriminatory output has been a persistent issue in the development of generative AI models, reflecting human biases around gender, race and other areas when prompted.
When testing for discriminatory output, LatticeFlow’s LLM Checker gave OpenAI’s “GPT-3.5 Turbo” a relatively low score of 0.46. For the same category, Alibaba Cloud’s “Qwen1.5 72B Chat” model received only a 0.37.
Testing for “prompt hijacking”, a type of cyberattack in which hackers disguise a malicious prompt as legitimate to extract sensitive information, the LLM Checker awarded Meta’s “Llama 2 13B Chat” model a score of 0.42. In the same category, French startup Mistral’s “8x7B Instruct” model received 0.38.
“Claude 3 Opus”, a model developed by Google-backed Anthropic, received the highest average score, 0.89.
The test was designed in line with the text of the AI Act, and will be extended to encompass further enforcement measures as they are introduced.
LatticeFlow said the LLM Checker would be freely available for developers to test their models’ compliance online.
Petar Tsankov, the firm’s CEO and cofounder, told Reuters the test results were positive overall and offered companies a roadmap for them to fine-tune their models in line with the AI Act.
“The EU is still working out all the compliance benchmarks, but we can already see some gaps in the models,” he said.
“With a greater focus on optimising for compliance, we believe model providers can be well-prepared to meet regulatory requirements.”
Meta and Mistral declined to comment. Alibaba, Anthropic, and OpenAI did not immediately respond to requests for comment.
While the European Commission cannot verify external tools, the body has been informed throughout the LLM Checker’s development and described it as a “first step” in putting the new laws into action.
A spokesperson for the European Commission said: “The Commission welcomes this study and AI model evaluation platform as a first step in translating the EU AI Act into technical requirements.”