Israeli company Irregular, previously known as Pattern Labs, on Wednesday announced raising $80 million for its AI security lab.
Founded by Dan Lahav (CEO) and Omer Nevo (CTO), the company has created what it calls a frontier AI security lab that puts artificial intelligence models to the test.
Irregular can test models to determine their potential for misuse by threat actors, as well as their resilience to attacks targeting them.
Irregular, which claims it already has millions of dollars in annual revenue, says it’s building tools, testing methods, and scoring frameworks for AI security.
The company says it’s “working side by side” with major AI companies such as OpenAI, Google, and Anthropic, and it has published several papers describing its research into Claude and ChatGPT.
“Irregular has taken on an ambitious mission to make sure the future of AI is as secure as it is powerful,” said CEO Lahav. “AI capabilities are advancing at breakneck speed; we’re building the tools to test the most advanced systems way before public release, and to create the mitigations that will shape how AI is deployed responsibly at scale.”
The cybersecurity industry regularly demonstrates attacks against popular AI models. Researchers recently showed how a new ChatGPT calendar integration can be abused to steal a user’s emails.
Related: RegScale Raises $30 Million for GRC Platform
Related: Security Analytics Firm Vega Emerges From Stealth With $65M in Funding
Related: Ray Security Emerges From Stealth With $11M to Bring Real-Time, AI-Driven Data Protection
Related: Neon Cyber Emerges From Stealth, Shining a Light Into the Browser