The security interviews: Exploiting AI for good and for bad


Artificial intelligence (AI) is on the rise. People are discussing the onset of Terminator-style AI that obliterates mankind – and not only as a joke. Those with less apocalyptic viewpoints consider the risk of AI displacing a huge tranche of work that has traditionally been carried out by people, leading in turn to civil unrest, an AI underclass and a Mad Max-style breakdown in societal values.

Regardless of your preferred movie analogy, there is no doubt AI will automate many things – both good and bad.

Max Heinemeyer, chief product officer at Darktrace, believes AI is one of those phrases that means different things to different people. “AI could be used by attackers, by the bad guys and bad girls,” he says. From a risk perspective, he adds: “AI brings a lot of automation to the table.”

Imagine a security operations centre with analysts watching attacks in real time, identifying what is being targeted and immediately racing to block the attack vector. AI automates both sides of that picture, attack and defence: “We see that in the defence space where you can augment the human to detect attacks. You can then respond to them very easily with machine learning systems.”

Heinemeyer used to be an ethical hacker. Discussing what happens at Darktrace, he says: “Penetration testers and the red team think a lot about how an attacker could use machine learning to automate their processes to become more efficient to scale up their attacks and make them more successful.”

But now AI has moved beyond automation. Looking at large language models, which some industry experts see as the tipping point that ultimately leads to wide-scale AI adoption, Heinemeyer believes an AI capable of writing code offers attackers the opportunity to develop much more bespoke, tailored and sophisticated attacks. Imagine, he says, highly personalised phishing messages with flawless grammar and no spelling mistakes.

For its customers, he says, Darktrace uses machine learning to learn what normal looks like in business email data: “We learn exactly how you communicate, what syntax you use in your emails, what attachments you receive, who you talk to, and when this is internal or external. We can detect if somebody sends an email that is unusual for you.”
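The pattern Heinemeyer describes is essentially per-user anomaly detection over email metadata. The sketch below is not Darktrace’s system; it is a minimal illustration, using scikit-learn’s IsolationForest on invented features (send hour, recipient count, attachment and external-recipient flags), of how a model of “normal” can flag an out-of-character message.

```python
# Minimal sketch of per-user email anomaly detection. NOT Darktrace's
# implementation; just the general idea of learning one user's baseline
# and flagging deviations. Features are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy history for one user: [hour_sent, num_recipients, has_attachment, is_external]
history = np.array([
    [9, 1, 0, 0], [10, 2, 1, 0], [11, 1, 0, 0],
    [14, 3, 1, 1], [15, 1, 0, 0], [16, 2, 0, 1],
    [9, 1, 1, 0], [10, 1, 0, 0], [13, 2, 1, 1],
    [11, 1, 0, 0], [15, 1, 1, 0], [14, 2, 0, 1],
])

# Fit a model of this user's normal sending behaviour
model = IsolationForest(contamination=0.05, random_state=0).fit(history)

# Score a 3am email to 25 recipients, external, with an attachment
incoming = np.array([[3, 25, 1, 1]])
verdict = model.predict(incoming)  # -1 = anomalous, 1 = normal
print("anomalous" if verdict[0] == -1 else "normal")
```

A production system would learn far richer features continuously, but the shape of the approach is the same: learn each user’s baseline, then score deviations from it.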

Sophisticated attacks are becoming trivial to create

A large language model such as ChatGPT is trained on vast amounts of text from the public internet. The implication is that it has read people’s social media profiles, seeing who they interact with, their friends, and what they like and do not like. Such AI systems can build a remarkably detailed picture of someone from the publicly available information that can be gleaned across the web. This is an area Heinemeyer and the researchers at Darktrace have begun investigating: “We gathered some data since the ChatGPT boom in December last year when it reached one million users.”

Darktrace discovered that email attacks are becoming much more sophisticated and personalised. Heinemeyer says: “It’s not just difficult for humans to spot these scams and attacks.” It is becoming impossible, he warns: “What do you train people on if you tell them to look out for spelling mistakes and generic scams? Those days are over because there’s so much contextual information now available on the internet.”

The availability of large language models trained on public internet data also lowers the technical skill required to create targeted attacks, as Heinemeyer explains. A would-be attacker can simply ask an AI to spearphish a business, and the model breaks that goal down into a series of tasks.

“I need to understand who works there. I can use LinkedIn to find that out or Facebook. I need to gather contacts of employees from their social media profiles. I then need to write a bespoke email to get an employee to click on a link and I need to create a piece of malware that hasn’t been seen before. As an attacker, I would have to do all of these things myself which requires certain skill. The technology is still evolving and has its flaws and mistakes, but now it’s so much easier to create targeted attacks,” Heinemeyer warns.


While he does not want to paint a Terminator scenario for AI, Heinemeyer does believe a paradigm shift is happening, especially in email attacks. There are a number of areas the wider enterprise software industry must start taking seriously. For example, everyone has received an email from an expense management or HR system. And while two-factor authentication is available in many enterprise products for user authentication, most enterprise software relies on the user’s internal email address, which hackers can quite easily figure out. It is becoming trivial to create phishing attacks using email messages that look as if they were generated by commercial enterprise software.

“It’s all over the place,” says Heinemeyer. And even if internal systems are secure, attackers can and will target supply chains and use business partners to circumvent internal security measures. “I think you always want to have defence in depth,” he says. “You want to train your employees – they are the last line of defence.”

He urges industry leaders to improve the IT systems that are being rolled out to users: “You can’t put pressure on humans to spot every invoice scam and every internal system scam. How do you counter these attacks?”
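One concrete counter-layer is stricter sender authentication at the email gateway, so that mail spoofing an internal “expense system” or “HR system” address never reaches the employee. The sketch below is a minimal illustration, not a complete control: it uses the dnspython package to look up a domain’s published DMARC policy, which a gateway can use to quarantine or reject spoofed mail. The domain name is a placeholder.

```python
# Minimal sketch: look up a sending domain's published DMARC policy.
# A mail gateway can use this to quarantine or reject mail that spoofs
# the domain. Requires the dnspython package; the domain below is a
# placeholder for illustration only.
import dns.resolver

def dmarc_policy(domain: str) -> str | None:
    """Return the domain's DMARC policy (the p= tag), or None if absent."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published
    for record in answers:
        txt = b"".join(record.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            for tag in txt.split(";"):
                key, _, value = tag.strip().partition("=")
                if key.lower() == "p":
                    return value.strip()  # "none", "quarantine" or "reject"
    return None

if __name__ == "__main__":
    policy = dmarc_policy("example.com")
    print(f"DMARC policy: {policy or 'not published'}")
```

In practice this sits alongside the SPF and DKIM checks most gateways already perform; the point is that layered technical controls reduce how much of the burden falls on the employee.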

AI has the potential to deploy highly personalised attacks at scale. For Heinemeyer, the industry needs to develop AI systems that understand users well enough to spot the things that do not look quite right, or that are at odds with normal behaviour.


