Poppy Gustafsson describes the past 10 years as “exhilarating”, but when asked whether, on helping to set up Darktrace in 2013, she ever imagined in her wildest dreams that she would one day have the ear of prime ministers and presidents, she says she would much rather look forward than back.
“You don’t really reflect, you tend to be looking forward. But yes, it is amazing. We’re a publicly listed company now, that’s a huge achievement, and we’re contributing to this conversation around AI [artificial intelligence], but you’re always looking to the next 10 years and everything you need to achieve and deliver,” she says. “It’s not as if you ever stop to think, ‘Right, smashed that, nailed it.’”
On this particular breezy autumn day at the end of October, as we sit down to chat, Gustafsson has a lot to look forward to, for it is the eve of the much-talked-about AI Safety Summit at Bletchley Park, convened by prime minister Rishi Sunak, at which the government hopes to begin to establish global consensus on a shared approach to mitigating the risks of AI.
As CEO of one of the UK’s most prominent pioneers in the field, sitting squarely at the intersection of cyber security and AI, Gustafsson has seen the stars truly align for Darktrace in 2023, and she has a lot to say on these issues. She is acutely aware of the responsibility Darktrace bears, and that she now has the opportunity to shape and guide the conversation in a meaningful way.
“This is what we are here to do, so I feel really proud to be part of an industry that is allowing businesses to embrace all the innovation that will come out of AI, but do so securely,” she says. “We’re like the seatbelt that allows them to explore that journey ahead of them safely.
“I think it’s natural as a species that we are quite good at imagining risks, and we’re less good at imagining opportunities. [But] my view is I’m a massive AI advocate. I think it’s a hugely exciting technology. It underpins everything we do at Darktrace, and I’d love to see society adopt more of it,” says Gustafsson.
“But technology is always adopted quickly when people feel it’s safe and secure, [so] the fastest way to get that innovation into people’s hands is by making it safe. I don’t think technology adoption and safety and regulation are competing. I think they’re two things that go hand-in-hand to make sure people can adopt AI securely.”
In future, she says, the cyber community may look back on 2023 as the dividing line between “pre-GenAI” (generative AI) and “post-GenAI”, and she is certain that the pressure on security teams will only ramp up as novel, AI-enabled threats become more widespread. “I like to say the novel will become the new normal,” she says. “That’s what we were designed to protect against.”
Security, control, trust
Arriving at Bletchley Park, the Buckinghamshire country estate – since consumed by the suburbs of Milton Keynes – where the Allies cracked the Nazis’ Enigma and Lorenz ciphers, Gustafsson brings a threefold agenda to the table.
Her highest-priority item is, fairly predictably, cyber security: driving forward the conversation around not only securing the datasets that AI models are trained on, but also building in privacy by design, so that models are trained only on data they have meaningful consent to use in that way.
“You can’t just copy every article written by a journalist from the last 10 years and then create content as though it was seeming to be from them without consulting them. They should be part of that process,” she says.
Hand in hand with cyber security goes control. At its best, she explains, AI will enhance our own skills, doing some of the heavy lifting to help us accomplish tasks better and increase our productivity, but it will be vital for humans to retain an element of control tailored to the task the AI is performing.
“There needs to be a human failsafe mode, such that if the worst was to happen, AI can revert to a human who can take control and intervene if they think they need to. We can’t afford to entirely outsource our challenges to AI and lose our skills,” she says.
The third element in this triad brings us to trust. “The technology that is most readily adopted will be the one that is most trusted – I do firmly believe that,” says Gustafsson.
“If you think about how we build trust as humans, it’s by building relationships and storytelling and all that good stuff. AI needs to do the same – it needs to articulate how it comes to decisions that it provides in context, bringing the human along so they find a level of relatability.”
This raises the question of how trust can be built in a meaningful sense, avoiding the appearance of the AI marking its own homework and responding to prompts with something akin to, “Of course I’m trustworthy, Dave, but I still can’t open the pod bay doors”.
Gustafsson’s approach to this challenge is not a hypothetical one; it is one Darktrace has direct experience of addressing.
“Our product is taking actions within a business entirely autonomously to interrupt in-progress cyber incidents … [so] the security teams really have to trust that the technology is doing the right thing. We had to go on that journey,” she says.
“A lot of it was around how you articulate what the AI is doing and why it has come to the decisions it did. For us, it’s all in the UI [user interface]. We recruited a games designer really early on to visually show everything the technology is doing … And when the human looks at it, they think, ‘Yes, I can see all of those indicators that come together to build a bigger picture that something unusual is happening. I can relate, I have context.’ That’s what I mean, it’s showing your working.”
Keeping AI inclusive
Ahead of the AI Safety Summit, a range of voices from civil society – including the likes of the Trades Union Congress (TUC), the Open Rights Group (ORG), Mozilla, Amnesty International and Tim Berners-Lee’s Open Data Institute – criticised the government for excluding the groups most likely to be affected by AI, branding the event a closed shop dominated by big tech and calling it a “missed opportunity”.
Nobody can dispute that bringing a range of voices to the table to ensure the future of AI is diverse, ethical and committed to social justice is the right thing to do. That said, Gustafsson does not entirely accept the charge that the event will fail to centre diverse voices.
“I think the summit has done a good job of pulling together not just business, but the research community, the governments, the regulators. So I do think there is diversity in the voices that are there,” she says.
“But I [also] think that when it comes to AI more broadly, it [diversity] is about skills, and I would really push back against an assumption that the AI skills of the future are what we’re familiar with as the technology skills of the past.”
Gustafsson, who in 2021 was named as Computer Weekly’s Most Influential Woman in UK Tech, is a firm believer that one of the key ingredients of a successful technology company is a broad skillset within its workforce – not just coders and technologists, but designers, historians, mathematicians, scientists and writers. This is something she has tried to bring to bear within Darktrace.
People need to be “less apologetic” when their skills are not of a technical nature. “Just because you don’t understand the ones and zeros doesn’t stop you thinking about how you can use that technology to solve different problems,” she said at the time.
She points now to some of the skills that will be needed to engender trust in AI systems among humans – communication, storytelling, and so on – the very skills for which she hired gaming creatives at Darktrace.
“There is such a broad spectrum of skills there that are no longer just going to be writing code because, let’s be honest, AI will probably write its own code,” she says. “That’s what I would love to see, whenever we talk about AI skills in the future, let’s not assume that they’re computer science degrees.”
Conversation starter
Overall, Gustafsson says she is confident that some progress can be made at the AI Safety Summit. “I’m pleased that we’re just having a conversation, I’m pleased that it feels very collaborative,” she says.
“There is going to be international collaboration needed here because technology doesn’t stop at the borders of countries, everyone is connected. I’m pleased that this is a conversation we are having.
“I want to make sure that we are thinking about the opportunities as well as the risks. It’s right that we cover them [risks] off, but let’s not stymie innovation. Regulations should be there to support us and make it safe and look after society, but we can’t let that get in the way of us taking advantage of the opportunities.”
She concludes: “It’s our future selves that we’re protecting. We don’t want to be in a position where we look back in 10 years and think, ‘Goodness me, if only we’d done that back then.’ And it’s difficult because none of us have a crystal ball, but just starting the conversation and reflecting and thinking about it is really important.
“Technology is always unpredictable. It’s difficult to regulate when you don’t know which direction it’s going to head in, but if we were to know what it would look like if it were to start to go wrong and understand that now, then if we see the early signs of that we will hopefully be able to respond more quickly to course-correct.”