CISO Conversations: John ‘Four’ Flynn, VP of Security at Google DeepMind


DeepMind, an AI research laboratory founded in London in 2010, was acquired by Google in 2014. In April 2023, it merged with the Google Brain division to become Google DeepMind.

John Flynn, usually known as ‘Four’, has been DeepMind’s VP of security since May 2024. Before then he had been a CISO with Amazon, CISO at Uber, director of information security at Facebook, and (between 2005 and 2011) manager of the security operations team at Google.

What made him focus on a career in cybersecurity? Cybersecurity wasn’t a common occupation when he graduated, but two life experiences had converged to point him toward it.

First, he was “obsessed with computers from an early age.” He got his first computer when he was 13. “I would spend all day and all night hacking on stuff, teaching myself to code, and trying to solder things onto my computer to make it play the latest game that was beyond its native power.”

Second, he grew up in violent locations. He mentioned he had lived in Nairobi (Kenya had relatively recently achieved independence from Britain following terrorist activity led by the Mau Mau); Liberia (which suffered two civil wars between 1989 and 2003); and Sri Lanka (civil war between the Sinhalese majority and the ‘Tamil Tigers’ from 1983 to 2009). More specifically, he remembers tear gas in the playground and his school getting burned down. 

Physical security was at a premium for the young Flynn. “That and my obsession with computers gradually focused my interest on cybersecurity.” He went on to gain a master’s degree in computer science.

John ‘Four’ Flynn, VP Security at Google DeepMind

It is surprising how many of today’s security leaders first learned about cybersecurity through childhood game hacking. This raises a question – should a security leader be a hacker at heart? It should be said that there are many opinions on what makes a hacker – see the separate Hacker Conversations for examples – but Flynn replied, “If you say that a hacker is somebody who loves to explore and test the limits of new technologies, then the answer is ‘yes’.”

He expanded, “My personal brand of CISO is a very technical one with an engineering background, and that skillset combined with testing limits allows me to bridge the risk side of the equation with the intentions of the developers. It helps us find novel solutions to addressing risk while enabling customers and employees to do what they need to do.”

How, then, did this engineering technologist with a hacker’s mindset end up with one of the world’s leading artificial intelligence research organizations? “It’s really quite simple,” he said. “I’ve always wanted to help people with what I do.”

It’s perhaps worth noting that before he started his cybersecurity career, he was a Peace Corps Volunteer and still lists health and human rights among his interests.

“It’s quite easy to feel that working in cybersecurity can benefit your employer, but it’s less easy to find and feel that what you do is a benefit to humanity at large.” Some years ago, he recognized a fledgling AI was unfolding its wings and would benefit, or at least affect, all of society and not just businesses. 

“This is the most important technology that’s been introduced to humanity in a long time, and there are many questions on how to make it secure and safe. I felt I needed to be part of it – to try to help with that process; and I feel like DeepMind is the single best place in the world to do that. DeepMind isn’t simply trying to invent the future of AI, but to do so in a way that will help and empower humanity in a safe manner. I just had to drop everything and do it.”

He’s really talking less about what we have now (gen-AI and agentic AI) and more about the next big step: artificial general intelligence, or AGI. This is artificial intelligence with the ability to understand, learn, and apply intelligence across different domains. It will effectively be proactive AI, where we are currently restricted to reactive AI. And that will be a whole new ball game in an arena where humanity has yet to understand the social, psychological and economic effects of what we already have with gen-AI.

We wondered, given his interest in human rights, whether he saw any conflict between human rights and artificial intelligence. “I don’t know that I can comment on any conflict,” he said, “but I think the important point is that AGI technology is coming. Many people are working on that. And if I can do my bit to shepherd the technology of the future in a way that’s as safe as possible, I think I will feel good about my contribution.”

Given that current AI still makes mistakes, we would be remiss if we missed this opportunity to challenge a senior officer from a major AI research organization on the subject of AI errors. The common answer is that some mistakes are inevitable since gen-AI is fundamentally a probabilistic engine: it replies with whatever it judges most likely to be the correct response.

But the very existence of ‘probability’ has been questioned. Probability implies randomness: God playing dice with outcomes. In a different but relevant context, Einstein famously insisted that God does not play dice. The underlying suggestion is that ‘probability’ is a term we apply to determinism we do not (perhaps yet) understand.

It’s an important but unresolved question, because it implies that the probability in AI that leads to its errors could be resolved if we understood the determinism underlying that probability: if we knew exactly why an error was made, we could prevent its repetition in the future.

This is the question we put to Flynn: are we disguising our insufficient understanding of how AI works by ‘dismissing’ it as a probability machine?

“I think I would say probabilistic is an apt description,” he replied. “That description sets it apart from historic cybersecurity, which is arguably more deterministic than the novel challenges we face with AI. For example, you can give the same prompt to the same AI and get two different answers. That happens quite frequently with the way AI works. Probabilistic is an easy way to understand this phenomenon. It also lends itself to different ways of thinking about defense against attacks – so I would say that probabilistic is a fair description.”
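
That run-to-run variation comes largely from how outputs are sampled. The following is a minimal sketch, not a description of any DeepMind system: the token names and probabilities are invented purely for illustration, but it shows why the same prompt can produce different completions.

```python
import random

# Toy illustration only (hypothetical numbers, not any production model):
# a language model assigns a probability to each candidate next token and
# then samples from that distribution rather than always returning the
# single most likely option.
next_token_probs = {
    "blocked": 0.55,
    "allowed": 0.30,
    "flagged": 0.15,
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token; temperature reshapes the distribution.

    Raising probabilities to the power 1/temperature and renormalising is
    equivalent to the usual logit-scaling form of temperature sampling.
    """
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# The same 'prompt' (the same distribution) can yield different outputs on
# different runs; this is the nondeterminism Flynn describes.
for _ in range(5):
    print(sample_next_token(next_token_probs, temperature=0.8))
```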

Flynn uses the word probabilistic to differentiate AI applications from traditional, more clearly deterministic computer applications. But an alternative way of looking at the issue would be to describe AI outputs as ‘chaotic’ (in the sense of chaos theory). Chaos theory holds that complex, dynamic systems can be deterministic yet unpredictable, which would make AI unpredictable rather than probabilistic. It’s an attractive idea, since it contains the possibility that if we understood the effect of all the variables that make up the system, we could potentially predict, and ultimately improve, the accuracy of AI. A second implication of chaos theory, however, is that this is unlikely to happen.
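
The distinction is easier to see in a classic toy system than in a neural network. Below is a minimal sketch of the logistic map, a standard chaos-theory example with no connection to any particular AI model: every step is computed exactly, yet a starting difference of one part in a billion produces completely different trajectories within a few dozen iterations.

```python
# Deterministic but unpredictable: the logistic map x -> r*x*(1-x) contains
# no randomness at all, yet (with r = 4) two starting points differing by
# one part in a billion diverge completely within a few dozen steps.
def logistic_trajectory(x, r=4.0, steps=40):
    values = []
    for _ in range(steps):
        x = r * x * (1 - x)
        values.append(x)
    return values

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)   # a tiny perturbation of the start

for step in (9, 19, 29, 39):
    print(f"step {step + 1:2d}: {a[step]:.6f} vs {b[step]:.6f}")
```

By around step 30 the two runs bear no resemblance to each other even though nothing random ever happened, which is the sense in which ‘deterministic’ and ‘predictable’ come apart.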

An open question today is whether the advent of AI is changing the role of the modern CISO. Cybersecurity originally emerged as a separate discipline from information technology – and early CISOs tended to be technologists and engineers. The discipline carried that history in its original name: IT security.

As malicious threats grew in number and complexity, the need for separate cybersecurity expertise became apparent; but it was still largely grounded in IT. The threats, however, were rapidly becoming ‘whole of business’ threats rather than simply threats to computer systems. No part of the business would be untouched by cybersecurity, which in turn forced CISOs to understand business priorities.

So CISOs were forced to expand their expertise and become businesspeople as well as technologists. ‘Businessperson’, however, is a simplistic summation. To integrate technology and security across the whole business, CISOs also need to be psychologists. They need to understand business leaders and employees (and be able to talk coherently to both); to understand how and where attackers might strike; to predict how staff might react to restrictions on their workflows; and to be subtle enough to get what they need from the board without losing their job.

So, the modern CISO must be both a technologist (engineering) and a psychologist (business). Will this change again with the advent of AI? Does today’s CISO now also need to be a scientist?

Flynn is a technologist by academic training (computer science) and accepts the role of psychology. Internally it is an essential trait for all leaders, and externally it is useful in tracking adversaries. But he doesn’t consider himself to be a scientist even though the role increasingly involves science. “I don’t try to pretend to be one myself, but I have scientists on my team.”

As for the science of AI, he said, “I had become so passionate about AI over the last several years that I obsessively taught myself, much of it on the side. I found that coming into DeepMind, there was more learning to do, but a year on from starting in the role, I feel comfortable both on the security side and on the research side.”

The CISO may not need to be a scientist, but a scientific mindset should be added to technology and psychology – and what is missing at the outset must be learned on the job. This is somewhat confirmed by what he considers to be the most important personality characteristic for a CISO.

“Humility is the first thing that comes to mind,” he replied. “In security, and especially in AI security, we need to contend with a lot of unknowns, and we’re still working our way through some of the solutions as a society. I’ve seen many leaders in security where hubris gets in the way of seeing what is and what isn’t a good solution to a problem. I think humility is an important trait in all leaders – and especially in security.”

Humility seems to be a natural part of Flynn, perhaps partly due to having survived a surprisingly dangerous youth. But advice received from mentors over the course of a career is also important.

“Probably the best advice I ever received is this,” he said. “The role of a leader is really two things. Firstly, to hire the best people in the world; and secondly, to make sure they have the right context to do their jobs effectively. If you do those two things, a lot of problems are solved or prevented.”

Too often he has seen only the first part. Leaders hire great people but then leave them to work out what to do on their own. “They end up siloing information in their own minds; so, I make an effort to pass information down to my team just as much as I do to hire the best people out there. It’s worked for me.”

CISOs aren’t simply mentees on their journey – they are mentors on their arrival. “I think the only thing I would add to what we’ve already talked about,” he said, “is an anti-pattern I see in many security practitioners: they lack basic curiosity.” (An anti-pattern is a common but frequently ineffective and potentially counterproductive response to a typical problem. A lack of curiosity is an anti-pattern for a successful career in cybersecurity.)

“If interested in being at the top of your field in the longer term,” he continued, “you should spend your nights and weekends learning and playing with this technology – you shouldn’t wait for somebody to teach you.”

He thinks security has somehow lost some of this driven curiosity. “In the beginning, when I started, the only people that were crazy enough to do this job were people who were obsessed and would just spend nights and weekends trying to hack things or learn how to break things or learn how protocols worked. And I guess I sometimes feel we’ve lost some of that over the years, that base level of just passion and curiosity.”

Passionate curiosity, he suggests, is a route to success. “If people are not passionate and trying to understand all the details, they generally aren’t as successful as other people who obsess over the details to understand everything from top to bottom. The best people in any field are the ones with insatiable curiosity over anything new – and this emerging AI era lends itself to that driving curiosity about computing that existed 25 years ago.”

An important insight we can all gain from top CISOs, given their wide-angle view of what exists and what is coming, is an informed view of current and imminent threats. Flynn believes it is less the threats themselves than their delivery that is changing. “Yesterday’s threats are still present today – elite nation state attacks, extortion, IP theft and so on,” he said. “And they’ll continue tomorrow. But my focus is keeping an eye on how AI enhances attackers’ ability to conduct their attacks.”

The likelihood is that cybersecurity will become a battlefield where defensive use of AI seeks to mitigate the malicious use of AI. So, is AI a threat or a benefit to cybersecurity? “Both,” said Flynn. “On the threat side, it will enhance people’s ability to conduct cyberattacks.” There will be more, and more sophisticated, attacks as a matter of course.

“On the flip side,” he continued, “it is important to note that AI is a big part of the solution to both the problems that it introduces, and the legacy problems that have been historically difficult to counter with traditional security. For example, some of the products we’re working on include the detection of vulnerabilities in code, getting those vulnerabilities fixed automatically, and creating more secure code out of the box. The intention is that when people have code generated by an AI system, it is intrinsically more secure than traditional human coding.”

In short, AI not only introduces new risks, but is also a major component of the solution to both those risks and the historical risks we’ve been working on for many years. 

Related: CISO Conversations: Maarten Van Horenbeeck, SVP & Chief Security Officer at Adobe

Related: CISO Conversations: Jaya Baloo From Rapid7 and Jonathan Trull From Qualys

Related: CISO Conversations: LinkedIn’s Geoff Belknap and Meta’s Guy Rosen

Related: CISO Conversations: Nick McKenzie (Bugcrowd) and Chris Evans (HackerOne)


