The Security Interviews: Jason Nurse, University of Kent

Jason Nurse, reader in cyber security at the University of Kent, firmly believes the blame for cyber weaknesses should shift towards how systems are built rather than the users who operate them, stating: “Five or 10 years ago, security experts were saying the user is the weakest link, that users are stupid. Thankfully, I don’t really hear that any more – or, if I hear it, people call it out, pointing out that if the system was built better, the user wouldn’t have to find a workaround. That’s important to understand.”

Nurse conducts research on a variety of cyber security issues affecting organisations and governments, but much of his work focuses on something often missing from conversations, reports and research papers about cyber security: psychology.

For all the analysis of cyber attacks and incidents, it still tends to focus on CVEs and cyber criminal gangs rather than the people who are affected when ransomware forces hospital appointments to be cancelled or a cyber attack results in empty supermarket shelves. Nurse is eager to voice a different perspective on what matters in cyber security.

“For years, people have focused on security as technology, making the technology better, making it more advanced. I completely agree that technology plays a key role, but it’s also key to focus on technology as it relates to humans and people,” he says. “How we engage with technology, how we’re able to explore people’s behaviour, how we’re able to understand how people are constantly being socially engineered and exploited – and then of course, what do we do about that?”

In addition to his work in academia, Nurse is director of science and research at CybSafe, a cyber security company that aims to reduce cyber risk in the workplace by measuring and influencing security behaviours. CybSafe partnered with the US National Cybersecurity Alliance to produce its annual Cybersecurity attitudes and behaviours report, surveying thousands of people to examine how behaviours and attitudes shape security risk.

Key findings from the report include that 44% of people find staying safe online intimidating, while only 48% of respondents said they had completed cyber security training at work in the past year. The most common reasons given for not completing training were “I already know enough” (23%) and “too busy” (22%). Could it be that there’s something wrong with cyber security training?

Cyber security vs users

During his presentation at Infosecurity Europe 2025, Nurse polled the audience of cyber security professionals on their preferred method of cyber security training. The response was overwhelmingly in favour of games and gamification.

Nurse then revealed the results of the survey, which came as a shock to the audience: of the training methods on offer, games and gamification was by far the least popular among users, with only 11% stating it was their preferred way of learning about cyber security.

Meanwhile, the most popular way users say they want to receive cyber security training is via video or written content – which the cyber security professionals in the audience were the least likely to vote for. If the people issuing the training aren’t catering to user needs, it’s no wonder that users don’t take in cyber security awareness training.

“It’s a complicated issue,” says Nurse. “There’s this disparity between what we think is best and then what the users think is best. I might think I know what’s best, but that might not be what’s best for you. Are users’ perceptions of what works for them correct?” 

Much of corporate cyber security training remains based around warnings about things that can go wrong, such as phishing scams or payment fraud. It doesn’t help that many organisations still treat a mistake as something the employee should be punished or mocked for – especially when phishing tests are actively trying to deceive employees. “People just want to get on with their jobs,” adds Nurse.

Returning to the concept of cyber security professionals placing blame on users for incidents, Nurse is keen to stress that this isn’t the right attitude, especially when so much of the internet, and of internet-connected technology and applications, has been built with security as an afterthought – bolted on later, if it’s bolted on at all.

“When the internet was first built, security wasn’t a priority – it was added afterwards. And we still see that with certain new technologies, security is added on after. But we’re getting better at that with secure-by-design and similar concepts, making a difference in how technology is being built,” he says.

However, what also needs to be taken into account is the risk of making something too secure. If users find an ultra-secure product too difficult to use, they’ll look for ways around it – and, as demonstrated by shadow IT, where employees use their personal cloud accounts rather than authorised enterprise accounts, this can bring its own risks.

“We can do more with ensuring systems are built with users in mind. There needs to be a balance between security, functionality and usability – those three components are really critical,” says Nurse. “If something is too secure, it risks being unusable, which means you may have the issue of workarounds. If something is of use but not secure, of course it’s going to be exploited, so there has to be a balance.”

Nurse suggests that the answer to this could be involving users in the development cycle, testing the new application or product to ensure that it truly is built with them in mind, while also ensuring security, functionality and usability are balanced well.

“That’s critical because having users involved ensures that you have that touchpoint. If you involve them throughout, you can try to ensure that what’s built fits the user’s needs and hits the requirements around security and privacy,” he adds.

User safety and responsible usage in the age of AI

Several new technologies have emerged over the past two decades that all made the mistake of not considering user security and safety from the start. Think of social media, smartphones and the internet of things (IoT), all of which emerged, only for security issues to be considered once they were already in the wild. This cycle is still ongoing, and is now arguably moving faster than ever before.

“We’ve seen it time and time again and now it’s happening with AI,” says Nurse. “And AI is moving so quickly, it’s really blowing people away with the impact it has and the impact it will have, and it exposes us to potentially increased risk.”

It’s that speed of adoption which makes managing the risks around AI difficult. Whether it’s through approved enterprise solutions or employees using their personal ChatGPT accounts, AI is in the workplace and wider society.

But many users aren’t thinking about the potential security and privacy risks around it. People are entering sensitive business information into AI tools to help them with their work – yes, it helps with efficiency, but given the black box nature of so many AI models, this could put businesses at risk of breaches or worse.

For Nurse, it comes back to the human level, ensuring that people understand what AI is, how it works and the potential risks around it – and encouraging responsible usage.

“People are just using AI like any other tool without properly thinking about the consequences or if they should be using it in the way they do,” he says. “It’s a domain that we really need to focus on – the risk in the workplace, the risk in the personal space – and there’s lots to be unpacked around what’s safe AI use and what’s ethical AI use.”

However, he’s also keen to stress that the burden of managing risk shouldn’t be left to the users. Far from it – the AI companies must take responsibility as well by placing appropriate guardrails and safety measures on their products.

“Guardrails is a really interesting topic,” says Nurse. “Some AI models have better guardrails than others. If you ask an AI to create a phishing email, some won’t do that, some will create it for you, and some will create it if you circumvent the guardrails around how you ask the question.”

For Nurse, whether it’s AI or cyber security more broadly, the important thing is that those responsible for building and then securing technology and software think about the people who are using it. Without understanding how people use technology, as well as their behaviour and attitudes towards it, it’s going to be difficult to keep people safe and secure.

“We need to spend time and effort on understanding behaviours and better appreciating that behaviour is the outcome of a complex network of variables that interact with each other. Behaviour can be informed by people’s attitudes, culture, opportunities, motivations and social norms – there’s so many things that can inform behaviour that we need to understand, especially with regards to cyber security,” says Nurse. “Understanding those basics is important for how we approach security.”

