Nicole van der Meulen is cybersecurity innovation lead at SURF. Previously she worked as a Senior Strategic Analyst and Head of the Strategy & Development team at the European Cybercrime Centre (EC3) at Europol. She also held a variety of posts in the area of cybercrime and cybersecurity at the Dutch Banking Association, RAND Europe, VU University Amsterdam and the Dutch Ministry of Security & Justice. She holds a doctorate in law from Tilburg University and a master’s degree in political science with specializations in comparative politics and international relations from VU University Amsterdam. Her primary publications deal with digital identity theft and cybersecurity policy.
Can you tell us a bit about your background in cybercrime and cybersecurity, particularly your previous role at Europol? How have your experiences shaped your current approach to addressing cybersecurity challenges, especially regarding impersonation fraud?
I started working on identity theft and investigating how different societal actors indirectly facilitate it – how they indirectly create opportunities for criminals to carry it out. I worked for the Dutch government in what was originally the Computer Emergency Response Team and later became the National Cyber Security Centre. There I was in the knowledge and expertise centre, where we did the first cybersecurity threat assessment in 2011. I started looking more broadly at threats, and that has been my main focus since – how threats are evolving, how criminals are adapting their methods based on what is happening in society. Then I ended up at Europol, where I was focused on threat assessment but approached it from a law enforcement perspective, working on the IOCTA – the Internet Organised Crime Threat Assessment – as well as with industry partners. After four years, I came to the conclusion that the changes that occur, at a strategic level, are not that significant. We face a lot of the same threats year in, year out. And law enforcement is at the very end, because a lot of the time they get involved after somebody is already a victim. Their reach is limited – they do very important work, but it is a very small piece.
You mentioned that despite the advancements in AI, there is no such thing as a new threat—only new ways to execute the same threats. Could you elaborate on this perspective? What does it imply for organizations in terms of their cybersecurity strategies?
When I was working for RAND Europe in Cambridge, almost 10 years ago, I did a study for the LIBE committee of the European Parliament. It was a comparative study of different threat assessments, and I created a model where you can connect every threat to the CIA model – Confidentiality, Integrity, and Availability. Everything starts with that; you can have different variants, such as external access or insider threats. Since then, I haven't seen anything new. At the end of the day, criminals want to gain access – to an account, to a system, or to information – and that stays the same. They will consistently look for that. The changes happen mostly in the methodology: how they gain this access and what they do afterwards. But the core won't fundamentally change. In the past we considered something new because it was, in a way, revolutionary – I think ransomware was a bit revolutionary because of the encryption element and shutting the actual owner out. For me, that was the last new thing. It will be curious to see as we move ahead – of course we anticipate adversarial machine learning. That might be new in the sense of unforeseen consequences. But other than that, if you boil it down to the essentials, it is still the same as what we have seen before. Especially in the last five years, I have not seen anything surprising. And sometimes we want to call something new or sophisticated to justify that we didn't have a response to it.
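To make the idea concrete, here is a minimal sketch of the kind of mapping she describes: however a threat is labelled, it reduces to an impact on Confidentiality, Integrity, or Availability. The threat names and their classifications below are illustrative assumptions, not the model from the original study.

```python
# Illustrative sketch: reducing named threats to the CIA triad.
from enum import Enum

class CIA(Enum):
    CONFIDENTIALITY = "confidentiality"  # unauthorized access to information
    INTEGRITY = "integrity"              # unauthorized modification or falsification
    AVAILABILITY = "availability"        # denying access to the rightful owner

# Hypothetical mapping: "new" threats still land on the same three pillars.
THREAT_MODEL = {
    "phishing":           {CIA.CONFIDENTIALITY},
    "data breach":        {CIA.CONFIDENTIALITY},
    "website defacement": {CIA.INTEGRITY},
    "deepfake fraud":     {CIA.INTEGRITY},  # falsified content
    "ddos":               {CIA.AVAILABILITY},
    "ransomware":         {CIA.AVAILABILITY, CIA.CONFIDENTIALITY},
}

def classify(threat: str) -> set[CIA]:
    """Reduce a named threat to its core CIA impact."""
    return THREAT_MODEL.get(threat.lower(), set())

for name in THREAT_MODEL:
    print(f"{name:20s} -> {sorted(c.value for c in classify(name))}")
```

The point of the exercise is the one she makes in the interview: the left-hand column keeps growing new names, while the right-hand side never changes.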
Some impersonation fraud cases have complex motives beyond monetary gain, such as manipulating the public image of an individual or organization or even political agendas. Which types do you believe are currently the most prevalent, and which are the most dangerous in your opinion?
It is difficult to say what is prevalent – there are different sources – but I would say deepfake pornography is going to be the main danger: dangerous for the individual because of the element of shame. If you look at the problems that already exist with sextortion, and sextortion of teenagers, there have been cases of teenagers taking their own lives. So if we talk about danger to life, I think that is the main one, because of the impact of deepfake pornography on an individual – we have also seen this in South Korea, where there was a massive wave of deepfake pornography in schools. The second angle is politicians and elections. That is very dangerous because it has an impact where the truth is basically lost. That goes to trust – trust in society, trust in what is being said. If people believe a fake video, they start to live in a different reality, and you can no longer have an honest discussion or an honest conversation. And that is dangerous because, how do you bring back the trust? I don’t have an answer to that.
As deepfake technology continues to advance, there may come a time when distinguishing between authentic and fake content becomes increasingly difficult. How well do you think people currently recognize deepfakes, and how do they do it? What strategies or tools can individuals use to detect such content before we reach a point where it becomes nearly impossible?
People say there are certain things by which you can recognize content as AI-generated – like the hands, or details in the eyes. So there are some elements, but that also depends on what tools people use. This is also the idea of “cheapfakes”, and you can still find some amateurish ones, but the tools are already very good. So recognizing AI by these elements is, I think, already outdated advice. You need to evaluate content critically and contextually – could this be true, could this person have said this, is this authentic? And then the question is: can you ask that of society at large? Not everybody has that capacity, or maybe some do not want to do it. And people do surprise each other, so something like “this person would not have said this” is not necessarily 100% reliable. On the flip side, people can also start denying authentic content – authentic video evidence. Should we watermark authentic content, or embed something in it, to identify it? That is the thought process, but that’s the whole arms race.
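The watermarking idea she raises can be read as a provenance scheme: instead of trying to detect fakes, a publisher cryptographically signs authentic content so anyone can verify it later. Below is a minimal illustrative sketch in Python using Ed25519 signatures; the key handling and file contents are hypothetical, and real-world efforts such as the C2PA standard are far more involved.

```python
# Sketch of content provenance: sign authentic media, verify later.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher (e.g. a news outlet) holds the private key;
# the public key is distributed so anyone can verify.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_content(media_bytes: bytes) -> bytes:
    """Sign a hash of the media so the signature can travel with the file."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)

def verify_content(media_bytes: bytes, signature: bytes) -> bool:
    """Check that the content is the exact bytes the publisher signed."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

video = b"...original video bytes..."   # placeholder content
sig = sign_content(video)
print(verify_content(video, sig))                # True: authentic
print(verify_content(video + b"tampered", sig))  # False: altered or fake
```

Note the asymmetry she describes as an arms race: a valid signature proves a file is authentic, but the absence of one proves nothing, and fabricated content can simply circulate unsigned.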
The concept of the ‘reverse burden of proof’ you mentioned is quite bleak. What recommendations do you have for individuals and organizations to protect themselves from such situations where proving innocence can be challenging?
For organizations, it is very important to have an investigation process. Of course, some individuals may claim innocence when that is not the case, but there might also be people who are innocent, and you need a process that investigates and helps them. As individuals, I think it is very difficult to prevent these things – there is a lot of advice on what to look out for, like recognizing scams or abuse of your own information. You may have full control over what you share online, but there is also a lot of your data in government and company systems, and there are continuous breaches – which you have no control over. The only thing you can do is be vigilant and check whether existing accounts have been abused. Some information can be changed after a breach – what you might call temporary information, such as passwords and credit card numbers – but a lot of information is permanent: your name, your social security number, and so on. Your options in that regard are rather limited. The only “saving grace” is that you are probably not that interesting, because how much can a criminal gain from one individual? Unless the attack is done at a large scale, where you are just one of the bunch. But you have no control over that.
Scammers often exploit a sense of urgency, and with advancements in AI, real-time deepfakes have become increasingly convincing. To what extent do you believe the information we share on social media contributes to our vulnerability to scammers, and how can individuals better protect themselves in this context?
The psychological element runs really deep when it comes to scams, because that is how they get you into the position of doing something you would not normally do. And the interesting thing is that, a lot of the time, people say in retrospect, “I thought there was something weird, something off”. So intuitively – not always, but many times – people know, but they don’t follow that intuition. The question is why. It can be urgency, but it can also be some other element of pressure, like a relationship. Sometimes there is a hierarchy, for example in a company, so you have an added element of pressure from someone higher up instructing you on what to do. So people ignore that feeling that something is off. Scammers use psychological elements and emotions in different ways. If we look at dating scams, they tap into vulnerabilities and exploit them. “Pig butchering”, which is a combination of a dating scam and an investment scam, in a way came out of COVID-19. COVID-19 didn’t introduce new threats – but it offered new targets. People came online a lot more, looking for connection. And many were not resilient, or they lowered their barriers. In terms of social media, it gives scammers the context and a way in. They know things about you that they can use – so it increases your vulnerability to scams. And I do think people share too much on social media: location, interests, who their friends are. So, as a scammer, you have a lot of little hooks you can latch onto if you want to. And by latching onto those elements, you create some sense of trust, or you create interest from the other person. And people are not aware that someone can know all of this about them just from social media.
As a takeaway, you have said: “A culture where people are fooled en masse into believing something that isn’t real, reinforced by a video of something that never happened.” If our identity has been compromised, how can we reclaim it?
I don’t think we can reclaim it. Sadly, something like that has a great psychological impact, and I think it will make the person very paranoid. Many years ago, when I did my initial research on identity theft, there were many more problems in the United States because of the way credit works, and the fact that there are all these different states. Credit reporting agencies provide a person’s credit profile when someone requests it. And you could request a credit freeze so that, when someone tried to open an account in your name or get a new credit card, it should not go through, because you had the credit freeze in place. But the system wasn’t working properly. So ultimately, we are not in full control – we can only reduce the damage. But we need the institutions as well. If we look at security, we can look at prevention, protection, and incident response – but also at what part of the budget and which measures we reserve for victim remediation, which is often overlooked. And especially for individuals – there is so much attention now on organizations, and I understand that, since they are the main target and there is much more money involved, but we should also help the individuals.