At the ETSI Security Conference 2024, we had the opportunity to speak with Dr. Galina Pildush of Palo Alto Networks on the critical topic of child safety in the digital age. Dr. Pildush shared her insights on the evolving challenges children face online and on the importance of embedding cybersecurity education from a young age.
Can you share a bit about your background and how you got into child safety?
Dr. Pildush: I actually came into this field by accident. My background is in computer science, and I’ve always been interested in societal issues. I finished my PhD at Stanford in 2009; it focused on technology adoption, particularly on how engineers converge technologies. I didn’t specifically study psychology, but my mother is a child psychologist, so perhaps there’s something hereditary there. I’ve been teaching for years—first as an adjunct at Saint Europe University, then working with Cisco Systems and Juniper Networks. Eventually, I moved into a security architecture role at Palo Alto Networks. I didn’t expect to end up here, but my work in security and its societal impact led me to start thinking more about child safety online.
What triggered your shift toward focusing on child safety?
Dr. Pildush: It was really a moment of realization two years ago, when I came across a tragic case that went viral. I didn’t go searching for it, but it was everywhere, and it really struck me. Then last year, at MWC in Barcelona, I was approached by people from a magazine who were interested in child protection. It felt like an invisible hand was guiding me in this direction. The topic is sensitive, so it’s important not to scare people, but at the same time we need to raise awareness. The question is: how do we make children safer online?
What’s your perspective on the solutions for child safety?
Dr. Pildush: The solution isn’t just about creating laws or standards. Laws alone are often ineffective, and there will always be ways around them. Look at the dark web: why does it exist? It exists because of privacy concerns. With children, banning something often has the opposite effect; it becomes something they want to explore. What needs to happen is for cyber self-protection to be embedded into the curriculum, just like English or geography. We need to teach kids “cyber care” from an early age, building that awareness and safety into the education system itself. If cyber care is embedded in the curriculum, it will have long-lasting effects. It’s easier to implement in private schools, but public schools need it too. The longer we procrastinate, the worse the problem will get. This kind of education needs to be integrated into daily life, and it needs to happen now, not later.
What dangers do kids face online nowadays? Can any device be made completely safe, for instance with parental controls, or at least safe enough? What do you think?
Dr. Pildush: No. We’ve known about parental controls for years. One of the major operators in Canada offers a service called “parental control,” and I’ve worked on this for years. It involves a Secure Gateway Interface (SGI) where specific Access Control Systems (ACS) are registered to block access to sensitive websites such as adult content. But that’s not enough. The provider must stay ahead of the game, because URLs change continuously. And it’s very simple: you don’t even need to visit a risky URL deliberately to be redirected to a crazy place. You could search for something perfectly normal and then be redirected somewhere risky. Then there are proxies and downloads, which affect what activity the provider can track. So parental controls alone are not enough. Also, if your child is using a device that’s protected but their friends’ devices aren’t, they’ll still find a way to bypass everything. The solution has to be broader than parental controls. The idea of Zero Trust is really the way forward. Regulation, standardization, and services built as a sealed system must protect all users: parents, children, teachers, software developers, and manufacturers. But awareness is key. A lot of people don’t understand the risks and ignore them, thinking it’s someone else’s problem. Again, education is crucial. We need to embed cyber awareness into the curriculum, teaching kids from a young age.
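The weakness Dr. Pildush describes can be seen in miniature with a static blocklist: the filter only knows about domains that were risky when the list was compiled, so a moved site or a redirect from an innocuous page slips straight past it. A minimal sketch in Python (the domains and helper name are hypothetical, purely for illustration):

```python
from urllib.parse import urlparse

# Hypothetical static blocklist. Its core weakness: it only contains
# domains that were known to be risky at the time it was compiled.
BLOCKLIST = {"unsafe-example.test"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host appears on the static blocklist."""
    return urlparse(url).hostname in BLOCKLIST

# A known-bad URL is caught...
print(is_blocked("https://unsafe-example.test/page"))   # True

# ...but the same content served from a freshly registered mirror,
# or reached via a redirect from a harmless search result, sails
# through: the filter has simply never heard of the new domain.
print(is_blocked("https://unsafe-mirror-2.test/page"))  # False
```

This is why providers must continuously update their filtering, and why the interview argues that filtering alone cannot be the whole answer.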
Do you think kids should start learning cybersecurity at an early age?
Dr. Pildush: Absolutely. In Australia, there’s a program for kids aged 5 to 15, which is great, but now I believe we should start even younger—maybe 3 years old. Children learn like sponges, and they’re already very comfortable with technology. They’re naturally graphical and intuitive with devices, so teaching them basic cyber hygiene should be just as normal as teaching them to wash their hands.
Can you tell us more about the Cyber Fit Nation program and how it extends to Cyber Safe Kids?
Dr. Pildush: Yes! The Cyber Fit Nation program was developed in collaboration with the Australian government, and it includes free resources on our website. There are lessons and activities for kids aged 5-15, broken down into different categories based on age. These are free to download and use. It covers basic cybersecurity principles like passwords, phishing, and safe online behaviour. These lessons are designed with the help of government officials, though I’m not sure if child psychologists were involved in the curriculum development.
You mentioned the concept of zero trust. How does it relate to child safety?
Dr. Pildush: Zero trust isn’t just a concept for IT architecture; it can apply to many aspects of life, including online safety for children. In the context of child protection, zero trust means building layers of defense, both psychological and technical, around children. Each layer needs to be robust, and we need to treat their online world with the same seriousness we apply to cybersecurity infrastructure.
How would this concept be implemented within the Metaverse?
Dr. Pildush: In the Metaverse, Zero Trust can be applied through a layered approach, with each layer secured individually. Researchers have split it into five layers: users (identifying who they are), applications (what they are doing), infrastructure (how personal data is protected), platforms (securing the platforms themselves and their data), and the ecosystem (monitoring data flows and encryption). The Metaverse is used by kids, adults, and people with disabilities, so identifying the “who” at the user layer is crucial. The same methodology applies as in any security architecture: first define what you’re doing, then decide on the architecture, then work out the flows and security policies, and finally secure each of the layers, from users through to the ecosystem, with Zero Trust in mind.
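The layered decomposition above can be sketched as a policy check in which no layer's approval implies another's; a request is admitted only when every layer has independently verified it. A hedged illustration in Python (the layer names come from the interview; the `Request`/`admit` structure is invented for the sketch):

```python
from dataclasses import dataclass, field

# The five layers named in the interview. Under Zero Trust, passing
# one layer's check never grants implicit trust at any other layer.
LAYERS = ["user", "application", "infrastructure", "platform", "ecosystem"]

@dataclass
class Request:
    # Which layers have independently verified this request (illustrative).
    verified: set = field(default_factory=set)

def admit(request: Request) -> bool:
    """Admit a request only if EVERY layer has verified it."""
    return all(layer in request.verified for layer in LAYERS)

# Verified at four layers but not the ecosystem layer: denied.
partial = Request(verified={"user", "application", "infrastructure", "platform"})
print(admit(partial))  # False

# Verified at all five layers: admitted.
print(admit(Request(verified=set(LAYERS))))  # True
```

The design point mirrors the interview: a failure at any single layer, even the last one, denies access, rather than letting earlier approvals carry the request through.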
If we refer to the Metaverse as a playground for cyberattacks, how do LLMs and generative AI play a role in these risks?
Dr. Pildush: AI, particularly generative AI, can be incredibly dangerous in terms of child safety. While AI itself is not new, generative AI takes it a step further by producing convincing content that can manipulate or deceive. For instance, AI can create emails or text messages from seemingly familiar sources, which makes it harder for children or even adults to spot potential threats. It also enables the creation of deepfake technology, which can impersonate voices or images, further increasing the risks. Kids, especially teenagers, may feel the pressure to appear mature or independent, and they might not always feel comfortable turning to their parents for help. This creates a vulnerable situation where AI-driven interactions could lead to exploitation. AI’s ability to mimic human interaction also raises concerns about children interacting with machines that they perceive as human, which can be a significant threat.
Can you explain an experience where you realized how easily people, even adults, can be tricked by machines or AI?
Dr. Pildush: One experience that stood out to me happened years ago at GSMA’s MWC in Barcelona. There was a demonstration of a robotic dog on an airport conveyor belt, and it was so realistic that I began interacting with it as if it were a real dog. I was petting it, talking to it, and responding to its movements, completely caught up in the experience. Even though I knew it was a robot, the sense of care and affection it elicited from me felt real. That experience highlighted how easily the human mind can be tricked into believing that a machine, even a non-human object, conveys real emotions. It’s a powerful example of how generative AI, especially in robotics, can deceive people—especially kids, who may not have the same level of scepticism about machines.
With AI and generative AI evolving, what does this mean for child safety in the digital world?
Dr. Pildush: As AI and generative AI evolve, the potential risks for child safety in the digital world increase significantly. These technologies enable bad actors to create increasingly sophisticated attacks—whether it’s through phishing emails that seem to come from a trusted source or using deepfakes to impersonate someone familiar. Children, especially teens, are more likely to fall victim to such tactics because they are often less experienced and may feel pressured to act independently. In the digital age, where communication is so heavily mediated by technology, it’s essential to create awareness and incorporate proactive measures to protect kids from the growing risks that AI and generative AI present.
How can governments help raise awareness about online safety for kids?
Dr. Pildush: Governments play a huge role in raising awareness. Once cybersecurity education becomes part of the curriculum, parents will subconsciously get educated too. As a parent, you need to get involved in your child’s schooling. Even if you don’t understand the subject matter, being present and learning alongside your child is important. Once cybersecurity education is mandatory, it can be enforced through standards and regulations, creating a protective environment for kids.
What are the key takeaways for those working on child online safety?
Dr. Pildush: Collaboration is key. We need to keep coming together in conferences and discussions to share ideas and ask what’s next. Tools like LLMs and generative AI are becoming more prevalent, and with increased automation, we’ll need to keep up. Through these collaborations, we can push for the right education, regulation, and awareness. It’s a continuous process that requires everyone to be involved—parents, governments, educators, and tech companies.