Sarah Armstrong-Smith brings rare front-line authority to the cyber resilience conversation, with a career shaped by some of the most defining digital threats of the modern era. From the Millennium Bug through to board-level cyber strategy at Microsoft and the London Stock Exchange Group, her perspective is grounded in real crisis leadership, not theory.
That depth is exactly why she is such a sought-after cybersecurity speaker. As former Chief Security Advisor for Microsoft EMEA, a UK Government Cyber Advisory Board member and author of Understanding the Cyber Attacker Mindset, Armstrong-Smith is known for translating complex threat landscapes into leadership decisions that organisations can actually act on.
In this exclusive interview with the Champions Speakers Agency for the IT Security Guru, Sarah Armstrong-Smith explores how image-based AI is reshaping the threat landscape, where organisations continue to underestimate cyber risk, and what leaders, companies and individuals must now do to rebuild trust. Her insight blends strategic clarity with lived experience at the sharp end of crisis management, making this a timely conversation for any organisation navigating AI-era risk.
Image-based AI tools introduce new cyber risks around impersonation, harassment and deepfake abuse. How does this shift the threat landscape for individuals who may not traditionally see themselves as cyber targets?
Sarah Armstrong-Smith: Image-based AI tools have lowered the barrier for impersonation, harassment and deepfake abuse. Individuals who never considered themselves potential targets now find their likeness manipulated or weaponised with minimal effort. The threat landscape has democratised in a way that feels deeply personal.
Once a system is exposed to the public, malicious actors will test its limits immediately, attempting to break existing safety rails and to find where none exist at all. The difference today is that image-based tools can cause reputational, emotional and even financial harm at scale, often before a victim is even aware.
For individuals, the key message is that cyber risk is no longer confined to passwords and phishing emails. Your face, voice and online presence have become part of your attack surface, whether you intended it or not.
Many users assume privacy risks only apply when data is explicitly uploaded. What are the less obvious ways people may be exposing themselves through interaction with AI-driven platforms?
Sarah Armstrong-Smith: Many users assume risk only arises when they upload personal data. AI platforms infer far more than people realise, from behavioural patterns and emotional cues to location data, relationship dynamics and even identity traits. Every interaction becomes a data point that can be cross-referenced across other platforms.
Images are particularly revealing, and not just of the person in the frame. Background objects, reflections, clothing logos, metadata and even shadows can disclose information unintentionally. AI systems can extract meaning from details humans overlook.
Today’s platforms operate at a far more sophisticated level, meaning users may be revealing more about themselves through casual engagement than they ever intended.
For organisations experimenting with generative AI, where do you most often see cyber security and privacy risk underestimated, particularly when tools are deployed quickly or informally?
Sarah Armstrong-Smith: The biggest gap is the assumption that generative AI tools are ‘just productivity enhancers’ rather than huge data processing systems with security implications. Organisations often deploy them informally through pilots, shadow IT or experimentation, without considering data governance, model behaviour or regulatory obligations.
There is a tendency to underestimate how models retain, infer or reproduce sensitive information. Without strict controls, confidential data can leak into prompts, outputs or training pipelines.
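Where formal controls are absent, even a lightweight check on outbound prompts can catch the most obvious leakage before text reaches an external model. The sketch below is a minimal, hypothetical illustration of that idea in Python; the patterns and the redact_prompt helper are placeholders, not a substitute for a proper data loss prevention or classification service.

```python
import re

# Hypothetical, minimal patterns; a real deployment would rely on a proper
# DLP/classification service and organisation-specific rules.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely sensitive values with placeholders before the text
    leaves the organisation's boundary, and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

safe_prompt, findings = redact_prompt(
    "Summarise this contract for jane.doe@example.com (API key sk-test1234567890abcdef)"
)
print(findings)     # ['email', 'api_key']
print(safe_prompt)  # placeholders in place of the raw values
```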
AI models are already exposing large gaps and vulnerabilities in existing processes, but organisations are either not fully aware of them or not sufficiently incentivised to close them, for fear of being left behind or losing competitiveness in an open market.
Experimentation without guardrails can create unintended consequences. Organisations adopting generative AI today must treat it as a security capability, not a novelty.
Looking ahead, what practical lessons should technology leaders, organisations and individuals take from incidents like Grok if they want to rebuild trust and reduce AI-related harm?
Sarah Armstrong-Smith: The first lesson is integrity. AI systems, no matter how advanced, are not fully understood and can behave unpredictably, and the public expects companies to acknowledge this. Upholding accountability and transparency, without qualification, is essential for rebuilding trust.
The second lesson is that safety must be designed in, not bolted on. Reactive fixes when the pressure starts to build are not enough; responsible and reliable AI requires anticipating misuse, adversarial behaviour and societal impact before deployment. Grok’s experience reinforces this point at a much larger scale.
Finally, leaders must recognise that trust is cumulative. Every incident, and how companies choose to respond to it, shapes public perception of the entire industry. Companies that prioritise responsible innovation and doing the right thing from the outset will be the ones that maintain credibility.
Guidance for companies embedding AI
Treat deployment as a safety and security imperative, not a product decision. Most incidents and failures happen after release, not during development. Companies should conduct adversarial red teaming, stress test models in realistic environments, apply strict content filters and monitoring, and establish kill switches and rollback plans.
Minimise data exposure by design. Adopt data minimisation, clear boundaries on what is stored or used for training, tiered access controls and privacy preserving architectures.
Responsible and reliable AI isn’t a one-off governance exercise; it requires continuous oversight as models grow in functionality and capability. That means regular audits, monitoring for drift, incident reporting mechanisms and clear accountability at board level to proactively and publicly address failures.
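To make the kill-switch and content-filter points in this guidance concrete, here is a minimal, hypothetical gateway pattern in Python. ModelGateway, generate_fn and the blocked-term list are illustrative placeholders under my own assumptions, not any particular product's API.

```python
import threading

class ModelGateway:
    """Hypothetical wrapper between an application and a generative model,
    illustrating two controls: an output content filter and an operational
    kill switch. generate_fn stands in for whatever model call is used."""

    BLOCKED_TERMS = {"example-blocked-term"}  # placeholder filter rules

    def __init__(self, generate_fn):
        self._generate = generate_fn
        self._enabled = threading.Event()
        self._enabled.set()  # model is available until the switch is thrown

    def kill_switch(self) -> None:
        """Disable model access immediately, e.g. during an incident."""
        self._enabled.clear()

    def generate(self, prompt: str) -> str:
        if not self._enabled.is_set():
            raise RuntimeError("Model access disabled by kill switch")
        output = self._generate(prompt)
        if any(term in output.lower() for term in self.BLOCKED_TERMS):
            return "[response withheld by content filter]"
        return output

# Usage with a stand-in model function
gateway = ModelGateway(lambda p: f"echo: {p}")
print(gateway.generate("hello"))   # passes the filter
gateway.kill_switch()              # subsequent calls raise RuntimeError
```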
Guidance for individuals worried about image misuse or privacy abuse
The simplest point of reference is to assume that anything uploaded can be copied, altered or used to make inferences. Even if a platform claims not to train on your data, images can still be screenshotted, scraped, used for impersonation or used to infer location, habits or relationships.
In today’s digital environment it can sound counterintuitive to tell individuals to limit public posting, remove metadata, avoid identifiable backgrounds and use platform privacy settings aggressively. Small changes can dramatically reduce exposure, but they put the onus on individuals and limit their ability to enjoy and use social and AI platforms. Importantly, know your rights when using different platforms. For example, under many data protection laws you can request deletion, challenge automated processing and object to your data being used for training.
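On the metadata point, stripping EXIF data before sharing is one of the simpler steps an individual can take. The following is a minimal sketch in Python, assuming the Pillow library is installed; the file names are placeholders for illustration.

```python
from PIL import Image  # requires the Pillow library

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image without its EXIF block (GPS coordinates, device
    details, timestamps) before it is shared publicly."""
    with Image.open(src_path) as img:
        # Copying only the pixel data into a fresh image drops EXIF and
        # other ancillary metadata carried by the original file.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

strip_metadata("holiday_photo.jpg", "holiday_photo_clean.jpg")
```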
Because so much of this burden falls on individuals, it is important for service providers to help bridge the gap by implementing and enforcing safety and security protocols. This can also include protective technologies such as watermarking, adversarial filters, reverse image monitoring and identity protection services.

