At the ETSI Security Conference 2025, we spoke with David Rogers, founder of Copper Horse, about the evolving landscape of AI security. Rogers shared insights on the Trusted AI Bill of Materials (TAIBOM), the challenges of securing machine learning and AI systems, the risks of data tampering, and the importance of human oversight in AI-driven decision-making.
David, could you start by telling us a bit about yourself and Copper Horse and its focus areas in AI, IoT, and security?
My name is David Rogers, and I set up Copper Horse in 2011. I originally came from the mobile industry and wanted to do more interesting work. That’s been our philosophy all the way through—we turn away probably more work than we take. We only accept interesting projects, and they must have some societal value. We have excellent employees and consultants who share that philosophy.
Over the years, we’ve worked on some really good things. One, related to ETSI, was the Code of Practice on IoT security, which became EN 303 645. That’s been incredibly successful and was a great project to be part of.
More recently, as for many other companies, there’s been a massive industry shift toward AI, and we’ve been looking at how to make its security better. Many of the same security principles apply, but they’re not being used, and there’s a massive rush toward deploying things that can be very harmful, in my view. We can solve that with new thinking, but also with traditional good security practices.
It’s good to see the ETSI AI security work emerging—I think that’s really important. We’re proactively doing things like security testing of LLMs, developing new techniques for attacking AI, and conducting research around data and model protection. Because of this work, we’ve had to build some models ourselves, which led to some interesting projects.
One I particularly like, and am working on personally, is a model that can read 17th-century shorthand. It’s quite a niche area, but there are tens of thousands of pages of shorthand that are still unread, written by historical figures such as US presidents, politicians, and kings. It started out as security research but has evolved into something positive, helping historical researchers.
The humanities community is often wary of AI because their interaction with it tends to be through inaccurate or sensationalized content. We’re exploring how to make AI more trustworthy in that space—for example, protecting datasets. If a dataset came from the Bodleian Library in Oxford and was digitally signed, I’d trust it far more than one I found online without provenance. We trust institutions like the Bodleian or ETSI because we know they have strong controls.
That ability to verify trustworthiness is important when evaluating AI outputs. If something was trained on trusted, verifiable data, the results are better. We’re also looking at protecting the integrity of archives, because altering data is a real threat. There have been attacks where training data is modified to produce malicious outputs, so that’s a major area of focus for us.
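For illustration, here is a minimal Python sketch of that kind of provenance check, assuming the publishing archive distributes an Ed25519 public key and a detached signature alongside each dataset file; the file names, key handling, and the use of the `cryptography` library are my assumptions, not a description of Copper Horse's actual tooling.

```python
# A minimal sketch of a dataset provenance check, assuming the publishing
# archive distributes an Ed25519 public key and a detached signature next to
# each dataset file. File names and key handling here are hypothetical.
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def verify_dataset(dataset_path: str, signature_path: str,
                   public_key_bytes: bytes) -> bool:
    """Return True only if the dataset bytes match the publisher's signature."""
    public_key = ed25519.Ed25519PublicKey.from_public_bytes(public_key_bytes)
    data = Path(dataset_path).read_bytes()
    signature = Path(signature_path).read_bytes()
    try:
        public_key.verify(signature, data)  # raises if the data was altered
        return True
    except InvalidSignature:
        return False


# Refuse to train on anything that fails the provenance check.
# if not verify_dataset("corpus.tar", "corpus.tar.sig", publisher_key_bytes):
#     raise RuntimeError("Dataset failed its provenance check")
```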
Your presentation title is quite provocative. What inspired you to address the urgent need for AI security in such a way? In your view, what are the most pressing cybersecurity risks associated with the rapid deployment of AI technologies?
I’m just telling the harsh truth: people have already died.
This is a sociotechnical problem. One issue with chatbots is anthropomorphism: humans ascribe personalities to things that don’t have them. There was a tragic case involving a chatbot on the Character.AI platform, where someone took their own life after interacting with it.
There’s a pessimistic side to this, which we can’t tolerate—cases where people are harmed or killed, like AI systems telling people to drink bleach. The guardrails aren’t good enough to catch those things.
But there’s also a less pessimistic view. Some people may have been helped—perhaps even saved—by having a chatbot to talk to when they had no one else. We can’t measure those positives; we only see the negatives.
Still, harm is intolerable. Another example I referenced was about relationship breakdowns. Some chatbots mirror user input—whatever opinion a user has is echoed back as affirmation. It doesn’t tell the truth, just amplifies what’s said, often leading to a spiral of negativity or harm.
Then there’s “agentic AI,” where companies are trying to replace humans in decision-making systems. We’re seeing it in the industrial control space and telecoms sector. That’s dangerous. You may want creativity and non-determinism for art or image generation, but in safety-critical systems, you need deterministic responses—and LLMs don’t provide that.
We’ve seen attacks on automotive systems, for instance, where models behind ADAS (Advanced Driver Assistance Systems) can be manipulated—making a car misread a speed limit sign or ignore it entirely. These systems often have no protection. That’s a solvable problem with standard security measures.
Going forward, we need to protect this new technology as well. One approach is what we call the Trusted AI Bill of Materials (TAIBOM): digitally signing everything so we can verify whether something has been modified. I also spoke in my ETSI conference talk about using “assertions”, a concept from Alan Turing in 1949, where software asserts knowledge it knows to be true. For example, if a standard has five requirements and an LLM refers to requirement seven, you can flag that as false. Techniques like these could help make AI systems safer.
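As a rough sketch of how such an assertion check could work, the surrounding software holds facts it knows to be true and tests the model's output against them; the standard name, regular expression, and function below are illustrative assumptions, not part of TAIBOM.

```python
# A minimal sketch of the assertion idea: the software asserts facts it knows
# to be true and checks the model's output against them. The standard name
# and the regular expression are illustrative assumptions.
import re

# Ground truth asserted by the system, not generated by the model.
KNOWN_REQUIREMENTS = {"EXAMPLE-STD-1": {1, 2, 3, 4, 5}}


def flag_bad_requirement_references(standard: str, llm_output: str) -> list[str]:
    """Flag any requirement number the model cites that the standard lacks."""
    valid = KNOWN_REQUIREMENTS[standard]
    cited = {int(n) for n in re.findall(r"requirement\s+(\d+)", llm_output, re.IGNORECASE)}
    return [f"Requirement {n} does not exist in {standard}" for n in sorted(cited - valid)]


print(flag_bad_requirement_references(
    "EXAMPLE-STD-1",
    "Per requirement 7, the device must rotate its keys daily."))
# ['Requirement 7 does not exist in EXAMPLE-STD-1']
```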
Ultimately, we need to act now. We can’t just allow this to continue. There could be large-scale failures—like a water treatment system being controlled by an unvetted AI model. That kind of decision could cause real human harm, just through corporate negligence.
You mentioned the Trusted AI Bill of Materials. Could you explain what TAIBOM is, why it was created, and its relevance for AI security today? How has Copper Horse contributed to the initiative?
TAIBOM started as an Innovate UK research project with partners like the British Standards Institution, TechWorks, and a few others, including us. The goal was to prove the concept worked—to show it actually defends against security attacks.
We did those attacks as part of the project, and all the material is publicly available at taibom.org, including our security results. The TAIBOM implementation is open source, so anyone can use it. We’re continuing to develop it and hope it will undergo formal standardization.
Bills of Materials are becoming increasingly important: software, hardware, and now cryptographic ones as well. But right now, the standards space is fragmented. I’d like to see harmonization across all of these. The faster that happens, the faster we’ll move toward safer systems.
It was a pure research project, but a very successful one. You really understand the problems and benefits only when you get your hands dirty with the engineering.
From your perspective, how effective have initiatives like TAIBOM been in helping organizations build trust in AI systems, and what metrics or indicators are used to assess this?
It’ll take a long time. Trust in AI is very low right now, and understandably so. We’re also dealing with nation-state propaganda, misinformation, and what people are calling “AI slop”—junk content flooding the internet, even in children’s videos. It’s psychologically harmful and adds to the sense that we’re living in a dystopian sci-fi novel.
Governments are making statements, but the market still dictates what consumers get. That won’t change immediately. I’d like to think we can move past the hype and get to a stage where the “adults in the room” are building secure, resilient systems.
A lot of what’s called “AI” isn’t even AI—it’s machine learning or just software engineering. The hype around artificial general intelligence and sentience is a distraction. People believe machines are sentient because of how they respond, but we’re still human, with all our flaws.
How can organizations secure AI systems, especially when they involve complex third-party components or open-source libraries?
You can still use an LLM trained on your own data, but how you use it must be tightly controlled. Expert supervision is key.
I heard from someone managing coders that junior developers struggle with generative coding tools, while senior coders find them helpful. Experienced developers can identify useful output; juniors can’t always tell what’s nonsense.
In my shorthand model, for instance, the AI acts as my sidekick—it suggests possibilities, but I make the final decisions. If the AI outputs a sentence with an unknown word, I can use multiple methods to fill it in—my own reasoning, or natural language processing suggestions—but the choice is still mine.
That’s the principle: don’t allow unrestrained decision-making by AI. Machines shouldn’t control other machines without human oversight.
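To make that principle concrete, here is a minimal sketch of a human-in-the-loop step for the shorthand example, assuming the model only returns ranked candidate readings for an unknown word; the data structures and prompt flow are my assumptions, not the speaker's actual workflow.

```python
# A minimal sketch of the human-in-the-loop principle: the model proposes
# ranked candidates for an unreadable word, and a person makes the final call.
# The data structures and prompt flow are assumptions, not actual tooling.
from dataclasses import dataclass


@dataclass
class Candidate:
    word: str
    score: float  # model confidence, purely advisory


def resolve_unknown_word(candidates: list[Candidate]) -> str:
    """Show the model's suggestions, but require an explicit human decision."""
    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)
    for i, c in enumerate(ranked, start=1):
        print(f"{i}. {c.word} (confidence {c.score:.2f})")
    choice = input("Pick a number, or type the correct word yourself: ")
    if choice.isdigit() and 1 <= int(choice) <= len(ranked):
        return ranked[int(choice) - 1].word
    return choice  # the human's own reading always overrides the model


# resolve_unknown_word([Candidate("parliament", 0.62), Candidate("permanent", 0.21)])
```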
We also have to address the gap between the real world and its digital representation. For example, in a fast-food restaurant, staff may press “order delivered” before the food is ready in order to meet targets; they’re gaming the system, and the digital record shows something that isn’t true. Sensors can fail or give inaccurate readings. Those mismatches are constant challenges, especially when automation is layered on top.
How do you see security practices and policies evolving in the next few years?
There will be a lot of new rules, but I think some of the major tech companies will continue to ignore them—even in the face of real harm. They have the legal power to do so, and governments face a huge challenge. Citizens are the victims of this AI “slop.”
So, in the short term, I’m pessimistic. But in the longer term, I hope we’ll reach a better place—with useful, trustworthy technology.
