The government’s AI push needs clear accountability

However, there is an elephant in the room. Without clear accountability frameworks, this 50-point roadmap risks becoming a cautionary tale rather than a success story. When an AI system hallucinates, exhibits bias or suffers a security breach, who takes responsibility? Right now, the answer is often ‘it depends’, and that uncertainty is the biggest threat to innovation.

Indeed, having worked across government, education and commercial sectors for over two decades, I’ve seen how accountability gaps can derail even the most well-intentioned digital programmes. The government’s AI push will be no different unless we get serious about establishing clear lines of responsibility from procurement through to deployment.

Why procurement transparency isn’t optional 

Too often, procurement teams commit to AI tools without understanding what data the models are trained on, how decisions are made or whether AI is even the right solution for them.

IT providers’ opacity plays a significant role here. Many suppliers treat training data and algorithms as proprietary secrets, offering only high-level descriptions instead of meaningful transparency. Meanwhile, procurement staff often aren’t trained to evaluate AI-specific risks, so critical questions about bias or explainability simply don’t get asked. 

Political pressure to deliver an “AI solution” quickly can override proper due diligence. AI has become such a marker of innovation that it can sometimes railroad basic common sense – instead, we need to take a step back and ask whether this is actually the right tool for the job. 

When decisions involve multiple departments and no one person is fully accountable for validating the AI’s technical foundations, gaps become inevitable. Buyers need to get hands-on with tools before implementing them, and use benchmarks that can measure bias. If suppliers are hesitant about transparency, buyers should walk away.
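
To give a flavour of what benchmarking for bias can mean in practice, the sketch below computes a simple demographic parity gap over a sample of a tool’s decisions. The column names and the 0.05 tolerance are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: checking a candidate AI tool's outputs for group-level bias
# before procurement sign-off. Column names and the 0.05 tolerance are
# illustrative assumptions only.
import pandas as pd

def demographic_parity_gap(decisions: pd.DataFrame,
                           group_col: str = "protected_group",
                           outcome_col: str = "approved") -> float:
    """Gap between the highest and lowest approval rates across groups."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    sample = pd.DataFrame({
        "protected_group": ["A", "A", "B", "B", "B", "A"],
        "approved":        [1,   0,   1,   1,   1,   1],
    })
    gap = demographic_parity_gap(sample)
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.05:  # buyer-defined tolerance, agreed before contract award
        print("Flag for review: outcomes differ materially across groups.")
```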

Designing accountability from day one 

So, what does meaningful supplier accountability look like in practice? It starts with contracts that include line-by-line responsibility for every decision an AI system makes. 

Suppliers should provide fully transparent decision flows and explain their reasoning for specific outputs, what data they used and why. Buyers should then be able to speak with reference clients who have already implemented similar AI-based systems. Most importantly, suppliers need to demonstrate how their systems can be traced, audited and explained when things go wrong. 
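
As an illustration of the kind of decision trace a buyer might require, the sketch below records the prompt, model version and retrieved sources behind each output so it can be queried after an incident. The field names are assumptions for illustration, not any supplier’s actual schema.

```python
# Minimal sketch of an auditable decision record a buyer could require from a
# supplier. Field names are illustrative assumptions, not a mandated schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionTrace:
    request_id: str
    model_version: str
    prompt: str
    retrieved_sources: list[str]      # documents or records the output relied on
    output: str
    reviewed_by_human: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(trace: DecisionTrace, audit_log_path: str = "audit.log") -> None:
    """Append the trace as one JSON line so it can be audited after the fact."""
    with open(audit_log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(trace)) + "\n")
```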

I favour a GDPR-style approach to allocating responsibility, one that is linked to control. If suppliers insist on selling black boxes with minimal transparency, they should accept the majority of risk. On the flip side, the more transparency, configurability and control they give buyers, the more they can share that risk.

For instance, if a supplier releases a new model trained on a dataset that severely shifts bias, that is on them, but if a buyer purchases a RAG-based tool and accidentally introduces sensitive data, the responsibility lies with the buyer. Contracts need to clearly identify each possible failure scenario, assign accountability and spell out consequences. 
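
To make the RAG example concrete, one buyer-side safeguard is to screen documents before they ever reach the retrieval index. The patterns and function names below are hypothetical and far from exhaustive; a real deployment would rely on a proper PII and classification scanner.

```python
# Minimal sketch of a buyer-side guard that screens documents before they are
# added to a RAG retrieval index. Patterns are illustrative only.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),       # e.g. sort-code-like numbers
    re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),      # e.g. National Insurance-like IDs
    re.compile(r"OFFICIAL-SENSITIVE", re.I),    # protective marking in the text
]

def safe_to_index(document: str) -> bool:
    """Return True only if no sensitive pattern is found in the document."""
    return not any(p.search(document) for p in SENSITIVE_PATTERNS)

def ingest(documents: list[str], index: list[str]) -> None:
    for doc in documents:
        if safe_to_index(doc):
            index.append(doc)   # hand off to the real embedding/indexing step
        else:
            print("Blocked from index: document contains sensitive markers.")
```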

To avoid the fate of Amazon drones and driverless cars – i.e. technologies that exist but remain stuck in legal limbo due to unclear responsibility chains – public sector AI projects should be designed with human oversight from the start. There should always be someone to spot-check outputs and decisions, with high initial thresholds that gradually relax as systems prove their accuracy consistently. 
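
One way to implement high initial thresholds that gradually relax is a review gate that spot-checks fewer outputs as the system builds a verified accuracy record. The review rates and accuracy bands below are assumptions for illustration, not policy.

```python
# Minimal sketch of a human-in-the-loop review gate. The review rates and the
# accuracy bands that relax them are illustrative assumptions.
import random

def review_rate(verified_accuracy: float, decisions_checked: int) -> float:
    """Start by reviewing everything; relax only after sustained, verified accuracy."""
    if decisions_checked < 500 or verified_accuracy < 0.95:
        return 1.0    # review every output while confidence is being built
    if verified_accuracy < 0.99:
        return 0.25   # spot-check a quarter of outputs
    return 0.05       # mature system: 1-in-20 spot checks, never zero

def needs_human_review(verified_accuracy: float, decisions_checked: int) -> bool:
    return random.random() < review_rate(verified_accuracy, decisions_checked)
```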

The key is avoiding situations where too many parties create grey areas of responsibility. Legal professionals have spent years blocking progress on autonomous vehicles and delivery drones precisely because the liability questions remain unanswered. We can’t let AI follow the same path. 

The insurance reality check 

And what about the insurance sector’s place in all of this? The blunt truth, at least at the moment, is that insurers are nowhere near ready for AI-specific risks, and that’s a massive problem for public sector adoption. 

Insurers price risk based on historical loss data, but AI is evolving so rapidly that there’s virtually no precedent for claims involving model drift, bias-induced harm or systemic hallucination errors. In AI deployments involving multiple parties, underwriters struggle to assess exposure without crystal-clear contractual risk allocation. 

Technical opacity compounds the problem. Underwriters rarely get sufficient insight into how models work or what data they are trained on, which makes it almost impossible to quantify risks around bias or prompt injection attacks. 

Regulatory uncertainty adds another layer of complexity. The EU AI Act, the UK’s pro-innovation approach and sector-specific regulations are all in flux, and this is making it difficult for insurers to set consistent terms and for buyers to know what coverage they need. 

The proliferation of AI frameworks and policies is encouraging, but without enforcement mechanisms they risk becoming nothing more than expensive paperwork. We need to embed accountability into all government standards to make them an enabler rather than a blocker. The government’s AI Opportunities Action Plan is technically achievable, but only if we build clear accountability measures in from the start rather than treating them as an afterthought.

Alastair Williamson-Pound is Chief Technology Officer at Mercator Digital, with over 20 years’ experience across government, education and commercial sectors. He has led major programmes for HMRC, GDS and Central Government.

