Last month, the UK government announced plans to “mainline AI into the veins” of the nation and “revolutionise how AI is used in the public sector.” Yet long before this very public commitment, government departments had been laying the groundwork for adoption, experimenting with algorithmic tools behind closed doors for years.
The spectre of AI pulling the strings on decisions about our health, welfare, education and justice without our knowledge or scrutiny is a Kafkaesque nightmare. Only now are we starting to get a picture of how these tools are being used.
Since February 2024, the Department for Science, Innovation and Technology has required all central government departments to publish clear information about their use of algorithmic tools on the Algorithmic Transparency Recording Standard (ATRS) Hub. However, so far only 47 records have been made public by various government departments – over half of which were published since the start of this year.
This insouciance towards transparency is particularly alarming, given reports that AI pilots intended for the welfare system are being quietly shelved due to “frustrations and false starts.”
The recent additions to the ATRS reveal that the government is using algorithmic tools to influence critical decisions, including which benefit claimants qualify for employment and support allowance (ESA), which schoolchildren are at risk of becoming ‘NEET’ (not in education, employment, or training), and the sentences and licence conditions that should be given to offenders.
With so little information available, it is worth asking: how many government departments are secretly using algorithms to make decisions about our lives?
At the same time as it champions the mass adoption of AI in the public sector, the government is pushing through legislation that would weaken existing protections against automated decision-making (ADM).
The UK General Data Protection Regulation (GDPR) currently prohibits solely automated processes from making significant decisions. This protects us from “computer says no” scenarios, where we face adverse outcomes without any real understanding of the reasoning behind them. The Data Use and Access Bill (DUAB), currently progressing through the House of Commons, would remove this protection from a vast swathe of decision-making processes, leaving us exposed to discrimination, bias and error without any recourse to challenge the outcome.
The Bill would allow solely automated decision-making, provided it does not process ‘special category data.’ This particularly sensitive sub-category of personal data includes biometric and genetic data; data concerning a person’s health, sex life or sexual orientation; data revealing racial or ethnic origin, political, religious or philosophical beliefs; and trade union membership.
Whilst stringent protections for these special categories of data are sensible, automated decisions using non-special category data can still produce harmful and discriminatory outcomes.
For example, the Dutch childcare benefits scandal involved a self-learning algorithm that disproportionately flagged low-income and ethnic minority families as fraud risks, despite not processing special category data. The scandal pushed thousands of people into poverty after they were wrongfully investigated and forced to pay back debts they did not owe; the anxiety of the situation caused relationships to break down and even led some people to take their own lives.
Closer to home, the A-level grading scandal during the COVID pandemic produced unequal outcomes between privately educated and state-school students and provoked public outrage despite the grading system not relying on the processing of special category data.
Non-special category data can also act as a proxy for special category data or protected characteristics. For instance, the Durham Constabulary’s now-defunct Harm Assessment Risk Tool (HART) assessed the recidivism risk of offenders by processing 34 categories of data, including two types of residential postcode. The use of postcode data in predictive software risked embedding existing biases of over-policing in areas of socio-economic deprivation. Stripping away the few safeguards we currently have makes the risk of another Horizon-style catastrophe even greater.
Importantly, a decision is not considered to be automated where there is meaningful human involvement. In practice, this might look like an HR department reviewing the decisions of an AI hiring tool before deciding whom to interview, or a bank using an automated credit searching tool as one factor when deciding whether to grant a loan to an applicant. These decisions do not attract the protections that apply to solely automated decision-making.
The public sector currently circumvents some of the prohibitions on ADM by pointing to human input in the decision-making process. However, the mere existence of a human-in-the-loop does not necessarily equate to ‘meaningful’ involvement.
For instance, the Department for Work and Pensions (DWP) states that after its ESA Online Medical Matching Tool offers a matching profile, an “agent performs a case review” to ultimately decide whether a claim should be awarded.
However, the department’s risk assessment also acknowledges that the tool could reduce the meaningfulness of a human agent’s decision if they simply accept the algorithmic suggestion. Thanks to this ‘automation bias’, automated decisions with only superficial human involvement, amounting to no more than the rubber-stamping of a machine’s logic, are likely to proliferate in the public sector – without attracting any of the protections against solely automated decision-making.
The question of what is meaningful human involvement is necessarily context dependent. Amsterdam’s Court of Appeal found that Uber’s decision to “robo-fire” drivers did not involve meaningful human input, as the drivers were not allowed to appeal and the Uber employees who took the decision did not necessarily have the level of knowledge to meaningfully shape the outcome beyond the machine’s suggestion.
Evidently, one person’s definition of meaningful differs from another’s. The DUAB gives the secretary of state for Science, Innovation and Technology expansive powers to redefine what this might look like in practice. This puts us all at risk of being subjected to automated decisions that are superficially approved by humans who lack the time, training, qualifications or understanding to provide meaningful input.
The UK government’s jubilant embrace of AI may be a sign of the times, but the unchecked proliferation of automated decision-making throughout the public sector, and the weakening of related protections, is a danger to us all.