UK police forces are “supercharging racism” through their use of automated “predictive policing” systems, as they are based on profiling people or groups before they have committed a crime, according to Amnesty International.
Predictive policing systems use artificial intelligence (AI) and algorithms to predict, profile or assess the likelihood of criminal behaviour, either in specific individuals or geographic locations.
In a 120-page report published on 20 February 2025 – titled Automated racism – How police data and algorithms code discrimination into policing – Amnesty said predictive policing tools are used to repeatedly target poor and racialised communities, as these groups have historically been “over-policed” and are therefore massively over-represented in police data sets.
This then creates a negative feedback loop, where these “so-called predictions” lead to further over-policing of certain groups and areas, reinforcing and exacerbating the pre-existing discrimination as increasing amounts of data are collected.
“Given that stop-and-search and intelligence data will contain bias against these communities and areas, it is highly likely that the predicted output will represent and repeat that same discrimination. Predicted outputs lead to further stop-and-search and criminal consequences, which will contribute to future predictions,” it said. “This is the feedback loop of discrimination.”
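The dynamic Amnesty describes can be illustrated with a short, purely hypothetical simulation. In the sketch below (all rates, patrol counts and area names are illustrative assumptions, not figures from Amnesty’s report), patrols are allocated in proportion to previously recorded crime; because more patrols mean more offences get recorded, the area that starts with more records keeps attracting more patrols, even though the underlying offending rate is identical in both areas.

```python
# Illustrative sketch only: every number here is an assumption, not data from the report.
# Two areas have identical underlying offending; "Area B" simply starts with more
# recorded incidents because it has historically been patrolled more heavily.
import random

random.seed(0)

TRUE_OFFENDING_RATE = 0.05   # assumed, identical in both areas
DETECTION_PER_PATROL = 0.5   # assumed chance a fully patrolled area records an offence
TOTAL_PATROLS = 100
POPULATION = 10_000

recorded = {"Area A": 50, "Area B": 100}  # assumed historical records (the biased prior)

for year in range(5):
    total = sum(recorded.values())
    for area in recorded:
        # "Predictive" allocation: patrols follow past recorded crime, not true offending.
        patrols = round(TOTAL_PATROLS * recorded[area] / total)
        offences = sum(random.random() < TRUE_OFFENDING_RATE for _ in range(POPULATION))
        # More patrols -> more of the same offences are recorded -> more future patrols.
        detected = sum(random.random() < DETECTION_PER_PATROL * patrols / TOTAL_PATROLS
                       for _ in range(offences))
        recorded[area] += detected
    print(f"Year {year + 1}: {recorded}")
```

Run over a few iterations, the gap in recorded crime between the two areas widens every year despite no difference in actual offending, which is the self-reinforcing loop both Amnesty and, as noted later, the Lords Home Affairs and Justice Committee describe.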
Amnesty found that across the UK, at least 33 police forces have deployed predictive policing tools, with 32 of these using geographic crime prediction systems compared to 11 that are using people-focused crime prediction tools.
It said these tools are “in flagrant breach” of the UK’s national and international human rights obligations because they are being used to racially profile people, undermine the presumption of innocence by targeting people before they’ve even been involved in a crime, and fuel indiscriminate mass surveillance of entire areas and communities.
The human rights group added that the increasing use of these tools also creates a chilling effect, as people tend to avoid areas or people they know are being targeted by predictive policing, further undermining their right to freedom of association.
Examples of predictive policing tools cited in the report include the Metropolitan Police’s “gangs violence matrix”, which was used to assign “risk scores” to individuals before it was gutted by the force over its racist impacts; and Greater Manchester Police’s XCalibre database, which has similarly been used to profile people based on the “perception” that they are involved in gang activity without any evidence of actual offending themselves.
Amnesty also highlighted Essex Police’s Knife Crime and Violence Model, which uses data on “associates” to criminalise people by association with others and uses mental health problems or drug use as markers for criminality; and West Midlands Police’s “hotspot” policing tools, which the force itself has admitted are used for error-prone predictive crime mapping that is wrong 80% of the time.
“The use of predictive policing tools violates human rights. The evidence that this technology keeps us safe just isn’t there, the evidence that it violates our fundamental rights is clear as day. We are all much more than computer-generated risk scores,” said Sacha Deshmukh, chief executive at Amnesty International UK, adding these systems are deciding who is a criminal based “purely” on the colour of their skin or their socio-economic background.
“These tools to ‘predict crime’ harm us all by treating entire communities as potential criminals, making society more racist and unfair. The UK government must prohibit the use of these technologies across England and Wales as should the devolved governments in Scotland and Northern Ireland.”
He added that the people and communities subject to this automated profiling have a right to know about how the tools are being used, and must have meaningful routes of redress to challenge any policing decisions made using them.
On top of a prohibition on such systems, Amnesty is also calling for greater transparency around the data-driven systems police already have in use, including a publicly accessible register detailing the tools, as well as accountability obligations, including a right and a clear forum to challenge police profiling and automated decision-making.
In an interview with Amnesty, Daragh Murray – a senior lecturer at Queen Mary University of London’s School of Law who co-wrote the first independent report on the Met Police’s use of live facial-recognition (LFR) technology in 2019 – said that because these systems are based on correlation rather than causation, they are particularly harmful and inaccurate when used to target individuals.
“Essentially you’re stereotyping people, and you’re mainstreaming stereotyping, you’re giving a scientific objectivity to stereotyping,” he said.
NPCC responds to Amnesty
Computer Weekly contacted the Home Office about the Amnesty report but received no on-the-record response. Computer Weekly also contacted the National Police Chiefs’ Council (NPCC), which leads on the use of AI and algorithms by UK police.
“Policing uses a wide range of data to help inform its response to tackling and preventing crime, maximising the use of finite resources. As the public would expect, this can include concentrating resources in areas with the most reported crime,” said an NPCC spokesperson.
“Hotspot policing and visible targeted patrols are the bedrock of community policing, and effective deterrents in detecting and preventing anti-social behaviour and serious violent crime, as well as improving feelings of safety.”
They added that the NPCC is working to improve the quality and consistency of its data to better inform its response, ensuring that all information and new technology is held and developed lawfully and ethically, in line with the Data Ethics Authorised Professional Practice (APP).
“It is our responsibility as leaders to ensure that we balance tackling crime with building trust and confidence in our communities whilst recognising the detrimental impact that tools such as stop and search can have, particularly on black people,” they said.
“The Police Race Action Plan is the most significant commitment ever by policing in England and Wales to tackle racial bias in its policies and practices, including an ‘explain or reform’ approach to any disproportionality in police powers.
“The national plan is working with local forces and driving improvements in a broad range of police powers, from stop and search and the use of Taser through to officer deployments and road traffic stops. The plan also contains a specific action around data ethics, which has directly informed the consultation and equality impact assessment for the new APP.”
Ongoing concerns
Problems with the use of predictive policing tools have been highlighted to UK and European authorities for a number of years.
In July 2024, for example, a coalition of civil society groups called on the then-incoming Labour government to place an outright ban on both predictive policing and biometric surveillance in the UK, on the basis they are disproportionately used to target racialised, working class and migrant communities.
In the European Union (EU), the bloc’s AI Act has banned the use of predictive policing systems that can be used to target individuals for profiling or risk assessments, but the ban is only partial as it does not extend to place-based predictive policing tools.
According to a 161-page report published in April 2022 by two MEPs jointly in charge of overseeing and amending the AI Act, “predictive policing violates human dignity and the presumption of innocence, and it holds a particular risk of discrimination. It is therefore inserted among the prohibited practices.”
According to Griff Ferris, then-legal and policy officer at non-governmental organisation Fair Trials, “time and time again, we’ve seen how the use of these systems exacerbates and reinforces discriminatory police and criminal justice action, feeds systemic inequality in society, and ultimately destroys people’s lives. However, the ban must also extend to include predictive policing systems that target areas or locations, that have the same effect.”
A month earlier, in March 2022, Fair Trials, European Digital Rights (EDRi) and 43 other civil society organisations collectively called on European lawmakers to ban AI-powered predictive policing systems, arguing that they disproportionately target the most marginalised people in society, infringe fundamental rights and reinforce structural discrimination.
That same month, following its formal inquiry into the use of algorithmic tools by UK police – including facial recognition and various crime “prediction” tools – the Lords Home Affairs and Justice Committee (HAJC) described the situation as “a new Wild West” characterised by a lack of strategy, accountability and transparency from the top down. It said an overhaul of how police deploy AI and algorithmic technologies is required to prevent further abuse.
In the case of “predictive policing” technologies, the HAJC noted their tendency to produce a “vicious circle” and “entrench pre-existing patterns of discrimination” because they direct police patrols to low-income, already over-policed areas based on historic arrest data.
“Due to increased police presence, it is likely that a higher proportion of the crimes committed in those areas will be detected than in those areas which are not over-policed. The data will reflect this increased detection rate as an increased crime rate, which will be fed into the tool and embed itself into the next set of predictions,” it said.
However, in July 2022, the UK government largely rejected the findings and recommendations of the Lords inquiry, claiming there is already “a comprehensive network of checks and balances”.
The government said at the time that, while MPs set the legal framework providing police with their powers and duties, it is for the police themselves to determine how best to use new technologies such as AI and predictive modelling to protect the public.