MPs propose ban on predictive policing
Predictive policing technologies infringe human rights “at their heart” and should be prohibited in the UK, argues Green MP Siân Berry, after tabling an amendment to the government’s forthcoming Crime and Policing Bill.
Speaking in the House of Commons during the report stage of the bill, Berry highlighted the dangers of using predictive policing technologies to assess the likelihood of individuals or groups committing criminal offences in the future.
“Such technologies, however cleverly sold, will always need to be built on existing, flawed police data, or data from other flawed and biased public and private sources,” she said. “That means that communities that have historically been over-policed will be more likely to be identified as being ‘at risk’ of future criminal behaviour.”
Berry’s amendment (NC30 in the amendment paper) – which has been sponsored by eight other MPs, including Zarah Sultana, Ellie Chowns, Richard Burgon and Clive Lewis – would specifically prohibit the use of automated decision-making (ADM), profiling and artificial intelligence (AI) for the purpose of making risk assessments about the likelihood of groups or people committing criminal offences.
It would also prohibit the use of certain information by UK police to “predict” people’s behaviour: “Police forces in England and Wales shall be prohibited from… Predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons.”
Speaking in the Commons, Berry further argued: “As I have always said in the context of facial recognition, questions of accuracy and bias are not the only reason to be against these technologies. At their heart, they infringe human rights, including the right to privacy and the right to be presumed innocent.”
While authorities deploying predictive policing tools say they can be used to more efficiently direct resources, critics have long argued that, in practice, these systems are used to repeatedly target poor and racialised communities, as these groups have historically been “over-policed” and are therefore over-represented in police datasets.
This creates a self-reinforcing feedback loop, in which these so-called “predictions” lead to further over-policing of certain groups and areas, reinforcing and exacerbating pre-existing discrimination as ever more data is collected.
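To make that mechanism concrete, the following is a purely illustrative toy simulation (not any real force’s system, and with made-up numbers): two areas have identical underlying offence rates, but one starts with more recorded arrests because it has historically been over-policed, and a naive “hotspot” policy keeps sending patrols wherever past data shows the most arrests.

```python
import random

# Toy illustration of the feedback loop described above (hypothetical numbers).
# Both areas have the same underlying offence rate, but area 0 starts with more
# recorded arrests because it has historically been over-policed.
random.seed(42)

true_offence_rate = [0.1, 0.1]   # identical behaviour in both areas
recorded_arrests = [60, 40]      # historical data is already skewed
patrols_per_round = 100

for round_no in range(1, 11):
    # The "prediction": patrol the area with the most recorded arrests.
    hotspot = 0 if recorded_arrests[0] >= recorded_arrests[1] else 1

    # Patrols only observe offences where they are deployed, so only the
    # hotspot area generates new records; the other area's data stands still.
    new_arrests = sum(
        random.random() < true_offence_rate[hotspot]
        for _ in range(patrols_per_round)
    )
    recorded_arrests[hotspot] += new_arrests

    share = recorded_arrests[0] / sum(recorded_arrests)
    print(f"round {round_no}: area 0 holds {share:.0%} of recorded arrests")
```

Because patrols follow the recorded data rather than the underlying behaviour, the initial skew is never corrected; in this sketch, area 0’s share of recorded arrests only grows with each round of data collection, even though both areas behave identically.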
Tracing the historical proliferation of predictive policing systems in their 2018 book Police: A field guide, authors David Correia and Tyler Wall argue that such tools provide “seemingly objective data” for law enforcement authorities to continue engaging in discriminatory policing practices, “but in a manner that appears free from racial profiling”.
They added that it therefore “shouldn’t be a surprise that predictive policing locates the violence of the future in the poor of the present”.
As a result of such concerns, there have been numerous calls in recent months from civil society for the UK government to ban the use of predictive policing tools.
In February 2025, for example, Amnesty International published a 120-page report on how predictive policing systems are “supercharging racism” in the UK by using historically biased data to further target poor and racialised communities.
It found that at least 33 police forces across the UK have deployed predictive policing tools, with 32 of them using geographic crime prediction systems and 11 using people-focused crime prediction tools.
Amnesty added that these tools are “in flagrant breach” of the UK’s national and international human rights obligations because they are being used to racially profile people, undermine the presumption of innocence by targeting people before they have even been involved in a crime, and fuel indiscriminate mass surveillance of entire areas and communities.
More than 30 civil society organisations – including Big Brother Watch, Amnesty, Open Rights Group, Inquest, Public Law Project and Statewatch – also signed an open letter in March 2025 raising concerns about how the Data Use and Access Bill, which is now an Act, will remove safeguards against the use of automated decision-making by police.
“Currently, sections 49 and 50 of the Data Protection Act 2018 prohibit solely automated decisions from being made in the law enforcement context unless the decision is required or authorised by law,” they wrote in the letter, adding that the new Clause 80 would reverse this safeguard by permitting solely automated decision-making in all scenarios where special category data isn’t being used.
“In practice, this means that automated decisions about people could be made in the law enforcement context on the basis of their socioeconomic status, regional or postcode data, inferred emotions, or even regional accents. This greatly expands the possibilities for bias, discrimination, and lack of transparency.”
The groups added that non-special category data can be used as a “proxy” for protected characteristics, giving the example of how postcodes can potentially be used to infer someone’s race.
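As a rough, hypothetical illustration of that proxy effect (synthetic data only, not drawn from any real dataset), the snippet below shows how a rule that never touches a protected characteristic can still disproportionately flag one group, simply because postcode district is correlated with group membership.

```python
import random

# Hypothetical illustration of the "proxy" point above: synthetic population
# in which postcode district is correlated with a protected characteristic.
random.seed(1)

population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Assumed residential segregation: group B is concentrated in district X.
    district = random.choices(
        ["X", "Y"], weights=[8, 2] if group == "B" else [2, 8]
    )[0]
    population.append((group, district))

# A postcode-only "risk" rule: flag everyone living in district X.
flag_rate = {
    g: sum(1 for grp, d in population if grp == g and d == "X")
       / sum(1 for grp, _ in population if grp == g)
    for g in ("A", "B")
}
print(flag_rate)  # group B is flagged roughly four times as often as group A
```

In this sketch the rule uses only location, yet its outcomes track the protected characteristic almost exactly, which is the concern the signatories raise about non-special category data.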
They also highlighted how, according to the government’s own impact assessment for the law, “those with protected characteristics such as race, gender and age, are more likely to face discrimination from ADM due to historical biases in datasets”.
The letter was also signed by a number of academics, including Brent Mittelstadt and Sandra Wachter from the Oxford Internet Institute, and social anthropologist Toyin Agbetu from University College London.
A separate amendment (NC22) introduced by Berry attempts to alleviate these data issues by introducing new safeguards for automated decisions in a law enforcement context, which would include providing meaningful redress, greater transparency around police use of algorithms, and ensuring that people can request human involvement in any police decisions about them.
In April 2025, Statewatch also separately called for the Ministry of Justice (MoJ) to halt its development of crime prediction tools, after obtaining documents via a Freedom of Information (FoI) campaign that revealed that the department is already using one flawed algorithm to “predict” people’s risk of reoffending, and is actively developing another system to “predict” who will commit murder.
“The Ministry of Justice’s attempt to build this murder prediction system is the latest chilling and dystopian example of the government’s intent to develop so-called crime ‘prediction’ systems,” said Statewatch researcher Sofia Lyall.
“Like other systems of its kind, it will code in bias towards racialised and low-income communities. Building an automated tool to profile people as violent criminals is deeply wrong, and using such sensitive data on mental health, addiction and disability is highly intrusive and alarming.”
She added: “Instead of throwing money towards developing dodgy and racist AI and algorithms, the government must invest in genuinely supportive welfare services. Making welfare cuts while investing in techno-solutionist ‘quick fixes’ will only further undermine people’s safety and well-being.”
Prior to this, a coalition of civil society groups called on the then-incoming Labour government in July 2024 to place an outright ban on both predictive policing and biometric surveillance in the UK, on the basis they are disproportionately used to target racialised, working class and migrant communities.
A March 2022 House of Lords inquiry into the use of advanced algorithmic technologies by UK police also identified major concerns around the use of crime prediction systems, highlighting their tendency to produce a “vicious circle” and “entrench pre-existing patterns of discrimination” because they direct police patrols to low-income, already over-policed areas based on historic arrest data.
Lords found that, generally, UK police are deploying algorithmic technologies – including AI and facial recognition – without a thorough examination of their efficacy or outcomes, and are essentially “making it up as they go along”.