Facial recognition endangers us all without a clear legal basis


On 27 January 2024, the Justice and Home Affairs Committee in the House of Lords became the latest voice to question the legality of police use of live facial recognition technology in England and Wales – and be ignored.

Ignored by the police in England and Wales, who continue to use a technology which “compares a live camera video feed of faces against a predetermined watchlist of people to find a possible match”.

Ignored by the government, which alone has the power to order a moratorium on this technology until Parliament passes legislation creating a clear foundation in law for its use and a legislative framework for its regulation.

And ignored by the public, whose privacy, civil liberties and human rights have been eroded for years as their government and police subject them to a technology considered so high-risk that the European Union (EU) has largely banned its use by police, as have several US cities, counties and states.

Under the EU AI Act, the use of biometric identification systems by law enforcement is prohibited “except in exhaustively listed and narrowly defined situations” such as searching for missing persons or preventing a terrorist attack, which would require a judge’s authorisation. Even the retrospective use of this technology by police requires a judge’s sign-off. The use of AI applications such as “untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases” is also banned.

All of which makes the British position baffling, embarrassing, and damaging.

Baffling, because when we hosted the world’s first AI Safety Summit six months ago, prime minister Rishi Sunak announced that he would not “rush to regulate” AI, asking: “How can we write laws that make sense for something we don’t yet fully understand?” His EU and US counterparts refrained from pointing out that they have managed it, easily.

Embarrassing, because our prime minister is either ignorant of the excellent and abundant research on the risks of live facial recognition technology, or chooses to ignore it.

For how else could he and his advisers be unaware that in October 2023 MPs and peers called for an “immediate stop” to live facial recognition surveillance? Or that in 2019 the Science and Technology Committee in the House of Commons called for a moratorium on facial recognition technology until Parliament passed new legislation? Or that Lord Clement-Jones in the House of Lords has sought a private member’s bill to the same end? Or that the independent review by Matthew Ryder KC and the Ada Lovelace Institute’s reports in 2022 both warned of the risks of facial recognition technology? Or that Ed Bridges and Liberty brought a legal challenge to South Wales Police’s use of facial recognition technology in 2020 – and won?

Damaging, because the UK operates a model of “policing by consent”. Yet the public has never consented to the use of live facial recognition technology. Our elected representatives have never voted on it. On the contrary, the police use this technology on us without our consent and often without our knowledge.

Baroness Hamwee, chair of the Justice and Home Affairs Committee in the House of Lords, is clear as to why this is unacceptable: “Current regulation is not sufficient. Oversight is inadequate […] We are an outlier as a democratic state in the speed at which we are applying this technology.”

To ignore all of this risks more than misidentification or “sleepwalking into some kind of ghastly, Orwellian, omniscient police state”, as former Metropolitan Police commissioner Cressida Dick warned in 2019. It undermines the public’s trust in the police – which is already fragile. Last year the Baroness Casey Review warned that the Met, one of the UK’s most aggressive users of live facial recognition technology, “no longer can presume that it has the permission of the people of London to police them” owing to its institutional racism, misogyny and homophobia.

Allowing such a police force to use a high-risk technology with no clear foundation in law and no legal framework to regulate it poses an unacceptable risk to the public. It also puts the police at risk of further legal challenges, which the taxpayer would have to fund.

Ignoring this is untenable. The government must bring forward legislation for Parliament to pass urgently. Failure to do so can mean only one thing – that it does not care about the risks that live facial recognition technology poses to the public or to the police, and that it is not serious about wanting to be a leader in artificial intelligence.


