Lord introduces bill to regulate public sector AI and automation


Liberal Democrat peer Lord Clement-Jones has introduced a private members’ bill to regulate the use of artificial intelligence (AI), algorithms and automated decision-making technologies by public authorities, citing the need to avoid another Post Office scandal.

Under the proposals – brought by Lord Clement-Jones on 9 September – if “computer says no” to a benefit decision, immigration decision or similar, a citizen would have a right to access the information on why that happened so they have the opportunity to challenge it.

In instances where a citizen does choose to challenge an automated decision made about them, Clement-Jones’ Public Authority Algorithmic and Automated Decision-Making Systems Bill would also oblige the government to provide an independent dispute resolution service.

Public authorities in general would also be obliged to publish impact assessments of any algorithms or AI systems that influence decision-making processes (including a mandatory bias assessment to ensure compliance with the Equality Act and Human Rights Act), as well as maintain a transparency register to provide greater public information about how each system is being used.

“The Post Office/Horizon scandal demonstrates the painful human cost when there aren’t proper checks in place to challenge these automated systems. Right now, there are no legal obligations on public authorities to be transparent about when and how they use these algorithms. I urge the government to support these changes,” said Lord Clement-Jones.

“Too often in the UK we legislate when the damage has already been done. We need to be proactive, not reactive, when it comes to protecting citizens and their interactions with new technologies. We need to be ahead of the game when it comes to regulating AI. We simply cannot risk another Horizon scandal.”

To increase the legibility of automated decisions, the Bill includes provisions to ensure that any systems deployed by public authorities are designed with automatic logging capabilities, so that their operation can be continuously monitored and interrogated.

It also contains further provisions prohibiting the procurement of systems that are incapable of being scrutinised, including where public authorities' monitoring efforts are hampered by contractual or technical measures, or by the intellectual property interests of suppliers.

Other Parliamentarians have previously brought similar AI-related private members' bills, including Lord Christopher Holmes, who introduced the Artificial Intelligence (Regulation) Bill in November 2023 on the basis that the then-government's "wait and see" approach to AI legislation would do more harm than good; and backbench Labour MP Mick Whitley, who introduced his worker-centric AI bill in May 2023 to deal with harmful uses of AI in the workplace.

In April 2024, the Trades Union Congress (TUC) also published a “ready-to-go” law for regulating AI in the workplace, which set out a range of new legal rights and protections to manage the adverse effects of automated decision-making on workers.

Since the previous Conservative government published its AI whitepaper in March 2023, there has been significant debate over whether the “agile, pro-innovation” framework it outlined for regulating AI technology is the right approach.

Under those proposals, the government would have relied on existing regulators to create tailored, context-specific rules that suit the ways the technology is being used in the sectors they scrutinise.

Following the whitepaper’s release, the government extensively promoted the need for AI safety on the basis that businesses will not adopt AI until they have confidence that the risks associated with the technology – from bias and discrimination to the impact on employment and justice outcomes – are being effectively mitigated.

While the previous government spent its final months doubling down on this overall approach in its formal response to the whitepaper consultation in January 2024, claiming it would not legislate on AI until the time was right, it later said in February 2024 that binding rules could be introduced down the line for the most high-risk AI systems.

Although the latest King’s Speech said the new Labour government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”, it set out no plans for AI-specific legislation.

The only mention of AI in the background briefing to the speech was as part of a Product Safety and Metrology Bill, which aims to respond “to new product risks and opportunities to enable the UK to keep pace with technological advances, such as AI”.

While private members’ bills rarely become law, they are often used as a mechanism to generate debates on important issues and test opinion in Parliament.
