TUC publishes legislative proposal to protect workers from AI

The Trades Union Congress (TUC) has published a “ready-to-go” law for regulating artificial intelligence (AI) in the workplace, setting out a range of new legal rights and protections to manage the adverse effects of automated decision-making on workers.

Applying a risk-based approach similar to the one taken by the European Union in its recently passed AI Act, the TUC’s Artificial Intelligence (Employment and Regulation) Bill is largely concerned with the use of AI for “high-risk” decision-making, which it defines as when a system produces “legal effects or other similarly significant effects”.

The TUC said AI is being used throughout the economy to make important decisions about people, including whether they get a job, how they do their work, where they do it, and whether they are rewarded, disciplined or made redundant.  

It added that the use of AI systems to algorithmically manage workers in this way is already having a “significant impact” on them, leading to discriminatory and unfair outcomes, a lack of control over data, loss of privacy and general work intensification.

“UK employment law is simply failing to keep pace with the rapid speed of technological change. We are losing the race to regulate AI in the workplace,” said TUC assistant general secretary Kate Bell.

“AI is already making life-changing calls in the workplace – including how people are hired, performance managed and fired. We urgently need to put new guardrails in place to protect workers from exploitation and discrimination. This should be a national priority.”

Adam Cantwell-Corn, head of campaigns and policy at campaign group Connected by Data, which was involved in drafting the Artificial Intelligence (Employment and Regulation) Bill, added: “In the debate on how to make AI safer, we need to get beyond woolly ideas and turn values and principles into actionable rights and responsibilities. The bill does exactly this and lays down a key marker for what comes next.”

Although the UK government now says binding rules could be introduced down the line for the most high-risk AI systems, it has so far been reluctant to create laws for AI, stating on multiple occasions that it will not legislate until the time is right.

Actionable rights and responsibilities

The bill focuses on providing protections and rights for workers, employees, jobseekers and trade unions, as well as obligations for employers and prospective employers. Its key provisions include requiring employers to carry out detailed Workplace AI Risk Assessments (WAIRAs) both pre- and post-deployment and to maintain registers of the AI decision-making systems they have in operation, and reversing the burden of proof in employment cases to make it easier to prove AI discrimination at work.

Under the WAIRA framework, the bill would also establish consultation processes with workers and a statutory right for trade unions to be consulted before any high-risk deployment, and would open up access to “black box” information about the systems in question, placing workers and unions in a better position to understand how those systems operate.

Other provisions include a complete ban on pseudo-scientific emotion recognition, regulatory sandboxes where new systems can be tested so AI development can continue in a safe environment, and a new audit defence that would allow employers to defend against discrimination claims if they meet rigorous auditing standards.

The bill would also grant a range of rights to workers, including the right to a personalised statement explaining how AI is making high-risk decisions about them, the right to human review of automated decisions, the right to disconnect, and a right for unions to be given the same data about workers that would be given to the AI system.

The TUC said these combined measures would go a long way to redressing the current imbalance of power over data at work. 

“Legal rules and strong regulation are urgently necessary to ensure the benefits of AI are fairly shared and its harms avoided,” said Robin Allen KC and Dee Masters from Cloisters in a joint statement. “Innumerable commentators have argued for the need to control AI at work, but before today none had previously done the heavy lifting necessary to draft the legislation.”

A multi-stakeholder, collaborative approach

While the text was drafted by the AI Law Consultancy at Cloisters Chambers with assistance from Cambridge University’s Minderoo Centre for Technology and Democracy, the bill itself was shaped by a special advisory committee set up by the TUC in September 2023.

With the committee filled by representatives from a diverse range of stakeholders – including the Ada Lovelace Institute, the Alan Turing Institute, Connected by Data, TechUK, the British Computer Society, United Tech and Allied Workers, GMB and cross-party MPs – the TUC stressed the importance of collaborative, multi-stakeholder approaches to AI policy development.

It added that while there is already a range of laws that apply to the use of technology at work – including the UK General Data Protection Regulation (GDPR), the Information and Consultation Regulations, various health and safety rules, and the European Convention on Human Rights (ECHR) – significant gaps remain in the current legal framework.

These include a lack of transparency and explainability, a lack of protection against discriminatory algorithms, an imbalance of power over data, and a lack of worker voice and consultation.

In May 2023, backbench Labour MP Mick Whitley introduced another worker-focused AI bill, which similarly focused on the need for meaningful consultation with workers about AI, mandatory impact assessments and audits, and the creation of a formal right to disconnect.

While that bill had its first reading the same month, the prorogation of Parliament in October 2023, ahead of the bill’s scheduled second reading in November, means it will make no further progress.

When Parliament returned, Conservative peer Lord Chris Holmes introduced a separate AI bill, which stressed the need for “meaningful, long-term public engagement about the opportunities and risks presented by AI”.

Speaking to Computer Weekly in March 2024, Holmes said the UK government’s “wait and see” approach to regulating AI is not good enough when real harms are happening right now.

“People are already on the wrong end of AI decisions in recruitment, in shortlisting, in higher education, and not only might people find themselves on the wrong end of an AI decision, oftentimes, they may well not even know that is the case,” he said.

Speaking at an event ahead of the United Nations’ (UN) AI for Good Global Summit, which takes place at the end of May 2024, International Telecommunication Union (ITU) secretary-general Doreen Bogdan-Martin said a major focus of the summit would be “moving from principles to implementation”.

She added that “standards are the cornerstone of AI”, but that these standards must be created collaboratively through multi-stakeholder platforms like the UN.
