Labour MP Mick Whitley has introduced a bill to regulate the use of artificial intelligence (AI) in the workplace, with the goal of creating “a people-focused and rights-based approach” to ensure all workers are better protected against deployments of the technology.
Introduced by Whitley to Parliament on 17 May 2023 using the 10-minute motion rule – which allows backbench MPs to propose and make their case for new pieces of legislation – the bill’s provisions are rooted in three assumptions: that everyone should be free from discrimination at work; that workers should have a say in decisions affecting them; and that people have a right to know how their workplace is using the data it collects about them.
Although 10-minute rule motions rarely become law, they are often used as a mechanism to generate debate on an issue and test opinion in Parliament. As Whitley’s bill received no objections, it has been listed for a second reading on 24 November 2023.
Whitley’s introduction of a worker-focused AI bill closely follows the UK government’s publication of its AI whitepaper, which outlines its regulatory proposals for creating an agile, “pro-innovation” framework around the technology.
While these proposals were generally welcomed by industry, both civil society groups and unions were less enthusiastic. The Trades Union Congress (TUC), for example, argued at the time that it only offers a series of “vague” and “flimsy” commitments.
“For too long, the rapid advances in artificial intelligence have gone unremarked upon by policymakers, but the speed of progress in this field is now gaining such momentum that it is impossible to ignore,” Whitley told the House of Commons.
“If we are going to make sure that AI works in all our interests, we need to see genuine collaboration between government and civic society, including the trade unions and the communities that we represent, and the fostering of an environment in which everyone’s voices and interests can be heard.
“The central purpose of the bill is simple: it seeks to protect the rights of those who are working alongside AI in their shops, offices, factories and services, and to preserve those rights for future generations to come. Fundamentally, it is about recognising the importance of people in a world increasingly run by machines.”
Key provisions
Building on this foundation, Whitley said key provisions of his bill include the introduction of a statutory duty for employers to meaningfully consult with employees and their trade unions before introducing AI into the workplace, and the strengthening of existing equalities law to prevent algorithmically induced discrimination.
This would include amending the Employment Rights Act 1996 to create a statutory right, enforceable in employment tribunals, so that workers are not subject to automated decisions based on inaccurate data, and reversing the burden of proof in discrimination claims, so that employers must establish that their AI did not discriminate.
Whitley added that the bill would also make equality impact audits a mandatory part of the data protection impact assessment (which employers would then be obliged to publish), and establish a universal and comprehensive right to human review of “high-risk” decisions made by AI, as well as a right to human contact throughout that decision-making process.
Regarding privacy, Whitley further added that “it would protect workers from intrusion into their private lives” through the creation of a formal “right to disconnect”, and require the government to publish statutory guidance for employers on how they can protect the privacy and work-life balances of their employees.
Elsewhere in his address to the House of Commons, Whitley said AI will force a reckoning with long-held assumptions about the labour market, and stressed the need to prepare for the breaking of old orthodoxies.
“That must mean considering the role that universal basic income has to play in a labour market that will see jobs becoming scarcer, as well as the necessity of investing in lifelong education and training in a world where few people can count on having a job for life,” he said.
TUC perspective
Throughout his speech, Whitley also pointed to the efforts of the TUC’s AI working group, which has published a number of reports on AI in the workplace.
This includes a March 2021 report titled Technology managing people: The worker experience, which warned of gaps in British law over use of AI at work; and a manifesto from the same month titled Dignity at work and the AI revolution, which outlines a similar set of principles to those described by Whitley.
Speaking with Computer Weekly, Mary Towers, a policy officer at the TUC, said that while Whitley’s bill is still in the process of being drafted, “we’re definitely 100% supportive of the principles outlined” by the MP.
“These aren’t future work issues, these are not just now issues, but issues that have been building up in the workplace for several years. There is now real urgency to doing something,” she said, adding the TUC’s manifesto proposals are “pragmatic ideas for what we could do really quickly, that would make things a lot better”.
Towers noted that while “we’re by no means arguing our proposals are the endgame”, seeing them adopted into law would be a massive positive step forward.
She further said that while there is a high degree of variation in how workers experience AI – depending, for example, on the context of their roles, the tasks they have to perform, and whether they are in blue-collar or white-collar jobs – there are a number of common trends that need to be addressed.
“I would say the key implications are work intensification, so the use of technology to set unrealistic productivity targets; negative impacts on health and well-being, which has various different roots – one of which is the work intensification issue; and intensive monitoring and surveillance, which tends to give rise to a particular type of stress,” she said, adding this has also led to a blurring of work-home boundaries for those able to do their jobs remotely.
Towers added that the technology is also entrenching existing inequality in society, which is “reflected through various different discriminatory decision-making patterns at work”, as well as leading to “unfair” outcomes where it fails to take proper context into account.
“An example that came out in our research is where someone might be being judged for the quality of their driving, and they might be downgraded for a high rev or a sharp turn, but that might actually be demanded by the landscape,” she said. “That’s one example of a more general type of unfairness. It’s not racism, discrimination, but it’s unfairness.”
Given this context, Towers said that Whitley’s proposals around the need for meaningful consultation with workers and unions are particularly important, and that workers should be present in all stages of the value chain, including throughout its development, application and use: “Without a range of different voices being represented at the different points of the AI value chain, then it is inevitable that only one set of interests will be served by the technology.”
However, for this to be successful, Towers said “the appropriate apparatus of active consultation” needs to be put in place.
This could include, for example, algorithm or technology committees made up of workers and resourced by the employer. In Germany, works councils have the right to a technical expert, paid for by the employer, who can advise the group on how it can involve itself in the different stages of technology development and application.
Towers added that while any work committees established will need people sitting on them who can “contribute effectively” to make them worthwhile (which necessitates proper funding and training), certain tech-related rights also need to be written into collective bargaining agreements.
She said this should include a right to a trial or review that can be triggered when a technology starts being used beyond its originally stated purpose.
“That’s a really key piece of feedback from our affiliates – that often a piece of technology will be implemented for one reason, but then it’s got certain capabilities that go beyond the originally agreed purpose, and later on down the line an AI-powered tool is then used for other reasons,” she said, adding that consultation needs to be a “continuous process” that goes beyond a technology’s initial implementation, and must include “the right to say no”.
Ultimately, Towers said unions and workers would need to address power imbalances between employers and employees over data, which she added could be done by giving workers equal rights over the data collected about them, so that they can collectivise its use.
“Trade unions could then, for example, develop AI-powered tools themselves that could then be used to analyse data and potentially pick up on the unfair operation of algorithms at work,” she said. “Unions would then be able to pick up if there are discriminatory patterns at work, or whether for example there are problems with equal pay or the gender pay gap.”
Whitley’s bill comes amid sustained scrutiny of the UK’s approach to AI governance, which is also being looked at by a Parliamentary inquiry launched by the House of Commons Science and Technology Committee in October 2022.
A separate Parliamentary inquiry into AI-powered workplace surveillance conducted by the All-Party Parliamentary Group (APPG) for the Future of Work previously found in November 2021 that AI was being used to monitor and control workers with little accountability or transparency, and called for the creation of an Accountability for Algorithms Act.