UK workers exposed to risks of AI revolution, warns TUC
The UK government is failing to protect workers against the rapid adoption of artificial intelligence systems that will increasingly determine hiring and firing, pay and promotion, the Trades Union Congress warned on Tuesday.
Rapid advances in “generative” AI systems such as ChatGPT, a program that can create content indistinguishable from human output, have fuelled concern over the potential impact of new technology in the workplace.
But the TUC, a union umbrella body that serves as the voice of the UK’s labour movement, said AI-powered technologies were already widely used to make life-changing decisions across the economy.
Recent high-profile cases include an Amsterdam court’s ruling over the “robo-firing” of ride-hailing drivers for Uber and Ola Cabs, and a controversy in the UK over Royal Mail’s tracking of postal workers’ productivity.
The TUC said AI systems were also widely used in recruitment, for example to draw conclusions from candidates’ facial expressions and tone of voice in video interviews.
It had also encountered teachers concerned that they were being monitored by systems originally introduced to track students’ performance. Meanwhile, call-centre workers reported that certain colleagues were routinely allocated the calls that AI programs judged more likely to lead to a good outcome, and so to attract a bonus.
“These technologies are often spoken about as the future of work. We have a whole body of evidence to show it’s widespread across employment relationships. These are existing urgent problems in the workplace and they have been for some time,” said Mary Towers, a policy officer at the TUC.
The rise of generative AI had “brought renewed urgency to the need for legislation”, she added.
The TUC argues that the government is failing to put in place the “guard rails” needed to protect workers as the adoption of AI-powered technologies spreads.
It described as “vague and flimsy” a government white paper published last month, which set out principles for existing regulators to consider in monitoring the use of AI in their sectors, but did not propose any new legislation or funding to help regulators implement these principles.
The UK’s approach, to “avoid heavy-handed legislation which could stifle innovation”, is in sharp contrast to that of the EU, which is drawing up a sweeping set of regulations that could soon represent the world’s most restrictive regime on the development of AI.
The TUC also said the government’s Data Protection and Digital Information Bill, which reached its second reading in parliament on Monday, would dilute important existing protections for workers.
One of the bill’s provisions would narrow current restrictions on the use of automated decision-making without meaningful human involvement, while another could limit the need for employers to give workers a say in the introduction of new technologies through an impact assessment process, the TUC said.
“On the one hand, ministers are refusing to properly regulate AI. And on the other hand, they are watering down important protections,” said Kate Bell, TUC assistant general secretary.
Robin Allen KC, a lawyer who in 2021 led a report on AI and employment rights commissioned by the TUC, said the need was urgent for “more money, more expertise, more cross-regulatory working, more urgent interventions, more control of AI”. Without these, he added, “the whole idea of any rights at work will become illusory”.
But a government spokesperson said, “This assessment is wrong,” arguing that AI was “set to drive growth and create new highly paid jobs throughout the UK, while allowing us to carry out our existing jobs more efficiently and safely”.
The government was “working with businesses and regulators to ensure AI is used safely and responsibly in business settings” and the Data Protection and Digital Information Bill included “strong safeguards” employers would be required to implement, the spokesperson added.