Worker-protection laws aren’t ready for an automated future
This article is republished from The Conversation under a Creative Commons license.
August 28, 2019 1.58pm BST
Science fiction has long imagined a future in which humans constantly interact with robots and intelligent machines. This future is already happening in warehouses and manufacturing businesses. Other workers use virtual or augmented reality as part of their job training, to assist them in performing their work, or to interact with clients. And lots of workers are under automated surveillance from their employers.
All that automation yields data that can be used to analyze workers’ performance. Those analyses, whether done by humans or software programs, may affect who is hired, fired, promoted and given raises. Some artificial intelligence programs can mine and manipulate the data to predict future actions, such as who is likely to quit their job, or to diagnose medical conditions.
If your job doesn’t currently involve these types of technologies, it likely will in the very near future. This worries me – a labor and employment law scholar who researches the role of technology in the workplace – because unless significant changes are made to American workplace laws, these sorts of surveillance and privacy invasions will be perfectly legal.
New technology disrupting old workplace laws
The United States’ regulation of the workplace has long been an outlier among much of the world. Especially for private, nonunionized workers, the U.S. largely allows companies and workers to figure out the terms and conditions of work on their own.
In general, for all but the most in-demand workers or those at the highest corporate levels, the lack of regulation means companies can behave however they want – although they are subject to laws preventing discrimination, setting minimum wages, requiring overtime pay and ensuring worker safety.
But most of those laws are decades old and are rarely updated. They certainly haven’t kept up with technological advances, the increase in temporary or “gig” work and other changes in the economy. Faced with these new challenges, the old laws leave many workers without adequate protections against workplace abuses, or exclude some workers from any protection at all. For instance, two Trump administration agencies have recently declared that Uber drivers are not employees, and therefore not entitled to minimum wage, overtime or the right to engage in collective action such as joining a union.
Emerging technologies like artificial intelligence, robotics, virtual reality and advanced monitoring systems have already begun altering workplaces in fundamental ways that may soon become impossible to ignore. That progress highlights the need for meaningful changes to employment laws.
Consider Uber drivers
Like other companies in what has been called the “gig economy,” Uber has spent considerable amounts of money and time litigating and lobbying to protect regulations classifying its drivers as independent contractors, rather than employees. Uber set its fifth annual federal lobbying record in 2018, spending US$2.3 million on issues including keeping its drivers from being classified as employees.
The distinction is a crucial one. Uber does not have to pay employment taxes – or unemployment insurance premiums – on independent contractors. In addition, nonemployees are completely excluded from any workplace protection laws. These workers are not entitled to a minimum wage or overtime; they can be discriminated against based on their race, sex, religion, color, national origin, age, disability and military status; they lack the right to unionize; and they are not entitled to a safe working environment.
Companies have tried to classify workers as independent contractors for as long as there have been workplace laws, but technology has greatly expanded companies’ ability to structure work in ways that blur the line between employee and independent contractor.
Employees aren’t protected, either
Even for workers who are considered employees, technology allows employers to take advantage of the gaps in workplace laws like never before. Many workers already use computers, smartphones and other equipment that allows employers to monitor their activity and location, even when off duty.
And emerging technology permits far greater privacy intrusions. For instance, some employers already have badges that track and monitor workers’ movements and conversations. Japanese employers use technology to monitor workers’ eyelid movements and lower the room temperature if the system identifies signs of drowsiness.
Another company implanted RFID chips into the arms of employee “volunteers.” The purpose was to make it easier for workers to open doors, log in to their computers, and purchase items from a break room, but a person with an RFID implant can be tracked 24 hours a day. RFID chips are also susceptible to unauthorized access, or “skimming,” by thieves who simply get physically close to the chip.
No privacy protections for workers
The monitoring that’s possible now will seem simplistic compared to what’s coming: a future in which robotics and other technologies capture huge amounts of personal information to feed artificial intelligence software that learns which metrics are associated with things such as workers’ moods and energy levels, or even diseases like depression.
One health care analytics firm, whose clients include some of the biggest employers in the country, already uses workers’ internet search histories and medical insurance claims to predict who is at risk of getting diabetes or considering becoming pregnant. The company says it provides only summary information to clients, such as the number of women in a workplace who are trying to have children, but in most instances it could probably legally identify specific workers.
Except for some narrow exceptions – like in bathrooms and other specific areas where workers can expect to be in relative privacy – private-sector employees have virtually no way, nor any legal right, to opt out of this sort of monitoring. They may not even be informed that it is occurring. Public-sector employees have more protection, thanks to the Fourth Amendment’s prohibition against unreasonable searches, but in government workplaces the scope of that prohibition is quite narrow.
In contrast to the almost total lack of privacy laws protecting workers, employment discrimination laws – while far from perfect – can provide some important protections for employees. But those laws have already faced criticism for their overly simplistic and limited view of what constitutes discrimination, which makes it very difficult for victims to file and win lawsuits or obtain meaningful settlements. Emerging technology, particularly AI, will exacerbate this problem.
AI software programs used in the hiring process are marketed as eliminating or reducing biased human decision-making. In fact, they can create more bias, because these systems depend on large collections of data, which can be biased themselves.
For instance, Amazon recently abandoned a multi-year project to develop an AI hiring program because it kept discriminating against women. Apparently, the AI program learned from Amazon’s male-dominated workforce that being a man was associated with being a good worker. To its credit, Amazon never used the program for actual hiring decisions, but what about employers who lack the resources, knowledge or desire to identify biased AI?
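The dynamic behind the Amazon story, in which a model trained on skewed historical decisions reproduces that skew, can be sketched in a few lines. Everything below is invented for illustration: the data, the feature names and the deliberately simple scoring rule. Real hiring systems are far more complex, but the failure mode is the same.

```python
# Toy sketch (not any real hiring system): a scorer that learns how often
# each resume feature appeared on hired candidates, then averages those
# rates to score new applicants. All data here is fabricated.

from collections import defaultdict

def train(history):
    """Learn the historical hire rate for each resume feature."""
    hires = defaultdict(int)
    totals = defaultdict(int)
    for features, hired in history:
        for f in features:
            totals[f] += 1
            hires[f] += hired
    return {f: hires[f] / totals[f] for f in totals}

def score(weights, features):
    """Score a candidate as the average hire rate of their features."""
    known = [weights[f] for f in features if f in weights]
    return sum(known) / len(known) if known else 0.0

# Skewed history: candidates have identical qualifications, but past
# hiring favored men, so the gendered proxy feature "womens_club"
# appears mostly on rejected resumes.
history = [
    (["python", "degree"], 1),
    (["python", "degree"], 1),
    (["python", "degree", "womens_club"], 0),
    (["python", "degree", "womens_club"], 0),
    (["python", "degree", "womens_club"], 1),
]

weights = train(history)
candidate_a = score(weights, ["python", "degree"])
candidate_b = score(weights, ["python", "degree", "womens_club"])
# candidate_b has the same qualifications as candidate_a, yet scores
# lower: the model has learned the bias in its training data.
```

No one programmed this scorer to discriminate; the bias enters entirely through the historical data, which is why an employer without the resources or expertise to audit its training data may never notice the problem.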
Just as other technologies stretch employment laws and regulations well beyond the situations they were written for, the law on discrimination driven by computer algorithms remains unclear. Without an update to the rules, more workers will continue to fall outside traditional worker protections – and may even be unaware how vulnerable they really are.