Algorithmic Management Systems

Digital technologies have dramatically expanded employers’ ability to monitor and manage their workforces—often without workers’ knowledge. Algorithmic management systems can be used to control worker schedules and the pace of work, track worker activity and movements, predict worker behavior, discriminate against workers, and use their likeness without permission. 

“Across the country, employers are increasingly using data and algorithms in ways that stand to have profound consequences for wages, working conditions, race and gender equity, and worker power,” say researchers at the UC Berkeley Labor Center. “How employers use these digital technologies is not always obvious or even visible to workers or policymakers.”

A critical element of algorithmic management systems is the monitoring and surveillance of workers in violation of their human rights. Workplace surveillance systems collect data about worker activities through a wide variety of means, including handheld devices, point-of-sale systems, mobile phones, fingerprint scanners, fitness apps, wellness apps, smart cameras, microphones, and body sensors, according to Aiha Nguyen, who explores the future of labor for Data & Society, a nonprofit research organization. Nguyen says algorithmic management systems enable “work speedups, employment insecurity and instability, a shift of risks and costs from employers to workers, and an intensification of racial profiling and bias.”

A 2022 New York Times investigation found that eight of the ten largest private U.S. employers track the productivity metrics of individual workers, many in real time. “In lower-paying jobs, the monitoring is already ubiquitous: not just at Amazon, where the second-by-second measurements became notorious, but also for Kroger cashiers, UPS drivers, and millions of others,” the Times reported.


Amazon’s algorithmic management systems, for example, monitor warehouse workers’ “time off task” (TOT), including bathroom breaks; the company’s TOT guidelines reportedly call for each manager to use a tracking tool every shift to identify a “top offender”—the worker who accumulated the most time off task that shift, as calculated from inactivity on their item scanner. A recent report by the University of Illinois Chicago’s Center for Urban Economic Development found that 69% of surveyed Amazon workers said they had taken unpaid time off in the past month because of pain or exhaustion from the job, and 34% had done so three or more times.
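The scanner-based logic described above can be sketched in a few lines. This is purely illustrative: the function names, the inactivity threshold, and the scoring rule are assumptions for the sketch, not Amazon’s actual implementation, which is not public.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a "time off task" metric: any gap between
# consecutive scanner events longer than a threshold counts as time off task.
# The 5-minute threshold is an assumption, not a documented value.
def time_off_task(scan_times, gap_threshold=timedelta(minutes=5)):
    """Sum the gaps between consecutive scans that exceed the threshold."""
    total = timedelta()
    for prev, curr in zip(scan_times, scan_times[1:]):
        gap = curr - prev
        if gap > gap_threshold:
            total += gap
    return total

def top_offender(tot_by_worker):
    """Return the worker ID with the highest accumulated time off task."""
    return max(tot_by_worker, key=tot_by_worker.get)
```

A system like this would roll the per-worker totals up once per shift and surface the maximum to a manager; note that, as critics point out, nothing in such a metric distinguishes a bathroom break from idleness.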

Call centers are well known for their use of electronic monitoring: one survey of call center agents found that 70% felt monitoring was used primarily for disciplinary purposes. Office workers, too, are now often subjected to monitoring software that can record video of their screens as they work and activate a computer’s webcam to photograph the employee every 10 minutes. Coworker.org, a worker advocacy organization, has compiled data on hundreds of tech products and services that “are collecting and aggregating data about workers at almost every step of the labor process.”

Employees perceive reduced job satisfaction and increased stress when monitored. On an organizational level, it is likely that there is no gain in employees’ performance but increased deviant behavior. These results question currently existing justifications for the use of electronic monitoring.
— a study by Siegel, König, and Lazar, 2022

“While workers may sign consent forms allowing employers to collect data, they have effectively relinquished control over what data is collected and how it is being used,” writes Data & Society’s Nguyen. “The data drives algorithmic management systems that have become a standard part of employment in almost all sectors of the U.S. economy.”

A report by the AI Now Institute concludes that the “deleterious effects” of surveillance tech far outweigh employer justifications for deploying it, noting that “algorithmic management ratchets up the devaluation of work, unequally distributes risks and privileges, threatens protected worker-led collective action, and leads to the destruction of individual and worker privacy.”

In fact, algorithmic management systems may work against management’s own objectives. Academic researchers, writing in 2024 in the Harvard Business Review, found that algorithmically managed employees were less inclined to help or support colleagues: they offered roughly 20% less advice to their peers than workers managed by a person, and the advice they did offer was lower in quality. A separate team of researchers conducted a meta-analysis of results from 70 academic studies and concluded that electronic monitoring harms employees and organizations alike. “Employees perceive reduced job satisfaction and increased stress when monitored,” the study found. “On an organizational level, it is likely that there is no gain in employees’ performance but increased deviant behavior. These results question currently existing justifications for the use of electronic monitoring.”

Workers have been left with little recourse; the current public discourse about data privacy assumes that the responsibility lies with the individuals themselves, without recognizing the ways that informed consent is difficult to achieve.
— Aiha Nguyen | Data & Society

Many U.S. jurisdictions have adopted or are weighing legislation that could affect worker surveillance. In May 2024, Colorado’s governor signed what is said to be the first comprehensive U.S. legislation targeting “high-risk artificial intelligence systems.” The law requires that both developers and entities that deploy high-risk AI systems use reasonable care to prevent algorithmic discrimination. A coalition of more than two dozen organizations recently announced support for a proposed New York State law that would protect workers from electronic surveillance, automated management systems, and automated employment decision tools (AEDTs). The proposed legislation would:

  • Limit the circumstances under which employers can use technology to surveil workers and collect their data;

  • Require companies to provide meaningful notice and conduct independent impact assessments and validation studies when deploying automated management systems or AEDTs; and

  • Give workers greater input in the use of data-driven technologies in the workplace and labor market.

Advocates say more protections are clearly needed, with almost all calling for bans or severe limitations on worker surveillance. Meaningful worker consent to monitoring is not possible, they argue, given the power imbalance between an employer or platform and its workers. “Workers have been left with little recourse; the current public discourse about data privacy assumes that the responsibility lies with the individuals themselves, without recognizing the ways that informed consent is difficult to achieve,” says Data & Society’s Nguyen. “Surveillance itself is hard for workers to discern, meaning that they may not have the ability to challenge the surveillance itself nor algorithmically derived employment decisions, like discipline or firing.”

The AI Now Institute suggests there is “a clear case for bright-line rules that restrain the use of these tools altogether and at minimum create no-go zones around the most invasive forms of surveillance. Such a policy regime could help even out the power imbalances between workers, employers, and the companies that sell these tools.”