Digital Tech and Workers: The Case for Shareholder Engagement

As digital technologies proliferate, investors should actively engage with their portfolio companies to ensure the responsible development and deployment of technology with respect to workers. Companies that ignore the impact of these technologies on basic rights for workers — such as the right to privacy and non-discrimination, a living wage, and the right to form a union — expose themselves to significant legal, financial, and reputational risk. They may also contribute to systemic risks that undermine the broader health of the economy.

Among the concerns:

  • AI and job displacement. By 2030, activities that account for up to 30 percent of hours currently worked across the U.S. economy could be automated—up from 21 percent before generative AI burst onto the scene, according to McKinsey. Kristalina Georgieva, managing director of the International Monetary Fund, predicts, “AI will likely worsen overall inequality, a troubling trend that policymakers must proactively address to prevent the technology from further stoking social tensions.”

  • Algorithmic management systems that rely on near-constant worker surveillance often harm workers’ pay, safety, and privacy, and can inhibit workers’ ability to organize. A New York Times investigation found that eight of the 10 largest private U.S. employers track the productivity metrics of individual workers, many in real time.

  • The current “gig economy,” in which tech platforms like Uber and Lyft treat workers as independent contractors rather than employees, generally paying low wages, in many instances below the minimum wage, while offering no sick time, no freedom to organize, and no other employee protections. A recent survey by the Economic Policy Institute found that one in five gig workers went hungry in the month prior to the survey because they could not afford enough to eat. Thirty percent of gig workers had used SNAP, a federal food benefits program for low-income families, within a month of the survey.

  • The exploitation of data workers. Many digital technologies, especially AI-driven products, require vast amounts of data to function properly. More often than not, that data is generated by a huge and largely invisible global labor force of millions of workers — sometimes referred to as “ghost workers” — who toil at highly stressful, low-wage, precarious jobs labeling data or moderating unseemly or harmful content. Many in less developed economies earn as little as $2 per hour.

  • Automated hiring tools driven by algorithms have been shown to often reproduce, and sometimes amplify, the very biases and human errors they are supposed to eliminate. According to the U.S. Equal Employment Opportunity Commission, an estimated 83 percent of all U.S. employers and virtually all Fortune 500 companies use automated hiring tools.

  • The “direct care” workforce, composed of home health care aides, personal care aides, and nursing assistants, is one of the fastest growing work sectors in the U.S. and provides a case study of how digital platforms are applied to new sectors of the workforce. An array of new technology-driven apps are increasingly being used to monitor and manage these workers and their clients, exposing sensitive personal information while in some cases lowering the standard of care.

Addressing these issues will require focused attention at multiple levels:

COMPANIES

Analysts say a company’s bottom line and stock performance are directly related to how well it manages “human capital,” the collective knowledge, attributes, skills, experience, and health of its workforce. The Human Capital Management Coalition — which includes 36 institutional investors representing over $10 trillion in assets — points to a growing body of research showing that “companies with effective human capital management perform better than those that manage their human capital poorly.” Schroders, a global asset manager, and CalPERS, the largest U.S. pension fund, in 2023 published joint research that indicates that human capital “can act as a clear driver of company productivity and profitability and that companies with durable management frameworks create stronger returns and value for investors.” 

Companies will need to adopt and disclose policy commitments to respecting human rights, including worker rights, as well as policies on developing and deploying digital technologies, including AI-powered tools, in a manner consistent with international human rights standards, including international labor standards. Human rights due diligence responsibilities are increasingly becoming mandatory for companies. In May 2024, the EU passed the Corporate Sustainability Due Diligence Directive, which requires big businesses, including U.S.-based companies doing business in Europe, to identify and address negative human rights and environmental impacts in their operations, including labor rights; non-compliance could result in fines of up to five percent of a company’s worldwide revenue.

Diagram courtesy of UN B-Tech

Experts interviewed for this report emphasized that increased disclosure by the companies developing and deploying these technologies about how they operate — and the implications for workers — will be critical for both policymakers and investors; requesting this data should be a priority for investor engagement. Researchers led by Annette Bernhardt at the UC Berkeley Labor Center write: “The biggest obstacle to regulating data-driven technologies is that their use is largely hidden from both policymakers and workers. Without disclosure, job applicants won’t know why a hiring algorithm rejected their resume; truck drivers won’t know when they are being tracked by GPS; and workers won’t realize their health plan data is being sold.” Recently, Erik Gerding, Director of the U.S. Securities and Exchange Commission (SEC) Division of Corporation Finance, said that artificial intelligence will be a new disclosure priority.

WORKER VOICE

Addressing potential harms from digital technologies will require that workers be incorporated into their design, creation, and implementation. A multi-disciplinary team of MIT researchers recently recommended that worker voices be included in four phases of AI development:

  1. defining the problems and opportunities to be addressed;

  2. designing the technical and work process features that need to be integrated;

  3. educating and training the workforce in the skills needed; and

  4. ensuring a fair transition and compensation for those whose jobs are affected.

Unions are already engaging on these issues. The Communications Workers of America (CWA), for example, has developed principles regarding AI which hold that its members and leadership “will not accept that the effects of AI systems are inevitable or pre-determined.” In addition to addressing potential risks to workers, the union says it will bargain to capture a “fair share” of the economic gains from AI, “ensuring that working families see a rising standard of living and that these technologies do not contribute to the growth of economic inequality in this country.”

PUBLIC POLICY

It will be important for investors to make themselves heard globally, at the EU level and the U.S. federal level as well as in individual countries and local U.S. jurisdictions, advocating for human rights-based industry practices and helping to set standards that promote long-term, sustainable, and equitable economic solutions. Investors can play a key role in shaping legislation and regulation that fosters responsible policies and practices. “Policies that reduce AI’s disruptive effects on labor markets are the same ones that encourage efficient and responsible AI investment,” says a report by the U.S. Council of Economic Advisers. “Encouraging innovation, reducing regulatory uncertainty, and supporting needed human capital investment are all important goals of AI policy.”

Further, investors should press companies to be transparent about their lobbying activities and expenditures and to address potential misalignment between their publicly stated policies on protecting human rights and worker rights and their actual spending. The number of groups lobbying the U.S. federal government on AI reportedly nearly tripled in 2023, with major increases in lobbying also reported at the state level. Corporate lobbying on digital issues has also increased significantly in the EU in recent years, with multiple reports of companies seeking to “water down” EU legislation.