Opportunities for Shareholder Engagement

One of the persistent themes emerging from interviews with tech and labor experts for this report was that “these are early days” in the unfolding scenarios regarding the impact of technology on low-wage workers. The comments weren’t meant to minimize the real and growing threats to worker rights and livelihoods but, rather, to highlight a need to better understand how tech companies are deploying technology in the workplace and, hopefully, develop strategies to bring about better outcomes. 

For shareholders seeking to use their investments to achieve meaningful rights for workers, successful strategies will require a renewed commitment to core principles — i.e., continuing to press for essential worker rights such as freedom of association and collective bargaining, paid sick leave, workplace health and safety, and a living wage. It is essential that investors and companies center this work on the voices and experiences of those most affected, including frontline and essential workers, working people of color, workers with disabilities, migrant workers, women, and LGBTQIA+ workers.

SHAREHOLDER DIALOGUE WITH COMPANIES

Investors should engage companies in their portfolios to better understand the companies’ policies and practices regarding tech in the workplace. Among the questions investors should ask during their corporate engagements:

  • What governance structures are in place to address potential risks of digital technology use in the workplace?

  • What policies are in place to protect human rights and worker rights throughout the company’s supply/value chain? 

  • How does the company work to prevent, mitigate, and remedy any harms that come from the development or deployment of technology in the workforce? 

  • What digital technologies are currently deployed or will be deployed in the company’s workplace? Where does the company disclose this information to its stakeholders?

  • Does the company disclose both direct and indirect corporate lobbying activities related to the development and deployment of digital technologies, including AI?

RECOMMENDED ENGAGEMENT QUESTIONS

Investors have a responsibility to engage companies in their portfolio on the potential risks to workers from the development and deployment of digital technologies, including AI, in the workplace. Companies have a responsibility to respect human rights and provide access to appropriate and effective remedies for victims of business-related human rights abuse.

If companies fail to adopt, implement, and disclose robust governance policies and controls, backed by strong human rights principles, they place their workforce at risk and may face operational, regulatory, and litigation risks, revenue loss, and reputational harm. 

Conversely, acting to address potential negative impacts can enhance worker satisfaction and retention, boost productivity, and avoid reputational damage. It aligns with the responsibility to respect international human rights and labor rights, and may provide a competitive advantage.

Investor engagements with portfolio companies should center on protecting workers from harm and ensuring respect of fundamental human rights, such as guarding against discrimination and invasion of privacy, and ensuring protection of workplace rights including the right to freedom of association and collective bargaining and the right to a safe and healthy working environment.

Governance structures in place to address potential risks of digital technology use in the workplace

  • Has the company adopted a public-facing policy commitment to respecting human rights, including worker rights (right to freedom of association and collective bargaining, safe and healthy workplace, living wage, protections for whistleblowers, etc.)?

  • Does the company have a policy on developing and deploying digital technologies, including AI-powered tools, in a manner that is consistent with international human rights standards, including international labor standards?

  • Does the board exercise direct oversight over risks related to digital technology in the workplace?

  • Do board members have requisite expertise on digital technology? Or do they have the opportunity to build knowledge and expertise (including trainings, advisory bodies, engagement with external experts, and continuous awareness programs)? 

  • How does the company ensure and monitor, across the product design, deployment, and implementation phases, that digital technology tools do not lead to discrimination against at-risk groups or negatively impact informed consent, privacy, freedom of association, freedom of expression, favorable working conditions, standard of living, or human dignity and agency?

Assessment of risk from the use of digital technologies in the workplace

  • Does the company conduct ongoing human rights due diligence, both prior to implementation and throughout the lifecycle of the technology, to identify real and potential adverse impacts associated with the use of technology in the workplace in consultation with workers and other stakeholders, including independent human rights and labor rights experts?

  • Has the company assessed the potential impact of AI, including generative AI, on its workforce? 

  • How are decisions about the deployment of technology in the workplace made? How are the potential benefits and risks the technology will present to workers’ working conditions, economic security, mental and physical health, and overall well-being taken into account?

Actions the company is taking to address potential risks of digital technology use in the workplace

  • How does the company actively and meaningfully engage with its workforce and labor unions on the development and deployment of technology in the workplace? How does it work to increase worker voice and well-being related to technology in the workplace?

  • Does the company give its workforce advance notice regarding the development and deployment of digital technology in the workplace?

  • Does the company bargain or consult over the introduction of new technologies or the impacts of new technologies on workers? Does the company provide workers at all levels of seniority with training on unconscious bias, risk of discrimination, and prohibition of sexual harassment? Does the company have staff dedicated to promoting diversity, inclusion, equity, and accessibility within the workplace? 

Disclosure

  • Is the company transparent about the ways in which technologies are used in the workplace (or will be used)? Where does the company disclose this information to its stakeholders? 

  • Does the company disclose clear information about its policies and practices regarding collection, use, sharing, and retention of information that could be used to identify, profile or track its users, including its workforce? 

  • Does the company publicly share information about its efforts to assess and address discriminatory effects and other unintended human rights impacts of AI-powered tools? 

  • Does the company give its workers the right to access, correct, and download their data?

  • Does the company disclose policies for how it handles all types of third-party requests (by authorities or any other parties) to share workers’ data?

  • Does the company disclose both direct and indirect corporate lobbying activities related to the development and deployment of digital technologies, including AI?

Grievance mechanisms and access to remedy

  • In case of violation or restriction on workers’ rights related to the development/deployment of digital technologies in the workplace, does the company guarantee access to appropriate public and/or private remedies, including effective and accessible grievance mechanisms?

  • When an algorithm flags a worker account for suspension, is there an opportunity to appeal to a human in all cases?

  • Does the company have a plan for the transition of workers displaced or affected by new AI tools and systems? This can include training and priority bidding for placement into other high-skilled and high-quality work opportunities and fair compensation. 

In addition, investors should encourage portfolio companies to implement the following guidance:

  • U.S. Department of Labor:

    • “Artificial Intelligence and Worker Well-being: Principles for Developers and Employers,” as directed by the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023 

  • Equal Employment Opportunity Commission (EEOC)

    • In May 2023, the EEOC issued guidance on Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.

    • In May 2022, the EEOC issued guidance on The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.

INDEPENDENT INITIATIVES THAT CAN FURTHER INFORM SHAREHOLDER STRATEGIES

  • Draft principles for the responsible development, sale, deployment and use of digital technologies in the workplace. A framework for the responsible development, sale, and deployment of digital technologies in workplaces in respect of workers’ rights, autonomy, and dignity is being developed by several organizations. This framework will be concerned with digital technologies that are transforming the generation, collection, and use of data in the workplace and changing the ways workers are assessed, monitored, and managed. The framework would set the high-level standard that companies should adhere to regarding the use of digital technology in the workplace, whether they are legally required to or not. Initial participants are CDT, BSR, The Why Not Lab, AFL-CIO Tech Institute, and the Ford Foundation.

  • Ratings of company track records on digital technologies and workers, developed by FairWork, a civil society organization that seeks to “shine a light” on working conditions in the digital age by rating companies in multiple countries on five criteria: fair pay, fair conditions, fair contracts, fair management, and fair representation.

  • A tech-labor partnership on AI and the future of the workforce between Microsoft and the AFL-CIO. The partnership claims to be the first of its kind between a labor organization and a technology company to focus on AI and will deliver on three goals: “(1) sharing in-depth information with labor leaders and workers on AI technology trends, (2) incorporating worker perspectives and expertise in the development of AI technology, and (3) helping shape public policy that supports the technology skills and needs of frontline workers.”

  • Communications Workers of America (CWA) union and its recommendations for principles for bargaining and principles for public policy regarding tech in the workplace.

  • Continuing research about tech and workers offered by a range of organizations. In addition to the UC Berkeley Labor Center, quoted above, thoughtful analysis can be found at Data & Society, the AI Now Institute, and the Partnership on AI.

PUBLIC POLICY

In addition to direct engagement with companies, investors can also be an influential voice in the formulation of public policy. With the projected displacement of jobs as a result of AI, for example, reskilling millions of workers for new positions is likely to be a major topic of discussion and debate for legislators and regulators. Shareholders may wish to weigh in on a subject that will be critical to the health of the U.S. economy. 

Investors should also seek to support the Stop Spying Bosses Act, the No Robot Bosses Act, and the Algorithmic Accountability Act of 2023, all introduced in Congress in 2023. The Stop Spying Bosses Act would outlaw workplace surveillance for monitoring worker organizing or using AI to make behavioral predictions. The bill would also require employers who surveil employees to disclose their use of data collection so workers can be aware of how their data are being collected and used, and would establish a Privacy and Technology Division at the U.S. Department of Labor to administer the law.

The No Robot Bosses Act would prohibit employers from relying exclusively on automated decision-making systems in making employment decisions such as hiring or firing workers. The bill would require testing and oversight of decision-making systems to ensure they do not have a discriminatory impact on workers, and when automated systems are used to help employers make a decision, employers must describe to workers how the system was used and allow workers or job applicants to dispute the system’s output with a human. The bill would establish the Technology and Worker Protection Division at the Department of Labor to regulate the use of automated decision systems in the workplace.

The Algorithmic Accountability Act of 2023 would direct the Federal Trade Commission to require impact assessments of automated decision systems and augmented critical decision processes. It seeks to regulate how companies use AI to make “critical decisions,” including those pertaining to “employment, workers management, or self-employment.” The impact assessments required under the act would include meaningful consultation with employees and other relevant internal and independent external stakeholders, and would identify and assess potential bias and discrimination. Companies would be required to provide ongoing training and education to employees and contractors.