Artificial Intelligence and Job Displacement
Globally, AI could affect almost 40% of jobs, according to the International Monetary Fund. “In most scenarios, AI will likely worsen overall inequality, a troubling trend that policymakers must proactively address to prevent the technology from further stoking social tensions,” says IMF Managing Director Kristalina Georgieva.
Some analysts predict that AI will accelerate the pace of automation. McKinsey, for example, estimates that activities accounting for up to 30% of hours currently worked across the U.S. economy could be automated by 2030, up from 21% before the recent introduction of “generative AI,” which facilitates the creation of text, images, and other content. The International Brotherhood of Teamsters warns that AI threatens jobs in warehousing, trucking, and manufacturing, with self-driving cars and trucks a particular concern for the union.
Other analysts, however, believe the primary impact of AI will be not outright job automation but a broader range of effects in which AI is used to augment jobs and automate specific tasks. In fact, generative AI may have the biggest impact on higher-paid workers. Researchers at the Harvard Business School and the Boston Consulting Group say that “in the coming decades millions of workers may need to be entirely reskilled—a fundamental and profoundly complex societal challenge that will require workers not only to acquire new skills but to use them to change occupations.”
COMMON TECH TERMS
Digital technology: An umbrella term for computer-based products and solutions that help create, store, and manage data.
Algorithm: A set of instructions for completing certain tasks or solving specific problems.
Machine learning: A technique by which a computer learns from data without being directly programmed with rules.
Artificial intelligence (AI): The capacity of computers or other machines to exhibit or simulate intelligent behavior.
AI model: An AI software program that has been trained on a set of data to recognize certain patterns or make certain decisions without further human intervention.
Generative AI: A type of AI technology that generates high-quality text, images, and other content based on the data it was trained on.
Large language model (LLM): A model trained on vast amounts of data, capable of understanding and generating natural language and other types of content to perform a wide range of tasks.
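The machine-learning definition above can be made concrete with a minimal sketch: rather than a programmer writing an explicit rule, the program infers a label for new input from labeled examples it has already seen. The nearest-neighbor classifier below is one of the simplest such techniques; the features and labels are invented purely for illustration.

```python
# Minimal illustration of "learning without being directly programmed
# with rules": the classifier copies the label of the most similar
# training example instead of following a hand-written rule.
# All data below is hypothetical, for illustration only.

def nearest_neighbor(examples, query):
    """Return the label of the training example closest to `query`."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(examples, key=lambda ex: squared_distance(ex[0], query))
    return label

# Labeled examples: (hours at a desk per week, emails sent per week) -> role
examples = [
    ((38, 200), "office support"),
    ((40, 180), "office support"),
    ((5, 2), "production work"),
    ((8, 5), "production work"),
]

print(nearest_neighbor(examples, (37, 150)))  # -> office support
```

No rule such as “more than 20 desk hours means office support” was ever written; the behavior emerges from the examples, which is the core idea the definitions above describe.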
For millions of low-wage workers, AI technologies are likely to mean either the loss of a job or the need to retrain for a new job. McKinsey’s analysis suggests that AI could result in lower-wage workers being 14 times more likely to need to shift occupations than high-wage workers. Women are 1.5 times more likely to need to move into new occupations than men, according to McKinsey.
“The jobs in the two lowest wage quintiles are disproportionately held today by those with less education, women, and people of color,” McKinsey says. “Women are heavily represented in office support and customer service, which could shrink by about 3.7 million and 2.0 million jobs, respectively, by 2030. Similarly, Black and Hispanic workers are highly concentrated in some shrinking occupations within customer service, food services, and production work.”
One way to ameliorate potential AI-induced job losses is to give workers a voice in how AI affects their jobs, in particular through collective bargaining. In 2023, the Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America (WGA) negotiated contract provisions with the Alliance of Motion Picture and Television Producers (AMPTP) to address the risks AI poses to workers. The Communications Workers of America (CWA) has long negotiated over the use of surveillance in call centers, out of concern that such monitoring might be used to initiate disciplinary actions against workers.
Another way to ameliorate potential job displacement is to offer opportunities for reskilling. Yet, according to a recent report by the consulting firm BCG, only 24% of companies polled make a clear connection between corporate AI strategy and reskilling efforts. Potentially effective approaches include designing programs from the employee's point of view and pursuing industry partnerships for reskilling efforts.
In Europe, the EU Parliament in May 2024 passed the Artificial Intelligence Act, which purports to be “the first comprehensive regulation on AI by a major regulator anywhere.” The Act assigns applications of AI to four risk categories. Applications and systems that create an “unacceptable risk,” such as government-run social scoring of the type used in China, are banned. “High-risk applications” under the law include those affecting vocational training and employment; resume-scanning tools that rank job applicants, for example, are subject to specific legal requirements. Applications not explicitly banned or listed as high-risk are largely left unregulated. Some legal analysts say the EU aims for the AI Act to have the same ‘Brussels effect’ as its General Data Protection Regulation — “in other words, to have a significant impact on global markets and practices and to serve as a potential blueprint for other jurisdictions looking to implement AI legislation.”
Assessing the impact of AI on low-wage workers will be challenging. The Partnership on AI, a multi-stakeholder organization whose members include many leading tech companies, says, “AI has the potential to radically disrupt people’s economic lives in both positive and negative ways. It remains to be determined which of these we’ll see more of.”
In a 2023 publication, Guidelines for Shared Prosperity, the Partnership proposed evaluating AI systems through “opportunity signals” and “risk signals,” explaining that “profit-generating activities do not necessarily have to harm workers and communities, but some of them do. The presence of signals of risk indicates that an AI system being assessed, while possibly capable of generating profit for a narrow set of beneficiaries, is likely to do that at the expense of shared prosperity, and thus might be undesirable from the societal benefit perspective.”