Social Dialogue for the Safe, Responsible and Ethical Adoption of Artificial Intelligence (AI) in Workplaces

Introduction

To harness the benefits of AI and use it for good while mitigating its substantial risks, the government of the United States of America (U.S.) adopted the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (The White House, 2023) (the AI Executive Order) in October 2023. This was followed by the Department of Labor’s release of its “AI Principles for Developers and Employers” on the use of AI in the workplace in May 2024, and accompanying “AI Best Practices” in October 2024. These initiatives underscore the importance of involving various stakeholders, including employers, workers and their representatives, in developing and implementing AI policies to ensure they are fair, transparent and beneficial to all.

The AI Executive Order represents a comprehensive approach to AI governance. This order aims to harness the potential of AI while safeguarding the public interest. It identifies eight guiding principles and priorities for the safe and responsible development and use of AI. These guiding principles and priorities include:

  • AI safety and security
  • Responsible innovation, competition, and collaboration
  • Commitment to supporting workers
  • Equity and civil rights
  • Consumer protections
  • Privacy and civil liberties
  • Managing the risks from the Federal Government’s own use of AI and increasing its internal capacity
  • Promoting responsible AI safety and security principles and actions in the world

The AI Executive Order recommends “taking into account the views of other agencies, industry, members of academia, civil society, labor unions, international allies and partners, and other relevant organizations” for implementation of these principles.

Selected Priorities of the U.S. Executive Order
  • Ensuring that AI systems are safe and secure is a top priority. The AI Executive Order requires that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. The U.S. government is developing standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. 
  • The Executive Order emphasises that privacy and civil liberties must be protected as AI continues advancing. This includes a focus on accelerating the development and use of privacy-preserving techniques, as well as strengthening privacy-preserving research and technologies.
  • The Executive Order highlights that irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing, and directs actions to advance equity and civil rights. 
  • The responsible development and use of AI requires a commitment to supporting workers. The Executive Order states that AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labour-force disruptions.

According to the Executive Order, consultation with social partners and other stakeholders, including through collective bargaining processes, is fundamental. Subsection 2 (c) emphasises the importance of supporting workers for the responsible development and use of AI. It highlights that “the critical next steps in AI development should be built on the views of workers, labor unions, educators, and employers to support responsible uses of AI that improve workers’ lives, positively augment human work, and help all people safely enjoy the gains and opportunities from technological innovation.” For example, the Order underscores the need to prepare the workforce for the changes brought by AI by emphasising that “all workers need a seat at the table including through collective bargaining”. It calls for the development of training programmes and educational initiatives to equip workers with the skills needed to thrive in an AI-driven economy. It warns against the possible risks of AI use in the workplace, such as worsening job quality, undue worker surveillance, and new health and safety risks.

Section 6 is devoted to “Supporting Workers” and subsection 6 (b) (i) assigns to the Secretary of Labor the duty to “develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits” in consultation with other agencies and with outside entities, including labour unions and workers. These principles and best practices provide a framework for the ethical and responsible use of AI in the workplace. The principles and best practices cover:

  • Centering worker empowerment: Ensuring workers and their representatives are able to provide input in the design and deployment of AI systems. This involves engaging workers in the development process to address their concerns and incorporate their insights.
  • Ethically developing AI: AI systems should be developed to protect workers and enhance job quality. This principle advocates for the ethical use of AI to improve working conditions and prevent exploitation.
  • Establishing AI governance and human oversight: Clear procedures for human oversight of AI systems must be established. This includes setting up governance structures that ensure accountability and transparency in the development of AI technologies.
  • Ensuring transparency: Employers should be transparent regarding the AI systems used in the workplace. This involves informing workers about how AI is being used and the impact it may have on their roles.
  • Protecting labour and employment rights: AI systems should not weaken workers’ rights. This principle emphasises the need to protect workers from unfair treatment and discrimination caused by AI.
  • Using AI to enable workers: AI should complement and enhance workers' capabilities. This involves designing AI systems that assist workers in their tasks and improve productivity.
  • Supporting workers impacted by AI: Employers should provide support and upskilling for workers affected by AI. This includes offering training programs and career development opportunities to help workers adapt to new roles.
  • Ensuring responsible use of worker data: Worker data used by AI should be handled responsibly and used only for legitimate purposes. This principle calls for robust data protection measures to safeguard workers' privacy.

Social dialogue is a key tool in implementing these principles and best practices. In developing them, the U.S. Department of Labor held listening sessions and met with developers, employers, government officials, unions, worker advocates, and AI researchers. Involving workers and their representatives in AI system design and use can help ensure more ethical and transparent AI use. In some notable examples, unions and employers have reached collective bargaining agreements that set protective measures around AI use, so that AI technologies enhance rather than diminish job quality.

In December 2023, the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) and Microsoft announced a landmark partnership to create an open dialogue about AI in the workplace. This partnership is aimed at incorporating workers’ perspectives in AI development and implementation, sharing critical AI trends with labour leaders, and shaping public policy to support workers in an AI-driven economy. The collaboration includes training programmes, policy advocacy, and mechanisms for direct feedback from workers to AI developers.

In 2023, the Writers Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) each reached agreements with the Alliance of Motion Picture and Television Producers that include important protections related to the use of artificial intelligence in the workplace. Union and employer representatives negotiated provisions on issues including disclosure and consent related to the use of AI, to reach contracts that were ultimately ratified by union members.

These efforts have resulted in the development of AI governance frameworks that prioritise human oversight and accountability, mitigating potential risks associated with AI deployment. The U.S. Executive Order on AI and the accompanying principles and best practices developed by the Department of Labor highlight how an ethical and responsible AI implementation can be fostered by cooperation among stakeholders.

Key Lessons

A comprehensive approach is needed for the safe, responsible and ethical adoption of AI in the workplace, one that combines existing policies, legislation and guidelines with new ones.

To mitigate AI’s potential harms to employees’ well-being and maximise its potential benefits, consultation with various stakeholders, including social partners, is key.

To ensure that workers are prepared and that AI leads to job augmentation, it is key to consult social partners on changing skills needs so that workers can succeed as tasks and roles change.

Collective bargaining agreements can set protective measures around AI use, so that AI technologies enhance rather than diminish job quality.

Read the full report

Download the Global Deal Flagship Report for the full version of this case study, plus 12 others examining the work carried out by Global Deal partners and the voluntary commitments made to promote social dialogue in addressing global labour market challenges.
