Social dialogue to agree on a common approach: The European Social Partners Framework Agreement on Digitalisation
The European Social Partners Framework Agreement on Digitalisation (BusinessEurope, SGI Europe, SMEunited and the ETUC, 2020) represents a collaborative effort between European cross-industry social partners to manage the impact of the digital transformation in the workplace. The agreement was signed in June 2020 by the European Trade Union Confederation (ETUC), BusinessEurope, SGI Europe (formerly CEEP, the European Centre of Employers and Enterprises providing Public Services) and SMEunited, and exemplifies the benefits of social dialogue in navigating the complexities of digitalisation.
The Framework Agreement on Digitalisation represents the shared commitment of the European cross-industry social partners to benefit from opportunities and handle challenges linked to digitalisation in the world of work.
Its key objectives include:
- Raising awareness and improving the understanding of employers, workers and their representatives on the opportunities and challenges in the world of work resulting from the digital transformation;
- Providing an action-oriented framework to encourage, guide and assist employers, workers and their representatives in devising measures and actions to seize these opportunities and deal with the challenges, whilst taking into account existing initiatives, practices and collective agreements;
- Encouraging a partnership approach between employers, workers and their representatives;
- Supporting the development of a human-oriented approach to integrating digital technology in the world of work, to support workers and enhance productivity.
To achieve its goals, the agreement outlines a jointly managed, dynamic circular process for implementation that respects the roles and responsibilities of the different actors (see below: Digitalisation Partnership Process). It identifies four main issues to be addressed during the process:
- Digital skills and securing employment
- Modalities of connecting and disconnecting
- Artificial Intelligence (AI) and guaranteeing the human-in-control principle
- Respect of human dignity and surveillance
Although the agreement dates back to 2020, i.e. before the emergence of generative AI technologies, it places a significant focus on AI systems, reflecting their profound impact on the workplace. The section of the agreement devoted to Artificial Intelligence (AI) and guaranteeing the human-in-control principle identifies three components of trustworthy AI:
- It should be lawful, fair, transparent, safe, and secure, complying with all applicable laws and regulations as well as fundamental rights and non-discrimination rules,
- It should follow agreed ethical standards, ensuring adherence to EU fundamental/human rights, equality and other ethical principles and,
- It should be robust and sustainable, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm.
The agreement emphasises the need for transparency in how AI systems are deployed in the workplace. This includes providing workers with clear information about the purpose, functioning, and implications of AI systems used in their roles. It also calls for the active participation of workers and their representatives in the implementation of AI technologies, including through consultations and negotiations on how AI will impact job roles, working conditions, and employment levels. Further, it advocates for mechanisms that ensure human oversight of AI decision-making processes. Social partners are encouraged to establish protocols that allow human intervention to prevent or correct unfair, biased or erroneous decisions made by AI systems. Additionally, the agreement places a strong emphasis on data governance, ensuring that the data used by AI systems is managed responsibly. This includes setting standards for data quality, privacy, and security, which are negotiated through social dialogue.
The implementation of the Framework Agreement on Digitalisation has involved several activities: national-level dialogues, case studies and good practices, reports and feedback sessions. Social partners have initiated national-level dialogues to tailor strategies and practices to the specific needs of different industries and regions. Regular monitoring and evaluation have been integral to tracking progress, assessing the impact of the implemented measures, and making the necessary adjustments. The implementation process has generated numerous case studies and examples of good practices, which serve as valuable resources for other sectors and countries looking to navigate the digital transformation through social dialogue. Regular reports and feedback sessions have been conducted to ensure transparency and continuous improvement, providing opportunities for social partners to share experiences, address challenges, and showcase successes.
During the implementation process, several good practices have been observed. One example is the Netherlands Artificial Intelligence Coalition (NL AIC), which was established in October 2019, shortly before the agreement was signed. It is a joint initiative of the Confederation of Netherlands Industry and Employers (VNO-NCW), which is the largest employers’ organisation in the country, the Royal Association MKB-Nederland, the largest entrepreneurs’ organisation, the Ministry of Economic Affairs, the Netherlands Organisation for Applied Scientific Research (TNO), and several private sector companies, including Seedlink, Philips, Ahold Delhaize, IBM, and Topteam Dutch digital delta. It is a public-private partnership, currently comprising 486 participants, in which government, business, educational and research institutions and social organisations are committed to accelerating AI developments in the Netherlands and connecting AI initiatives. Its primary objective is to achieve a joint approach to AI implementation through a single national knowledge and innovation network, stimulating effective cooperation between the different research centres and avoiding fragmented AI initiatives. NL AIC also provides regional support and knowledge sharing among its participants via reference guides on AI applications, as well as education and training platforms.
The second good practice is the platform “Industrie 4.0 Österreich”, which has organised multiple events on AI in the workplace, covering cybersecurity issues and the use of trustworthy AI. The platform was founded in 2015 as the platform for “intelligent production”, addressing the future of production and of work and bringing together political, economic and academic actors, including social partners. Founding members of the platform are the Federal Ministry for Climate Action (BMK) and, in alphabetical order, the Federal Chamber of Labour (BAK), the Association of the Electrical and Electronics Industry (FEEI), the Association of the Metaltechnology Industry (FMTI), the Federation of Austrian Industries (IV) and the Production Union (PRO-GE). The objective of the platform is to make the best possible use of new technological developments for companies and employees, and to shape labour market transformations in a socially responsible manner. The platform implements different activities to support a dynamic development of the Austrian economy, promote research and innovation, contribute to good working conditions and support employment. Currently, the platform provides support to companies employing up to 2 999 employees through the European Digital Innovation Hub (EDIH), which is funded by the European Commission and the Austrian Ministry of Labour and Economy. It is one of four Austrian hubs among 151 European hubs, providing technical expertise in digital design, digital production, cybersecurity and AI. Companies can benefit from funding to test new technologies, support in finding funding for research and development, support for skills and training, and networking opportunities.
Key Lessons
- Fostering collaboration between workers and employers is key to ensuring that digitalisation in the world of work is inclusive, fair, and beneficial to all stakeholders.
- Cross-industry agreements between social partners can be a useful instrument for adopting a common approach to respond to the digitalisation of the world of work.
- Social partners can make a significant contribution to defining the components of trustworthy AI adoption.
- The implementation of the Framework Agreement is an opportunity to organise national-level dialogues and other events, and to track implementation and progress.
The European Union’s AI Act came into force on 1 August 2024 with provisions coming into operation gradually over the following 6 to 36 months. The Act aims to ensure the safe and trustworthy development, deployment, and use of AI across the EU. It adopts a risk-based approach to regulation, categorising AI systems into different levels of risk: unacceptable, high, limited and minimal risk.
1. Unacceptable risk: AI systems that pose a clear threat to the safety, livelihoods, and rights of people are banned outright. Examples include AI systems that manipulate human behaviour to the detriment of users.
2. High-risk: These are subject to strict requirements before they can be placed on the market. High-risk systems include AI applications in critical infrastructure, education, employment, and law enforcement. Requirements for high-risk AI systems include:
- Rigorous testing and documentation to ensure compliance with safety and performance standards.
- Transparency measures, such as clear information for users about the AI system’s capabilities and limitations.
- Robust data governance to ensure high-quality datasets free from biases.
3. Limited risk: Specific transparency obligations apply to AI systems that could manipulate or deceive users if their use is not disclosed; this includes, for example, generative AI systems classified as posing a limited risk.
4. Minimal risk: For systems deemed to pose minimal or no risk such as spam filters, the Act encourages adherence to voluntary codes of conduct.
The development of the EU AI Act involved extensive consultations with a broad range of stakeholders, including social partners and civil society groups. Initially, in February 2020, the European Commission launched a public consultation on the ethical and legal requirements for AI. Trade unions advocated for stronger protections against job displacement and for the upskilling of workers to adapt to new AI-driven roles. Employer organisations emphasised the need for a balanced approach that would not stifle innovation while simultaneously ensuring worker protection. The final version of the Act includes specific provisions for AI systems used in employment contexts, requiring transparency on their use in recruitment, performance evaluation, and decision-making processes.
On 30 July 2024, the European Commission started another multi-stakeholder consultation for Trustworthy General-Purpose AI models under the AI Act.
Read the full report
Download the Global Deal Flagship Report 2022 for the full version of this case study, plus 12 others examining the work carried out by Global Deal partners and the voluntary commitments made to promote social dialogue in addressing global labour market challenges.