At a glance
- The AI Act will enter into force in 20 days.
- It will place obligations on employers who deploy AI within the EU.
- Certain employment-related uses of AI (such as in recruitment) are deemed high risk, triggering obligations under the Act.
The AI Act has been published in the Official Journal today, 12 July 2024, and will enter into force in 20 days. This marks the beginning of a phased implementation process to put the rules and obligations of the AI Act into practice. For businesses and organisations, this means there is now a critical window to prepare for compliance.
The AI Act establishes a comprehensive framework for regulating the provision, deployment and use of AI within the EU, addressing the complexities and potential risks associated with the technology.
The AI Act is essentially a product safety regulation. It adopts a risk-based approach, categorising AI systems by use case and setting compliance requirements according to the level of risk they pose to users. It bans certain AI applications deemed unethical or harmful, imposes detailed requirements on high-risk AI applications to manage potential threats effectively, and sets transparency requirements for AI technologies designated as limited risk.
The AI Act affects all AI system operators: private and public organisations of all sizes and sectors that offer AI products or services on the EU market, including non-EU companies. The AI Act applies to:
- Providers placing AI systems on the EU market, regardless of their geographic location.
- Providers and deployers of AI systems outside the EU, if the AI system’s output is used within the EU.
- Deployers of AI systems within the EU.
- Importers and distributors of AI systems in the EU market.
- Manufacturers placing products with embedded AI systems on the EU market under their own name or trademark.
Implications for employers
Many employers will be in scope of the Act as 'deployers', that is, organisations using an AI system under their own authority. The following employment-related uses of AI are categorised as high risk:
- Recruitment, for instance the analysis and filtering of job applications or the evaluation of candidates.
- Decisions on promotions or on the termination of work-related contractual relationships.
- Allocation of tasks.
- Monitoring or evaluation of the performance and behaviour of workers.
The algorithms and decision-making processes of these AI systems demand robust protections to mitigate potential harm.
The responsibilities of deployers of high-risk AI systems under the Act are to:
- Use the AI system in accordance with the provider's instructions for use.
- Ensure human oversight.
- Validate input data to ensure it is suitable for the system's intended purpose.
- Monitor AI system activity.
- Report any malfunctions, incidents, or risks to the AI system’s provider or distributor promptly.
- Retain automatically generated logs, where these are under their control.
- Carry out a fundamental rights impact assessment, where required.
Infringements of obligations in respect of high-risk AI can attract fines of up to EUR 15 million or, for companies, 3% of total worldwide annual turnover for the preceding financial year, whichever is higher (so, for example, up to EUR 30 million for a company with EUR 1 billion in annual turnover).
For other AI applications posing only limited risk to individuals, the main requirement is to comply with certain transparency rules.
Next steps
The legislation contains areas that are yet to be fully defined. These are expected to be elaborated through delegated and implementing acts, guidelines from the EU institutions, and harmonised standards developed by the European Standardisation Organisations. Businesses can therefore expect more detailed guidance in the near future. In the meantime, however, employers should consider the following preparatory steps:
- Understand what is in scope. Conduct an audit of AI tools in use or planned, considering both the use case and whether there is, or might in future be, an EU connection bringing the tool within the territorial reach of the legislation.
- Identify what needs to change. AI systems may already be used in high-risk areas such as recruitment. Employers need to understand existing practices and processes, and whether anything needs to change to comply with the AI Act.
- Update policies and procedures. The obligations on deployers require a number of proactive steps to be taken – whether that is record-keeping or ensuring that those who oversee the relevant tools are trained. Internal policies need to reflect these requirements.
- Training and awareness. Ensure those using AI systems understand how to use the tools properly and what their obligations are under the new rules.
- Conduct supply chain due diligence. Will new and existing tech providers be compliant? Do contracts need amending?
- Inform workers and their representatives. Under the Act, affected workers and their representatives must be informed, before a high-risk AI system is put into use in the workplace, that they will be subject to it.
- Understand explainability. Knowing the extent to which an individual AI decision can be explained will be an important part of avoiding the risk of bias when using an AI system.