BRIEFING: EU AI Act

The AI Act: Key Considerations for Employers

The AI Act introduces significant regulatory changes that will shape the use of artificial intelligence in the workplace across the EU. This article explores the implications of the AI Act for employers, highlighting the risk-based approach to AI regulation, key obligations for deployers and providers, and the steps companies can take to ensure compliance. With substantial penalties for non-compliance, understanding and preparing for the AI Act is crucial for businesses looking to leverage AI responsibly and effectively.

On July 12, 2024, the European Union officially published the Artificial Intelligence Act (AI Act) in the Official Journal, marking a significant milestone in the regulation of artificial intelligence within the EU. The AI Act entered into force on August 1, 2024, and, as an EU regulation, is directly applicable across all Member States, requiring no further national implementation measures. This comprehensive legislation will shape the future use of AI systems in various industries, including employment, where it will govern the deployment of AI in recruitment, decision-making, and disciplinary actions.

Article 113 of the AI Act provides for a 24-month transitional period, so that the bulk of the legislation applies from August 2, 2026. However, certain provisions take effect sooner: AI practices deemed to pose an “unacceptable risk” are prohibited as early as February 2, 2025.

The AI Act is poised to have far-reaching implications for businesses within and outside the EU. The Act will apply to foreign companies providing, importing, distributing, or manufacturing AI systems in the EU market, making it essential for employers to understand and prepare for these new regulations. In the context of employment, the AI Act will set a precedent for how AI can be used in various HR functions, including recruitment, employee evaluation, and disciplinary actions. It may also influence future regulatory developments in other jurisdictions.

The sections below examine the key provisions of the AI Act most relevant to employers, offering insights into what businesses need to consider as they integrate AI systems into their operations.

A Risk-Based Approach to AI Regulation

The AI Act adopts a risk-based approach to regulation, categorizing AI systems into four distinct risk categories (a brief illustrative sketch follows the list):

  1. Unacceptable Risk: AI systems that pose an unacceptable risk to safety, human rights, or democratic processes are outright banned.
  2. High Risk: AI systems classified as high-risk are subject to strict regulatory requirements.
  3. Limited Risk (Special Transparency Obligations): AI systems in this category must meet specific transparency and disclosure requirements, such as informing users that they are interacting with an AI system (for example, a chatbot).
  4. Minimal Risk: AI systems that pose minimal risk face limited regulation.
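
For teams cataloguing their AI tools, these four tiers can be captured in a simple data structure. The Python sketch below is purely illustrative; the tier names and one-line descriptions are paraphrases rather than legal definitions, and classifying any given system requires analysis against the Act itself.

    from enum import Enum

    class RiskTier(Enum):
        """Illustrative paraphrase of the AI Act's four risk categories."""
        UNACCEPTABLE = "prohibited outright"
        HIGH = "permitted, subject to strict regulatory requirements"
        TRANSPARENCY = "subject to specific transparency and disclosure duties"
        MINIMAL = "subject to limited regulation"

    # Example: an emotion recognition tool used on employees would fall
    # under RiskTier.UNACCEPTABLE (see the prohibitions discussed below).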

Prohibited: AI Systems with Unacceptable Risk

In the employment context, certain AI systems are deemed to present an unacceptable risk and will be banned under the AI Act. These include:

  • Emotion recognition systems: AI systems designed to infer the emotions of employees in the workplace, except for specific medical or safety purposes, such as monitoring a pilot’s fatigue.
  • Biometric categorization systems: AI systems that categorize individuals on the basis of their biometric data to deduce or infer sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation.

These prohibitions reflect the EU’s commitment to safeguarding fundamental rights and ensuring that AI systems do not perpetuate bias or discrimination in the workplace.

Strictly Regulated: AI Systems with High Risk

The high-risk category of AI systems will be particularly relevant for employers, as it includes many applications commonly used in HR processes, such as:

  • Recruitment and selection: AI systems used for targeting job advertisements, analyzing and filtering applications, and evaluating candidates.
  • Employee management: AI systems used for decisions on promotions, terminations, task allocation, and monitoring and evaluating employee performance and behavior.

While these high-risk AI systems are not banned, they are subject to stringent regulatory requirements to ensure their safe and ethical use. Importantly, certain AI applications commonly used in the workplace, such as AI systems for approving holiday requests or language assistance programs, are not classified as high-risk under the AI Act.

Understanding Employer Obligations under the AI Act

A critical aspect of the AI Act is determining whether employers are classified as deployers or providers of high-risk AI systems. This classification will dictate the specific obligations employers must fulfill.

  • Deployers: Employers who purchase and use existing AI systems without modifying them are typically classified as deployers. As deployers, employers must ensure that they use and monitor high-risk AI systems in accordance with the instructions for use and report any detected risks immediately.
  • Providers: Employers who develop their own AI systems, or who substantially modify or rebrand an existing system, may be classified as providers, subjecting them to additional obligations, including ensuring that the AI system meets the general requirements for trustworthy AI and undergoing a conformity assessment procedure.

Obligations for Deployers of High-Risk AI Systems

Employers classified as deployers of high-risk AI systems will need to comply with several key obligations under the AI Act:

  • Data quality and representativeness: Ensuring that the input data used by the AI system is relevant, accurate, and sufficiently representative to avoid bias.
  • Human oversight: Establishing human oversight mechanisms with appropriately trained and empowered personnel to monitor and intervene in the operation of high-risk AI systems.
  • Logging and documentation: Keeping the detailed logs automatically generated by the AI system and maintaining comprehensive documentation to demonstrate compliance with the AI Act (a short sketch combining this duty with human oversight follows the list).
  • Employee involvement: In certain Member States, such as Germany, employers must involve employee representatives in decisions related to the deployment of high-risk AI systems.
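
To make the logging and oversight duties above more concrete, the sketch below wraps a hypothetical scoring model so that every automated recommendation is logged and routed to a trained human reviewer before it takes effect. The model and reviewer objects and the log format are assumptions for illustration; the AI Act does not prescribe any particular implementation.

    import json
    import logging
    from datetime import datetime, timezone

    audit_log = logging.getLogger("hr_ai_audit")

    def reviewed_decision(model, features, reviewer):
        """Log a model recommendation and require human sign-off.

        `model` and `reviewer` are hypothetical stand-ins: `model.predict`
        returns a recommendation, and `reviewer.confirm` represents a
        trained person empowered to accept or override it.
        """
        recommendation = model.predict(features)
        audit_log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "input": features,
            "recommendation": recommendation,
        }))
        # Human oversight: the final decision always rests with a person.
        final = reviewer.confirm(recommendation, features)
        if final != recommendation:
            audit_log.info(json.dumps({"override": True, "final": final}))
        return final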

Additional Obligations for Providers of High-Risk AI Systems

If employers are classified as providers, they must meet additional obligations, including:

  • Trustworthy AI: Ensuring that the AI system meets the general requirements for trustworthy AI, including risk management, data governance, transparency, human oversight, and accuracy, robustness, and cybersecurity.
  • Conformity assessments: Undergoing conformity assessments to verify that the AI system complies with the AI Act’s requirements.
  • Quality management systems: Implementing quality management systems to ensure ongoing compliance and continuous improvement of the AI system.

Preparing for Compliance: What Employers Can Do Now

Employers can take proactive steps to prepare for the AI Act’s implementation by focusing on the following areas:

  • Risk assessment: Evaluate existing and planned AI systems to determine their risk classification and identify any potential compliance obligations under the AI Act (a simple inventory sketch follows this list).
  • Works council involvement: Develop a robust strategy for involving employee representatives in decisions related to AI deployment, particularly in jurisdictions where this is a legal requirement.
  • AI policy development: Establish clear policies governing the use of AI in the workplace, including guidelines for ethical AI use, data management, and human oversight.
  • Training and awareness: Provide training for employees and management on the basics of AI, the implications of the AI Act, and the importance of ethical AI practices.
  • AI oversight: Appoint an AI Officer or establish a dedicated team responsible for overseeing the deployment and use of AI systems, ensuring ongoing compliance with the AI Act.
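
A risk assessment of this kind typically starts from an inventory of the systems in use. The sketch below shows one possible record format and a helper that flags high-risk systems missing a key safeguard; the field names and the simplified logic are illustrative assumptions, not a substitute for legal analysis.

    from dataclasses import dataclass

    @dataclass
    class AISystemRecord:
        name: str
        purpose: str            # e.g. "CV screening", "shift allocation"
        role: str               # "deployer" or "provider" under the AI Act
        risk_tier: str          # e.g. "high", from the classification step
        has_human_oversight: bool
        logging_in_place: bool

    def compliance_gaps(inventory):
        """Flag high-risk systems that lack oversight or logging."""
        return [
            rec for rec in inventory
            if rec.risk_tier == "high"
            and not (rec.has_human_oversight and rec.logging_in_place)
        ]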

Understanding Penalties for Non-Compliance

The AI Act sets out strict penalties for non-compliance, with significant financial implications for employers:

  • Prohibited AI systems: The use of an AI system classified as unacceptable risk can result in fines of up to EUR 35 million or up to 7% of the previous year’s global turnover, whichever is higher.
  • High-risk AI systems: Breaches of obligations related to high-risk AI systems can lead to fines of up to EUR 15 million or up to 3% of the previous year’s global turnover, whichever is higher.
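
Because each ceiling is the higher of a fixed amount and a turnover-based amount, the effective cap scales with company size. The following calculation is illustrative only; actual fines are determined case by case by the competent authorities.

    def fine_ceiling(turnover_eur, fixed_cap_eur, turnover_pct):
        """Upper bound of a fine: the higher of the two statutory ceilings."""
        return max(fixed_cap_eur, turnover_pct * turnover_eur)

    # Prohibited-practice ceiling for a company with EUR 2 billion turnover:
    # max(EUR 35m, 7% of EUR 2bn) = EUR 140 million
    print(fine_ceiling(2_000_000_000, 35_000_000, 0.07))  # 140000000.0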

These penalties underscore the importance of early preparation and compliance with the AI Act to avoid significant financial and reputational damage.

Strategic Implications and Next Steps

The AI Act represents a significant regulatory shift that will impact how employers use AI in the workplace. By adopting a proactive approach to compliance, employers can not only mitigate risks but also leverage AI technologies to enhance their HR processes and drive innovation.

ISAKCO is committed to helping businesses navigate the complexities of AI regulation. Our advisory services are designed to provide tailored guidance on the implementation of AI systems in compliance with the AI Act, ensuring that your organization is well-prepared for the future of AI in the workplace.

Key Takeaways

  • Risk-based approach: The AI Act categorizes AI systems based on risk, with stricter regulations for high-risk applications in the workplace.
  • Employer obligations: Employers must determine whether they are deployers or providers of AI systems to understand their specific obligations under the AI Act.
  • Preparation is key: Employers should assess their current and planned AI systems, develop robust AI policies, and involve employee representatives in AI deployment decisions.
  • Significant penalties: Non-compliance with the AI Act can result in substantial fines, making it essential for employers to prioritize compliance.
  • Strategic advantage: Proactively addressing the AI Act’s requirements can position your organization as a leader in ethical AI deployment, enhancing your reputation and competitive advantage.

Contact Us

To learn more about how ISAKCO can help your company with AI regulatory topics and AI implementation, visit our AI Regulation and Corporate Advisory pages, or get in touch with our team.
