The European AI Act has been officially published. This binding regulation is directly applicable in all EU Member States and introduces uniform rules for both public and private organisations involved in the development, marketing or use of AI systems, whether as developers, distributors or users of the technology.
The regulation entered into force on 1 August 2024, but most obligations will only apply from 2 August 2026. From that date, companies will have to meet a series of strict requirements to deploy AI systems in a safe and responsible manner.
The purpose of the AI Regulation
The European Union has a clear goal with this regulation: to create a uniform legal framework for the development and use of AI. AI brings enormous benefits to the economy and society, through applications such as chatbots, CV selection tools and automatic meeting summaries. At the same time, the technology carries risks that the EU wants to limit.
Risk-based classification
To manage the risks of AI systems, the regulation classifies them into risk categories: the higher the risk, the stricter the requirements a system must meet.
Unacceptable risks
Some applications are banned completely, such as social scoring systems or certain forms of biometric identification. These applications are considered a serious threat to fundamental rights and freedoms.
High risks
High-risk systems, such as applications that impact future career opportunities, livelihoods or employee rights, are permitted under strict conditions. Companies must demonstrate that they meet requirements such as:
Implementing a risk management system.
Providing comprehensive technical documentation.
Ensuring human oversight and transparency.
Maintaining logs of the system's operation.
This is to prevent historical patterns of discrimination, for example on the basis of gender, ethnicity or sexual orientation, from being perpetuated.
Limited risks
Applications with limited risk, such as AI-generated images or chatbots, are subject to specific transparency requirements. For example, users must be clearly informed that they are communicating with an AI and not with a real person.
What does this mean for employers?
The regulation imposes significant obligations on employers and organisations. Some important points:
Inventory of AI systems: Employers must map which AI systems they and their partners use.
Risk analysis: Each system must be classified according to its risk level (a minimal sketch of such a register follows this list). If a system falls into the prohibited category, it must be phased out within six months of the AI Act entering into force, i.e. no later than 2 February 2025.
Employee training: Organisations must invest in AI literacy to ensure their employees understand and can work with AI.
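The regulation does not prescribe a format for this inventory. Purely as an illustration, here is a minimal sketch in Python of what such a register could look like; the system names, the RiskCategory mapping and the systems_to_phase_out helper are all hypothetical assumptions for this example, not terms from the regulation:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskCategory(Enum):
    # Illustrative mapping of the risk tiers described above; not legal text.
    UNACCEPTABLE = "unacceptable"  # prohibited, must be phased out
    HIGH = "high"                  # permitted under strict conditions
    LIMITED = "limited"            # transparency obligations apply


@dataclass
class AISystem:
    name: str            # e.g. "CV screening tool"
    supplier: str        # in-house or third-party vendor
    purpose: str         # what the system is used for
    risk: RiskCategory   # outcome of the risk analysis


# Hypothetical inventory entries for an HR department.
inventory = [
    AISystem("CV screening tool", "third-party vendor",
             "rank job applicants", RiskCategory.HIGH),
    AISystem("Support chatbot", "in-house",
             "answer employee questions", RiskCategory.LIMITED),
]

PROHIBITION_DEADLINE = date(2025, 2, 2)  # prohibited systems banned from this date


def systems_to_phase_out(systems: list[AISystem]) -> list[AISystem]:
    """Return the systems that fall in the prohibited category."""
    return [s for s in systems if s.risk is RiskCategory.UNACCEPTABLE]


for system in systems_to_phase_out(inventory):
    print(f"Phase out before {PROHIBITION_DEADLINE}: {system.name}")
```

In practice a spreadsheet or compliance tool serves the same purpose; the point is that every system is recorded with its supplier, purpose and risk category, so prohibited systems surface well before the deadline.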
Sanctions and enforcement
The penalties for non-compliance with the AI Act are severe: fines can run up to €35 million or 7% of worldwide annual turnover for the most serious violations, such as the use of prohibited AI practices. This underlines the seriousness with which the EU intends to enforce these rules.
Although the regulation has already been adopted, the European AI Office will publish further guidelines. These will clarify how companies should apply the regulation in practice.
Important dates and deadlines
2 February 2025: From this date, AI systems posing an unacceptable risk may no longer be used.
2 August 2025: Rules for general-purpose AI models, as well as the governance and penalty provisions, come into effect.
2 August 2026: Companies must comply with most of the obligations under the regulation.
2 August 2027: The extended transition period ends for high-risk AI systems that are safety components of products covered by existing EU product legislation.
The European AI Act is a milestone in the regulation of artificial intelligence and sets the tone for a future in which innovation and safety go hand in hand. Companies would do well to start preparing now so that they comply with the new rules and avoid sanctions.
Would you like to stay informed about everything related to social and employment legislation or payroll? Contact Rovoco!