Originally published by Diligent.
The EU AI Act comes into force on 1 August 2024. It is the world’s first comprehensive legislation designed to address artificial intelligence (AI) risks by establishing a set of rules and obligations aimed at safeguarding the health, safety, and fundamental rights of EU citizens. In doing so, it seeks to support responsible, innovative AI development and build public trust in AI.
In this article, you'll discover:
- What the EU Artificial Intelligence Act is
- The EU AI Act timeline for implementation
- The implications of the EU AI Act for organisations
- Steps to promote the responsible and ethical use of AI
What is the EU AI Act? 4 main components
The EU AI Act categorises AI systems based on their potential to cause harm, following a risk-based approach. This risk classification is based on the intended purpose of the AI system. It ensures that AI technologies placed on the European market and used within the EU adhere to safety standards and respect human rights. By striking a delicate balance between fostering innovation and safeguarding fundamental rights, the Act sets an early standard for AI regulation that is likely to be echoed in legislation currently in draft in other jurisdictions such as the U.S. and China.
The EU AI Act has four main components:
- Requirements for high-risk AI systems and conformity assessments: The foundation of the Act is a risk-based classification system that determines the level of risk an AI system could pose to the health, safety, or fundamental rights of individuals. The higher the risk, the more stringent the rules. Much of the legislation focuses on high-risk systems, and most of its obligations fall on their providers (developers) and deployers (users). The Act introduces a robust system of governance with enforcement powers at the EU level, ensuring effective oversight and accountability. A conformity assessment is required to demonstrate that the requirements for a high-risk AI system have been fulfilled. (A toy illustration of this tiered approach appears after this list.)
- Prohibitions and safeguards: AI systems that pose a clear threat to people's safety, livelihoods, and rights will be prohibited. Examples include biometric categorisation based on sensitive characteristics, emotion recognition in the workplace and in educational institutions, and AI used for social scoring based on social behaviour or personal traits that can lead to discrimination. Notably, however, the EU AI Act allows law enforcement authorities to use real-time remote biometric identification AI systems in public spaces, but only subject to strict safeguards. Balancing security and privacy remains a priority.
- Fundamental rights impact assessment: Deployers that are bodies governed by public law or private entities providing public services, as well as deployers of certain other high-risk AI systems, must conduct a fundamental rights impact assessment before putting a high-risk AI system into use. This assessment evaluates potential risks to privacy, non-discrimination, and other fundamental rights, and identifies measures to address those risks. It underscores the EU’s commitment to responsible AI deployment.
- Transparency obligations for providers and deployers: Providers must ensure that AI systems intended to interact directly with people make that fact clear, and that outputs such as synthetic audio, images, video, and text are detectable as artificially generated or manipulated. Deployers face comparable transparency obligations towards both customers and employees.
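To make the tiered approach concrete, here is a minimal, hypothetical Python sketch of how a compliance team might inventory AI systems against the Act's four broad risk categories. The `RiskTier` enum and `obligations_for` helper are illustrative names, not terms from the Act, and the obligation lists are heavily simplified; the real analysis turns on a system's intended purpose and the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The Act's four broad risk categories (simplified)."""
    UNACCEPTABLE = "prohibited"   # e.g. social scoring
    HIGH = "high-risk"            # e.g. critical infrastructure, healthcare
    LIMITED = "limited-risk"      # transparency obligations only
    MINIMAL = "minimal-risk"      # e.g. spam filters

# Hypothetical mapping of tiers to headline obligations.
# An illustration for internal triage, not legal guidance.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["Do not place on the EU market"],
    RiskTier.HIGH: [
        "Conformity assessment before placing on the market",
        "Risk management, logging, and human oversight",
        "Fundamental rights impact assessment (certain deployers)",
    ],
    RiskTier.LIMITED: ["Disclose AI interaction / label synthetic output"],
    RiskTier.MINIMAL: ["No mandatory obligations (voluntary codes)"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for item in obligations_for(RiskTier.HIGH):
        print(f"- {item}")
```

The point of such a structure is simply that classification drives everything else: once a system's tier is recorded, the obligations that follow from it can be tracked like any other compliance checklist.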
EU AI Act implementation timeline
The Act recognises the rapid pace of AI evolution. Its staggered implementation timeline prioritises preventing the most serious AI-related harms first.
- 1 August 2024: The EU AI Act comes into force
- 2 February 2025: Chapters I and II apply, covering general provisions and prohibited AI practices. This includes enshrining in national law the procedure for authorising and reporting on law-enforcement use of real-time biometric identification AI systems in public spaces, and requiring national market surveillance and privacy supervisory bodies in member states to report to the Commission on incidents where such use has been authorised.
- 2 August 2025: Chapter V, covering general-purpose AI models, applies. Providers of general-purpose AI models with high-impact capabilities and systemic risk must notify the EU Commission and comply with all Articles in Chapter V, including maintaining technical documentation, appointing authorised representatives established within the Union, and co-operating with the AI Office.
- 2 August 2026: The full EU AI Act applies to all parties (providers and deployers) from this date.
The EU Artificial Intelligence Act puts responsible use at the forefront for business
Organisations operating within the EU or dealing with EU citizens must comply with the Act’s provisions. They need to assess their AI systems, ensure transparency and implement safeguards. While the rules may seem restrictive, they encourage responsible innovation: businesses that prioritise ethical AI will attract investment and gain a competitive edge. Moreover, the EU AI Act sets a precedent for other jurisdictions and has extra-territorial reach, applying to organisations outside the EU that provide AI systems for use within the EU. Organisations should be ready for this.
The Act primarily focuses on high-risk AI systems, with stringent guidelines applicable to AI used in critical infrastructure, healthcare, transportation, and any other area where it plays a role in safety. Employers deploying high-risk AI in the workplace must ensure human oversight, emphasising transparency and accountability to prevent unintended consequences. Additionally, the Act’s “measures in support of innovation”, such as regulatory sandboxes, encourage thorough testing of AI systems in real-world conditions to identify and mitigate potential risks before widespread deployment.
Users and consumers have the right to know when AI makes decisions affecting them, underscoring the transparency and accountability obligations of organisations. The EU AI Act emphasises responsible AI use, requiring companies to be transparent about how their systems operate, including data sources, algorithms, and decision-making processes. However, the treatment of biometric AI systems remains contentious, necessitating careful consideration of definitions and restrictions around biometric data.
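The Act requires that synthetic outputs be detectable as artificially generated, but it does not prescribe a single technical format for doing so. The sketch below is one hypothetical approach, assuming a provider chooses to bundle generated content with a machine-readable disclosure record; the `AIDisclosure` and `label_output` names are illustrative, not drawn from the Act or any standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDisclosure:
    """Hypothetical machine-readable disclosure attached to AI output."""
    generated_by_ai: bool
    model_id: str    # internal identifier of the generating system
    provider: str
    generated_at: str  # ISO 8601 timestamp

def label_output(text: str, model_id: str, provider: str) -> dict:
    """Bundle generated text with a disclosure record so that
    downstream consumers can detect it is artificially generated."""
    disclosure = AIDisclosure(
        generated_by_ai=True,
        model_id=model_id,
        provider=provider,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )
    return {"content": text, "disclosure": asdict(disclosure)}

print(json.dumps(label_output("Draft reply...", "assistant-v1", "ExampleCo"), indent=2))
```

In practice, providers might meet the obligation with content provenance standards or watermarking techniques instead; the Act leaves the choice of method open, provided the marking is effective so far as technically feasible.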
What's next? Regulatory interplay and post-implementation actions
As technology evolves, more AI regulations are inevitable. Organisations should stay informed about emerging legislation and identify the interplay with existing legislation such as GDPR. The European Data Protection Board recently stated that, due to AI systems’ ability to process personal data throughout their lifecycle, the AI Act and the GDPR are “complementary and mutually reinforcing instruments.” Consequently, organisations should review the implications of the AI Act through the lens of their GDPR programmes.
The EU AI Act aims to ensure AI is ethically implemented. As it comes into force, organisations must consider what actions to take to smooth the glide path to implementation. Investing in AI ethics and compliance training, and collaborating with policymakers and industry peers to shape responsible AI codes of practice should be core focus areas.
The EU Artificial Intelligence Act is a testament to the European Union’s commitment to shaping AI’s future responsibly. The universal goal is to unlock AI’s potential while safeguarding our societies and fundamental rights, and Europe is leading the way.
EU AI Act compliance: Next steps
With the European Artificial Intelligence Act set to take effect on 1 August 2024, organisations must gear up to meet its stringent requirements. Here's how you can get started:
- First, check out our EU AI Act cheat sheet for a high-level overview of timelines, risk categories and compliance obligations.
- Second, download our comprehensive white paper for meaningful insights at a deeper level.