Originally published by Vanta.
Written by Herman Errico.
As artificial intelligence (AI) continues to revolutionize various sectors, ensuring it is developed and deployed in alignment with ethical standards and fundamental rights is critical for businesses that use it. The European Union's Artificial Intelligence Act (AI Act), formally adopted on March 13, 2024, addresses this need by establishing a comprehensive legal framework for AI systems within the EU. This landmark legislation classifies AI applications based on their risk levels and imposes stringent requirements on high-risk systems, including rigorous data governance, transparency, and accountability measures.
The AI Act is designed to foster trustworthy AI by mandating risk management protocols, continuous monitoring, and human oversight, which helps to safeguard public interests such as privacy, non-discrimination, and consumer protection. Additionally, it promotes innovation by creating a harmonized regulatory environment that provides legal clarity and facilitates market entry for AI developers and users.
By introducing this framework, the EU aims to set global standards for AI governance, ensuring that AI technologies contribute positively to society while minimizing potential harms. The AI Act also includes provisions for regulatory sandboxes, encouraging experimentation and collaboration between stakeholders to refine AI applications in a controlled environment. Through these measures, the AI Act not only protects fundamental rights but also supports the competitive edge of European AI innovation on the global stage.
In this blog, we’ll provide an overview of what the AI Act is, what’s included in it, and the timeline for implementing it.
What is the AI Act and what’s in it?
The AI Act is designed to create a uniform legal framework for the development, marketing, and use of AI systems across the European Union. It seeks to ensure that AI technologies are developed and utilized in a manner consistent with the EU’s values, including respect for fundamental rights, democracy, the rule of law, and environmental protection. The Act is structured to support the free movement of AI-based goods and services within the internal market while preventing fragmented national regulations.
The AI Act is organized into several key chapters, each addressing specific aspects of AI regulation (a brief sketch after the list maps these chapters to the Act's commonly cited risk tiers):
- General Provisions (Chapter I): This chapter outlines the subject matter, scope, definitions, and AI literacy initiatives.
- Prohibited AI Practices (Chapter II): It lists AI practices that are outright banned due to their potential harm to individuals or society.
- High-Risk AI Systems (Chapter III): This chapter is divided into sections detailing the classification, requirements, obligations, and conformity assessments for high-risk AI systems.
- Transparency Obligations (Chapter IV): It sets transparency requirements for certain AI systems, such as those that interact directly with people or generate synthetic content.
- General-Purpose AI Models (Chapter V): It provides regulations specific to general-purpose AI models, including transparency, documentation, and incident reporting.
- Support for Innovation (Chapter VI): This chapter introduces measures to support AI innovation, including regulatory sandboxes and financial support.
- Regulatory Bodies and Cooperation (Chapter VII): It outlines the establishment of the European Artificial Intelligence Board and its tasks.
- Market Surveillance and Enforcement (Chapter IX): This chapter covers post-market monitoring, market surveillance, and cooperation between member states.
- Final Provisions (Chapter XIII): It includes amendments to existing regulations and directives, entry into force, and applicability.
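The chapters above reflect the Act's risk-based approach: the higher the risk an AI system poses, the stricter the obligations. As a rough illustration, here is a minimal Python sketch mapping the commonly cited risk tiers to the chapters that govern them; the tier names are informal shorthand often used to describe the Act, not terms the regulation defines this way.

```python
from enum import Enum

class RiskTier(Enum):
    """Informal risk tiers often used to summarize the AI Act.

    This mapping is a simplification for illustration; the regulation
    itself defines obligations through the chapters named below.
    """
    UNACCEPTABLE = "Chapter II: prohibited AI practices"
    HIGH = "Chapter III: high-risk AI systems"
    LIMITED = "Chapter IV: transparency obligations"
    MINIMAL = "no specific obligations under the Act"

# Example: look up which chapter governs a high-risk system.
print(RiskTier.HIGH.value)  # -> "Chapter III: high-risk AI systems"
```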
Timeline of the AI Act implementation
The AI Act's implementation is phased over several years, with key chapters and articles coming into effect at different times. Here's a detailed timeline of the AI Act's applicability (a short code sketch of this schedule follows the list):
From February 2, 2025
- Chapters I and II: These foundational chapters will apply, covering general provisions and prohibited AI practices.
- Chapter I: Subject matter, scope, definitions, and AI literacy
- Chapter II: Prohibited AI practices
From August 2, 2025
- Chapter III Section 4, Chapter V, Chapter VII, Chapter XII, and Article 78: These provisions cover notifying authorities and notified bodies, general-purpose AI models, governance, penalties, and confidentiality.
- Chapter III Section 4: Notifying authorities and notified bodies
- Chapter V: General-purpose AI models
- Chapter VII: Governance, including the establishment and tasks of the European Artificial Intelligence Board
- Chapter XII: Penalties and administrative fines (except Article 101 on fines for providers of general-purpose AI models, which applies from August 2, 2026)
- Article 78: Confidentiality
From August 2, 2026
- Chapter III Sections 1, 2, and 3, Chapter IV, and Chapters VI, VIII, IX, X, XI, and XIII: These chapters encompass detailed requirements for high-risk AI systems, transparency obligations for certain AI systems, support for innovation, and market surveillance.
- Chapter III Section 1: Classification of high-risk AI systems
- Chapter III Section 2: Requirements for high-risk AI systems
- Chapter III Section 3: Obligations of providers and deployers of high-risk AI systems
- Chapter IV: Transparency obligations for providers and deployers of certain AI systems
- Chapter VI: Support for innovation, including AI regulatory sandboxes and measures for SMEs and start-ups
- Chapter VIII: EU database for high-risk AI systems
- Chapter IX: Market surveillance and cooperation between member states
- Chapter X: Codes of conduct and guidelines from the Commission
- Chapter XI: Delegation of power and committee procedure
- Chapter XIII: Amendments to existing regulations and directives
From August 2, 2027
- Article 6(1): The classification rules for high-risk AI systems that are products, or safety components of products, covered by the EU harmonisation legislation listed in Annex I, along with the corresponding obligations, will come into effect.
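For teams tracking compliance milestones, the schedule above lends itself to a simple lookup. Below is a minimal, illustrative Python sketch of the phased dates; `APPLICABILITY` and `provisions_in_force` are hypothetical names for this example, not part of any official tooling.

```python
from datetime import date

# Illustrative encoding of the AI Act's phased applicability dates
# (per Article 113), mapped to the provisions that begin to apply.
APPLICABILITY = {
    date(2025, 2, 2): ["Chapter I", "Chapter II"],
    date(2025, 8, 2): [
        "Chapter III Section 4", "Chapter V", "Chapter VII",
        "Chapter XII (except Article 101)", "Article 78",
    ],
    date(2026, 8, 2): [
        "Chapter III Sections 1-3", "Chapter IV", "Chapter VI",
        "Chapter VIII", "Chapter IX", "Chapter X",
        "Chapter XI", "Chapter XIII", "Article 101",
    ],
    date(2027, 8, 2): ["Article 6(1) and corresponding obligations"],
}

def provisions_in_force(as_of: date) -> list[str]:
    """Return every provision whose applicability date has passed."""
    return [
        provision
        for start, provisions in sorted(APPLICABILITY.items())
        if start <= as_of
        for provision in provisions
    ]

# Example: which provisions apply on January 1, 2026?
print(provisions_in_force(date(2026, 1, 1)))
# -> ['Chapter I', 'Chapter II', 'Chapter III Section 4', ...]
```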
Why this timeline?
The staggered implementation timeline of the AI Act is intentional and strategic, designed to ensure a smooth and effective transition for all stakeholders involved. Here are the key reasons for this phased approach:
- Complexity and scope: The AI Act covers a wide range of AI applications and stakeholders, from developers and providers to users and regulators. Implementing such a comprehensive regulation all at once would overwhelm both regulators and regulated organizations. The phased approach allows for the gradual introduction of the new rules, giving stakeholders time to understand and adapt to each set of requirements.
- Preparatory time for stakeholders: Different chapters and provisions of the AI Act impose various obligations on AI system providers, deployers, and other parties. By rolling out the requirements in stages, the AI Act provides ample time for these stakeholders to prepare for compliance. This is particularly important for high-risk AI systems, which require more extensive documentation, risk management, and conformity assessments.
- Regulatory infrastructure: Establishing the necessary regulatory bodies and infrastructure, such as the European Artificial Intelligence Board, is a complex process that requires time. The phased implementation ensures that these bodies are in place and fully operational before the more demanding requirements of the AI Act take effect.
- Feedback and adjustment: A staggered rollout allows for the collection of feedback from early implementation phases. This feedback can be used to make necessary adjustments and improvements to the regulatory framework, ensuring it is effective and practical when fully implemented.
- Innovation support: The AI Act includes measures to support AI innovation, such as regulatory sandboxes and financial support. Phasing in these provisions helps foster a supportive environment for AI development while ensuring compliance with new regulations.
- Alignment with existing laws: The AI Act interacts with various existing regulations and directives. The phased implementation allows for a smoother integration and alignment with these laws, ensuring consistency and coherence in the overall regulatory landscape.
The phased implementation of the AI Act ensures a structured adaptation period for all stakeholders, allowing ample time for compliance and preparation. By establishing clear regulations and promoting a trustworthy AI ecosystem, the AI Act aims to make the EU a global leader in secure, ethical AI development.