Legislative preamble explaining the rationale behind each provision
Establishes the goal of developing trustworthy AI that respects fundamental rights, democracy, and the rule of law while enabling innovation.
Explains why the definition of AI system must be technology-neutral and future-proof, distinguishing AI from simpler software.
Clarifies that the AI Act does not cover AI systems used exclusively for military, defence or national security purposes, nor purely personal non-professional AI use.
Explains why certain AI applications should be absolutely prohibited as incompatible with Union values, focusing on manipulation, social scoring, and biometric surveillance.
Explains that AI systems embedded as safety components in regulated products should be high-risk because safety failures in such products can have severe consequences.
Justifies why eight categories of AI applications in Annex III are classified as high-risk due to their potential impact on fundamental rights, safety or livelihoods.
Explains the rationale for requiring transparency when people interact with AI systems, especially chatbots and synthetic media, to preserve informed decision-making.
Explains why general-purpose AI (GPAI) models require specific rules given that they can be integrated into many downstream applications and may have broad societal impacts.
Defines systemic risk for GPAI models and explains why cumulative training compute above 10^25 floating-point operations (FLOPs) serves as a proxy for high-impact capabilities sufficient to justify additional obligations.
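The compute threshold can be illustrated with a rough calculation. The sketch below is a hypothetical illustration only: the widely used "~6 FLOPs per parameter per training token" estimation heuristic is an assumption drawn from the scaling-laws literature, not part of the Act; only the 10^25 figure comes from the Regulation.

```python
# Hypothetical sketch of the AI Act's systemic-risk compute presumption.
# The 6*N*D estimation heuristic is an assumption, not prescribed by the Act;
# the Act simply presumes systemic risk above 10^25 cumulative training FLOPs.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold set in the Regulation

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the 10^25 FLOPs threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# e.g. a 70e9-parameter model trained on 15e12 tokens:
# 6 * 70e9 * 15e12 = 6.3e24 FLOPs, which falls below the threshold
print(presumed_systemic_risk(70e9, 15e12))
```

Under this heuristic, a hypothetical 70-billion-parameter model trained on 15 trillion tokens lands at roughly 6.3x10^24 FLOPs, just under the presumption threshold, while a model a few times larger on the same data would cross it.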
Describes the role of the AI Office as the central Union-level body for supervising GPAI models and coordinating national competent authorities.
Explains the rationale for the three-tier penalty structure and the proportionality requirements, especially for SMEs and start-ups.
Explains why certain deployers of high-risk AI in public or quasi-public contexts must conduct a fundamental rights impact assessment before deployment.
Explains the purpose of AI regulatory sandboxes as tools to facilitate innovation while ensuring safety and compliance by allowing testing in a controlled environment.
Explains the staggered application dates, under which different provisions apply progressively so that operators have adequate time to comply.
Clarifies that the AI Act supplements rather than replaces existing EU legislation such as GDPR, and explains how to handle overlapping obligations.