The entry into force of Regulation (EU) 2024/1689 on 1 August 2024 marks a historic moment for the governance of Artificial Intelligence in Europe. It is the first comprehensive EU-wide legislation regulating the entire lifecycle of AI systems, with the goal of ensuring safe, transparent technologies that respect fundamental rights while promoting innovation within the single market. The European regulatory framework thus inaugurates a human-centric model of AI, in which people remain at the centre of technological decision-making.
A Single Market with Uniform Rules
The regulation introduces standardized rules for all Member States, preventing fragmentation that could undermine the competitiveness of the internal market. Companies operating in the tech sector can now rely on a shared set of rules, reducing legal uncertainty and facilitating the cross-border adoption of AI-based solutions.
AI Developed with Respect for Fundamental Rights
The European model promotes trustworthy AI that protects individuals in line with the principles of democracy, freedom, and security. The regulation bans practices deemed “unacceptable,” such as behavioral manipulation techniques or social scoring systems, in alignment with the rights enshrined in the EU Charter of Fundamental Rights.
A Risk-Based Regulatory Approach
The legislation introduces a tiered approach, classifying AI applications into four risk levels: unacceptable, high, limited, and minimal. The degree of regulation and oversight increases in proportion to a system's potential impact on individuals' safety or rights. High-risk systems are subject to strict controls, limited-risk applications mainly to transparency obligations, while minimal-risk applications face no specific requirements under the regulation.
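For teams taking stock of their own systems, the tiered logic can be pictured with a minimal sketch like the one below. It is purely illustrative: the example use cases, the tiers assigned to them, and the obligation notes are assumptions made for the sake of the example, not a legal classification.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the AI Act (labels only, simplified)."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices (e.g. social scoring)
    HIGH = "high"                   # strict conformity and oversight obligations
    LIMITED = "limited"             # mainly transparency obligations
    MINIMAL = "minimal"             # no AI-Act-specific obligations


# Hypothetical internal inventory: the use cases and the tiers assigned to
# them are illustrative assumptions, not a legal assessment.
ai_inventory = {
    "customer-support chatbot": RiskTier.LIMITED,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "spam filter": RiskTier.MINIMAL,
}

# Rough reminder of the kind of duty attached to each tier (simplified).
obligation_notes = {
    RiskTier.UNACCEPTABLE: "prohibited - may not be placed on the EU market",
    RiskTier.HIGH: "conformity assessment, risk management, human oversight",
    RiskTier.LIMITED: "transparency, e.g. telling users they interact with AI",
    RiskTier.MINIMAL: "no specific AI Act obligations",
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.value} risk -> {obligation_notes[tier]}")
```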
Tools for Innovation and Business Support
The legislator has introduced measures to support start-ups and SMEs, including regulatory sandboxes (protected environments for supervised experimentation) and simplified procedures. This enables the development of innovative AI solutions while maintaining high standards of compliance and safety.
Centralized Governance and European Coordination
To ensure uniform implementation of the regulation, the European AI Office has been established, working alongside national competent authorities. This body will monitor technological developments, support the application of the rules, and coordinate oversight of market operators.
What’s New in 2025 for Generative AI
As of July 2025, three strategic tools have been introduced to guide the responsible use of generative AI, with a strong focus on transparency and safety:
- Guidelines on legal obligations, clearly defining responsibilities between developers and users.
- Voluntary code of good practice, aimed at protecting copyright, ensuring accurate information, and mitigating risks.
- Training-data summary template, outlining the nature and origin of the datasets used to train AI systems.
These measures support companies, researchers, and developers in designing reliable, explainable, and EU-compliant systems, fostering a balance between technological progress and the protection of fundamental rights.
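As a purely hypothetical sketch of the third tool, the snippet below shows the kind of information a training-data summary might record for each dataset. The field names and values are assumptions for illustration; the official EU template defines its own structure and level of detail.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class DatasetSummaryEntry:
    """One entry of a hypothetical training-data summary (illustrative fields only)."""
    name: str              # internal identifier of the dataset
    source: str            # where the data comes from (public web, licensed, user-provided)
    data_types: list[str]  # broad categories such as "text" or "images"
    licensing_note: str    # how copyright and opt-outs were handled


summary = [
    DatasetSummaryEntry(
        name="example-web-corpus",
        source="publicly available web pages",
        data_types=["text"],
        licensing_note="rights-holder opt-outs honoured where identified",
    ),
]

# Serialise the summary so it can be published alongside the model documentation.
print(json.dumps([asdict(entry) for entry in summary], indent=2))
```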


