Market Intelligence
Artificial Intelligence EU

EU Artificial Intelligence Act regulations to ensure safety and transparency

The EU AI Act entered into force on August 1, 2024, and its provisions will apply in stages over the next few years. The Act aims to ensure AI systems are used safely and transparently and in a traceable, non-discriminatory, and environmentally friendly manner.

Background

In 2021, the Commission issued its proposal for the EU AI Act, the first comprehensive regulation of artificial intelligence. The Act aims to ensure AI systems are used safely and transparently and in a traceable, non-discriminatory, and environmentally friendly manner. It entered into force on August 1, 2024, and its provisions will apply in stages over the next few years.

The Act separates AI into four risk categories (a simplified sketch of this tiering follows the category summaries below):

Unacceptable risk

Unacceptable-risk AI includes systems that manipulate individuals or vulnerable groups; systems that classify people based on behavior, personal characteristics, or socio-economic status; and systems that use biometric identification or facial recognition. These systems will be banned in the EU, with some exceptions for law enforcement.

High risk

High-risk AI systems are those used in critical infrastructure (such as transportation or medical devices), in educational or vocational training, in safety components of products, in essential private and public services (such as credit scoring for loans), in border control management, and in the administration of justice.

The high-risk category carries the Act's most substantial obligations. For high-risk systems, the regulation requires providers to establish a risk management system and implement data governance. Developers must also provide technical documentation and instructions for downstream deployers, and keep records at the disposal of the relevant national authorities. The systems themselves must be designed to allow for human oversight, and developers must put in place a quality management system that ensures accuracy, robustness, and cybersecurity.
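As an illustration, the high-risk obligations summarized above could be tracked as a simple compliance checklist. This is a minimal sketch: the field names paraphrase the obligations in the text and are not the Act's legal terminology.

```python
# Illustrative compliance checklist for the high-risk obligations
# summarized above. Field names paraphrase the text; they are not
# the Act's legal terms, and this is not legal advice.

from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    risk_management_system: bool = False
    data_governance: bool = False
    technical_documentation: bool = False
    deployer_instructions: bool = False
    record_keeping: bool = False
    human_oversight: bool = False
    quality_management_system: bool = False  # covers accuracy, robustness, cybersecurity

    def outstanding(self) -> list[str]:
        """Return the obligations not yet marked complete."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = HighRiskChecklist(risk_management_system=True, data_governance=True)
print("Outstanding obligations:", checklist.outstanding())
```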

Limited risk

The regulation defines limited-risk AI as general-purpose AI (GPAI) or generative AI, and the legislation mostly addresses transparency obligations. Namely, developers must disclose the sources of information fed to the AI system and publish a summary of the copyrighted data used for training, and the system must not create illegal content. Additionally, end users must be informed when they are interacting with AI, which is most relevant for chatbots and deepfakes.

For GPAI providers whose systems qualify as posing systemic risk (when the cumulative compute used for training exceeds 10²⁵ floating-point operations), the developers must also conduct model evaluations and adversarial testing. Any serious incidents must be tracked and reported, and developers must take measures to ensure adequate cybersecurity protections.
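As a rough illustration of the 10²⁵ FLOP threshold, the sketch below estimates training compute with the widely used approximation of about 6 FLOPs per parameter per training token and compares the result to the systemic-risk cutoff. The 6 × parameters × tokens heuristic and the example model size are assumptions for illustration only; they are not the Act's methodology.

```python
# Rough illustration of the systemic-risk compute threshold (10^25 FLOPs).
# Uses the common ~6 * parameters * training-tokens heuristic for dense
# transformer training compute; the heuristic is an assumption, not the
# Act's methodology.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_parameters * n_tokens

def is_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the 10^25 FLOP cutoff."""
    return estimated_training_flops(n_parameters, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Exceeds systemic-risk threshold:", is_systemic_risk(70e9, 15e12))
```

Under this heuristic, the hypothetical model lands just below the cutoff, which shows why frontier-scale models are the ones likely to trigger the systemic-risk obligations.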

Minimal risk

The regulation classifies applications such as spam filters or AI-enabled video games as minimal-risk AI systems, which are therefore out of scope of the Act. Most of the AI currently in use in the EU falls into this category.
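To make the tiered structure concrete, the sketch below encodes the four categories and a few of the example use cases above as a simple lookup. The category names and examples are taken from the summaries above; the data structure and function are purely illustrative and are not a legal classification tool.

```python
# Illustrative sketch of the Act's four risk tiers, using example use
# cases from the text. A real assessment requires legal review; this is
# not a classifier for actual systems.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned in the EU (narrow law-enforcement exceptions)
    HIGH = "high"                  # risk management, documentation, oversight obligations
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # out of scope of the Act

EXAMPLE_USE_CASES = {
    "social scoring by behavior or socio-economic status": RiskTier.UNACCEPTABLE,
    "credit scoring for loans": RiskTier.HIGH,
    "safety component of a medical device": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case]

for case in EXAMPLE_USE_CASES:
    print(f"{case}: {tier_for(case).value}")
```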

These obligations fall on providers and developers who place their AI systems on the EU market, as well as on third-country providers whose systems' output is used in the EU, giving the EU AI Act an effectively extraterritorial reach. Decision-making will be split between the Commission and the AI Office established by the legislation.

The Commission has the power to introduce delegated acts defining “AI system,” setting criteria that exempt systems from the high-risk category, and adjusting the systemic-risk threshold. Future implementing acts could include approving codes of practice, establishing a panel of scientific experts to support future decision-making, creating conditions for AI Office compliance evaluations, producing operational rules, providing information for real-world testing, and offering specifications where existing standards fall short. The Commission will also provide guidance and clarification throughout the implementation process.

To implement the AI Act across the EU, the newly established AI Office will mainly support government bodies and enforce the rules for GPAI. Housed within DG Connect, the office will consist of five units (AI and robotics; regulation and compliance; AI safety; innovation and policy coordination; and AI for societal good) as well as a lead scientific advisor and an advisor for international affairs. The office will establish the Codes of Practice and ensure coherent application of the Act, and it will develop benchmarks for classifying risk and monitoring compliance.

For enforcement, Member States are tasked with creating their own rules on penalties, including administrative fines. Each Member State will designate a national authority to oversee the implementation and enforcement of the Act.

Next Steps

The Act's provisions apply in stages. Six months after entry into force (February 2025), the ban on unacceptable-risk AI applies. After nine months (May 2025), the AI Office must produce the Codes of Practice for GPAI, and three months later, in August 2025, the GPAI obligations take effect. Obligations on high-risk AI systems follow in August 2026. Finally, by the end of 2030, the rules for AI systems involved in freedom, security, and justice will apply. Throughout this timeline, the Commission will conduct annual reviews and provide opportunities for amendments. Given the recency of the legislation, companies are awaiting further clarification as the AI Office and the regulations get underway.
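For teams tracking these milestones, the sketch below encodes the staged application dates described above as a simple lookup. The specific days are derived from the August 1, 2024 entry into force under the Act's staged schedule; the helper function is purely illustrative and is a planning aid, not legal advice.

```python
# Illustrative sketch: the staged application dates described above,
# keyed by date. Derived from the August 1, 2024 entry into force;
# a planning aid, not legal advice.

from datetime import date

MILESTONES = {
    date(2025, 2, 2): "Ban on unacceptable-risk AI applies",
    date(2025, 5, 2): "AI Office to produce Codes of Practice for GPAI",
    date(2025, 8, 2): "GPAI obligations take effect",
    date(2026, 8, 2): "Obligations on high-risk AI systems apply",
    date(2030, 12, 31): "Rules for AI in freedom, security, and justice apply",
}

def milestones_in_effect(as_of: date) -> list[str]:
    """Return the milestones that have already taken effect on a given date."""
    return [desc for d, desc in sorted(MILESTONES.items()) if d <= as_of]

# Example: which obligations already apply at the start of 2026?
for item in milestones_in_effect(date(2026, 1, 1)):
    print(item)
```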

Additional Resources

https://artificialintelligenceact.eu/
https://digital-strategy.ec.europa.eu/en/policies/ai-office
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai