New EU Guidelines: Rules for AI Models with Systemic Risk
The EU provides clarity: official guidelines published by the new European AI Office are intended to help developers understand the complex rules of the AI Act. The focus is on particularly powerful models, to which strict new obligations apply.

1. A Compass for Developers from the AI Office
The European Union, under the leadership of the newly created European AI Office, has published official guidelines for providers of general-purpose AI models (GPAI). The aim is to help developers of artificial intelligence (AI) comply with the far-reaching AI Act and to provide clarity, especially for companies developing AI models with 'systemic risk'.
The AI Act, which already entered into force last year, categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. AI applications classified as unacceptable, such as social scoring or certain forms of remote biometric identification, are prohibited in the EU. The new guidelines also specify the conditions under which open-source models may be exempt from certain obligations in order to promote innovation and transparency.
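To make the four-tier classification concrete, it can be modelled as a simple enumeration. The sketch below is purely illustrative: the tier names follow the Act's terminology, but the example use-case mapping is an assumption for illustration, not a legal assessment.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # strict conformity requirements
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Illustrative mapping of example use cases to risk tiers (not a legal assessment)
EXAMPLES = {
    "social scoring": RiskLevel.UNACCEPTABLE,
    "CV screening for hiring": RiskLevel.HIGH,
    "customer service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

for use_case, level in EXAMPLES.items():
    print(f"{use_case}: {level.value}")
```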
2. What are AI Models with Systemic Risks?
AI models with systemic risk are defined as those trained with a total computing effort of more than 10^25 FLOPs (floating-point operations). This threshold covers well-known models such as GPT-4 from OpenAI, Gemini from Google, the newer Claude models from Anthropic, and Grok from xAI. These models are considered potentially far-reaching in their capabilities, with possible negative impacts on society.
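How close a model comes to this threshold can be estimated with a common back-of-the-envelope rule: total training compute for a dense transformer is roughly 6 × parameters × training tokens. The sketch below uses this heuristic, which is not part of the guidelines, together with hypothetical model figures.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training-compute threshold from the AI Act

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of total training compute for a dense transformer.

    Uses the common 6 * N * D approximation (forward plus backward pass);
    real training runs can deviate from this heuristic.
    """
    return 6 * n_parameters * n_training_tokens

def exceeds_systemic_risk_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    """Check whether the estimated training compute exceeds 10^25 FLOPs."""
    return estimate_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 500-billion-parameter model trained on 10 trillion tokens
flops = estimate_training_flops(5e11, 1e13)
print(f"Estimated training compute: {flops:.2e} FLOPs")                 # 3.00e+25
print("Above threshold:", exceeds_systemic_risk_threshold(5e11, 1e13)) # True
```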
3. New Obligations and Deadlines
Developers of AI models with systemic risk must fulfill a number of requirements. Non-compliance can lead to significant penalties, ranging from 7.5 million euros to 35 million euros or from 1.5% to 7% of a company's global annual turnover, depending on the type of infringement.
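The fine structure is commonly read as a fixed amount or a share of global annual turnover, whichever is higher, so the effective exposure grows with company size. A minimal sketch of that arithmetic, using the highest penalty tier and a hypothetical turnover figure:

```python
def max_fine(fixed_cap_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
    """Upper bound of a fine defined as a fixed amount or a share of turnover, whichever is higher.

    The caps passed in are illustrative; which cap applies depends on the
    type of infringement under the AI Act.
    """
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Illustrative: the top tier (35 million euros or 7% of turnover) applied to a
# hypothetical company with 2 billion euros in global annual turnover.
exposure = max_fine(35_000_000, 0.07, 2_000_000_000)
print(f"Maximum exposure: {exposure:,.0f} EUR")  # 140,000,000 EUR
```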
Core Obligations at a Glance
- Model evaluations to identify likely systemic risks.
- Conducting adversarial testing for risk minimization.
- Reporting serious incidents to the EU AI Office and national authorities.
- Implementing appropriate cybersecurity measures.
The deadlines are staggered: the rules for new AI models with systemic risk apply from August 2, 2025. From August 2, 2026, penalties can be imposed for non-compliance. Providers of models that were already on the market before August 2025 must fully comply with the regulations by August 2, 2027.
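The staggered schedule boils down to a simple rule: which compliance date applies depends on whether a model was placed on the market before or after August 2, 2025. A minimal sketch of that logic, as a simplified reading of the timeline above:

```python
from datetime import date

RULES_APPLY_FROM = date(2025, 8, 2)  # obligations for new systemic-risk models
ENFORCEMENT_FROM = date(2026, 8, 2)  # penalties can be imposed from this date
LEGACY_DEADLINE = date(2027, 8, 2)   # full compliance for models already on the market

def compliance_deadline(market_entry: date) -> date:
    """Return when a systemic-risk GPAI model must comply, per the staggered timeline above.

    Simplified reading: models already on the market before August 2, 2025 get
    until August 2, 2027; newer models must comply from market entry onwards.
    """
    if market_entry < RULES_APPLY_FROM:
        return LEGACY_DEADLINE
    return max(market_entry, RULES_APPLY_FROM)

print(compliance_deadline(date(2024, 3, 1)))   # 2027-08-02 (legacy model)
print(compliance_deadline(date(2025, 11, 1)))  # 2025-11-01 (must comply at market entry)
```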
4. Divided Reactions in the Industry
Reactions to the AI Act and the new guidelines are mixed. Critics, including Meta, fear legal uncertainty and obstacles to innovation. Leading European companies had previously even called for the regulation to be suspended.
There is also support, however. The AI Office promotes the 'General-Purpose AI Code of Practice', a code of conduct that serves as a practical tool for voluntary compliance with the rules. Companies such as Mistral and OpenAI have signaled their willingness to participate. Supporters of the law see it as an important step towards protecting the safety and rights of consumers while creating a framework for trustworthy AI applications.
Uncertainty about the AI Act?
The new rules for models with systemic risk show that implementing the AI Act poses significant challenges. We help you navigate the regulatory jungle and ensure compliance.
Read Also
What is the EU AI Act? A Simple Explanation
Basics, objectives, and core concepts of the EU AI Regulation clearly explained for a quick introduction.
When Does What Apply? The Complete AI Regulation Timeline
All deadlines, dates, and transitional provisions of the EU AI Regulation clearly explained in an interactive timeline.
Criticism of the AI Act: SAP & Siemens Demand Restart
The heads of SAP and Siemens call the AI rules an 'innovation brake' and demand radical reform.