This article is part of our series on AI regulations around the world. Learn more about the regulatory landscape with our rundown on AI legislation in the US, Canada, the UK, and Asia-Pacific.
Just as the European Union led the standardization of data and privacy requirements with GDPR, it is now ahead of other governments in adopting a detailed framework for AI regulation.
The EU Artificial Intelligence Act, adopted in 2024, is the world’s first sweeping legal framework explicitly designed to regulate machine learning use cases.
It takes a risk-based approach to protecting fundamental rights, democratic values, and environmental sustainability, and is widely expected to become the global benchmark for AI legislation.
The Act applies to any organization that uses, markets, or operates AI systems that affect people in the EU, regardless of the company’s location. For instance, US-based companies whose AI models interact with EU users fall within the scope of the EU AI Act.
While some provisions are already in effect, most compliance obligations roll out through 2025 and 2026. This article outlines what’s allowed, what’s banned, what the law requires, and what non-compliance could cost.
What AI use cases are allowed under the AI Act?
The EU AI Act breaks AI use cases into four categories based on the potential harm of misuse and regulates each tier with a separate set of requirements.
- Minimal-risk systems. Most supervised machine learning (ML) systems developed over the past decade (e.g., email spam filters or Netflix content recommenders) fall into this category. Since these systems have a long development track record, are generally well documented, and can be reverse-engineered for an audit, minimal-risk AI solutions are not subject to compliance obligations.
- Limited-risk systems, like customer service chatbots, interact with users but do not meaningfully impact their rights, safety, or livelihoods. They’re subject to lighter-touch rules, mainly around transparency. To stay compliant, AI companies need to clearly notify users they’re interacting with AI. They will also need to ensure compliance with broader regulations on data privacy (e.g., GDPR) and fair competition (e.g., Digital Markets Act).
- High-risk systems are those in areas where the stakes are higher: hiring, credit scoring, education, healthcare, and law enforcement. These AI systems are allowed on the market, but regulators impose strict obligations around transparency, risk controls, human oversight, and technical reliability.
- Unacceptable-risk systems are banned outright. The EU AI regulations forbid using AI for social scoring, large-scale public opinion manipulation, or other use cases that may harm people’s safety or legal rights.
Below is a summary of the risk stratification system adopted by the EU AI Act.

| Risk tier | Typical examples | Treatment under the Act |
|---|---|---|
| Minimal risk | Spam filters, content recommenders | No specific compliance obligations |
| Limited risk | Customer service chatbots | Transparency rules (disclose that users are interacting with AI) |
| High risk | Hiring, credit scoring, education, healthcare, law enforcement | Allowed with strict obligations: transparency, risk controls, human oversight, technical reliability |
| Unacceptable risk | Social scoring, large-scale opinion manipulation | Banned outright |
The EU AI Act compliance requirements
Except for minimal-risk AI use cases, machine learning models that target European users need to meet transparency, copyright, and safety requirements to be approved by the regulators.
The detailed requirements are still being finalized.
The first set of requirements was published in the third draft of the “General-Purpose AI Code of Practice” on March 11, 2025, and a separate Q&A document, published on March 14, 2025, provides further clarifications and guidance.
Following stakeholder feedback, the final version of the Code is scheduled for release in May 2025. Once approved, it will serve as a compliance standard for general-purpose AI model providers.
For the time being, compliance requirements can be divided into three categories: transparency obligations, copyright requirements, and safety practices.
Transparency obligations:
- Create and maintain up-to-date model documentation, based on the standardized Model Documentation Form
- Provide relevant information to users and the AI Office upon request
- Disclose any external contributions to model development, including those from governmental entities
Copyright compliance:
- Develop, implement, and regularly update a copyright policy that aligns with EU regulations
- Reproduce and extract only content that is lawfully accessible
- Honor rights reservations expressed in machine-readable formats, such as the robots.txt protocol, and make best efforts to comply with other appropriate protocols (a minimal example of a robots.txt check follows this list)
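As an illustration of the machine-readable opt-out point above, here is a minimal sketch of how a training-data crawler might check robots.txt before collecting a page, using Python’s standard urllib.robotparser. The user agent string and URL are hypothetical, and a real pipeline would also need to handle other reservation protocols and lawful-access checks.

```python
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

# Hypothetical crawler identity and target page, for illustration only
USER_AGENT = "example-ai-training-crawler"
PAGE_URL = "https://example.com/articles/some-post"


def may_collect(page_url: str, user_agent: str) -> bool:
    """Check the site's robots.txt before fetching a page for training data."""
    parts = urlsplit(page_url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"

    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses the site's robots.txt

    return parser.can_fetch(user_agent, page_url)


if __name__ == "__main__":
    if may_collect(PAGE_URL, USER_AGENT):
        print("robots.txt allows collecting this URL")
    else:
        print("robots.txt reserves rights for this URL – skip it")
```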
Safety and security:
- Conduct risk assessments prior to model deployment and implement risk mitigation strategies
- Create workflows for reporting incidents to regulators
- Protect AI systems against breaches, unauthorized access, and other threats
- Publicly share information about systemic risks associated with the models
The EU AI Act and large language models
The EU AI Act does not carve out special rules for generative AI or large language models. They’re treated like any other system—classified by risk, with compliance tied to what the model does and who it affects.
In most cases, vendors must meet baseline transparency requirements, i.e., let users know when they’re interacting with AI or seeing AI-generated content, whether it’s a chatbot reply, a generated image, or anything in between.
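As a simple illustration, here is a minimal sketch of how a chatbot backend could attach such a disclosure to every response. The message text, flag, and wrapper function are hypothetical conventions, not something prescribed by the Act.

```python
from dataclasses import dataclass

# Hypothetical disclosure text; the Act requires informing users, not this exact wording
AI_DISCLOSURE = "You are chatting with an AI assistant; this reply was generated by AI."


@dataclass
class ChatReply:
    text: str
    ai_generated: bool = True         # machine-readable flag for downstream systems
    disclosure: str = AI_DISCLOSURE   # human-readable notice shown alongside the reply


def wrap_model_output(raw_text: str) -> ChatReply:
    """Attach the transparency notice to every model response before it reaches the user."""
    return ChatReply(text=raw_text)


reply = wrap_model_output("Your order has shipped and should arrive on Friday.")
print(f"{reply.disclosure}\n\n{reply.text}")
```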
Keeping detailed technical documentation will help AI teams stay compliant. AI organizations should make sure they keep records on the following characteristics of foundation models (a minimal example of such a record follows the list).
- Model capabilities
- Dataset characteristics
- Training methods
- Intended use cases
- Performance benchmarks
- Safety and cybersecurity measures
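One practical way to keep these records is as structured, versioned documentation stored alongside the model artifacts. Below is a minimal sketch of such a record as a Python dataclass; the field names and sample values are illustrative and do not reproduce the official Model Documentation Form.

```python
import json
from dataclasses import asdict, dataclass, field


@dataclass
class ModelDocumentation:
    """Illustrative record mirroring the characteristics listed above."""
    model_name: str
    capabilities: list[str]
    dataset_characteristics: dict[str, str]
    training_methods: list[str]
    intended_use_cases: list[str]
    performance_benchmarks: dict[str, float]
    safety_and_security_measures: list[str] = field(default_factory=list)


doc = ModelDocumentation(
    model_name="internal-foundation-model-v1",  # hypothetical model
    capabilities=["text summarization", "question answering"],
    dataset_characteristics={"sources": "licensed and public web data", "languages": "en, de, fr"},
    training_methods=["supervised fine-tuning", "RLHF"],
    intended_use_cases=["internal knowledge assistant"],
    performance_benchmarks={"internal_qa_accuracy": 0.95},
    safety_and_security_measures=["prompt-injection filtering", "access logging"],
)

# Persist as JSON so the record can be versioned alongside the model itself
print(json.dumps(asdict(doc), indent=2))
```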
The Xenoss generative AI team recommends implementing safeguards during both the development and deployment stages.
Xenoss engineers recently helped a global marketing and advertising holding company deploy an LLM-based knowledge assistant for its workforce. Using Retrieval-Augmented Generation (RAG) techniques, we fine-tuned the Llama 3.1 8B model to provide contextually relevant responses with a 95% accuracy rate.
To ensure model security and compliance, machine learning engineers implemented a specialized quality control agent (Guardrails) to verify each response, along with a global automated quality assessment system (RAGAS) that performs the following (a simplified sketch of this kind of check follows the list):
- Instant response evaluations for relevance and accuracy
- Model error analysis and search strategy optimization
- Ongoing model performance monitoring
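For illustration, here is a heavily simplified sketch of the kind of per-response grounding check such a quality control step can perform before an answer reaches the user. It does not use the actual Guardrails or RAGAS libraries; the overlap heuristic, threshold, and example strings are assumptions for demonstration only.

```python
def token_overlap(answer: str, context: str) -> float:
    """Crude groundedness proxy: share of answer tokens that also appear in the retrieved context."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)


def verify_response(answer: str, retrieved_context: str, threshold: float = 0.6) -> bool:
    """Gate a RAG answer: return False so it can be flagged for review when poorly grounded."""
    return token_overlap(answer, retrieved_context) >= threshold


context = "The Q3 travel policy allows economy-class flights for trips under six hours."
answer = "Economy-class flights are allowed for trips under six hours."
print(verify_response(answer, context))  # True – most answer tokens appear in the context
```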
The EU AI Act non-compliance fines
Fines under the EU AI Act will not be enforced until August 2026, which gives organizations time to assess their AI systems and bring them into compliance.
That said, once enforcement begins, the EU AI Act’s fines will be among the highest in the history of tech regulation. Companies that engage in outright harmful activities (e.g., manipulating public opinion) risk a fine of up to 35 million euros ($37 million).
Here is a more detailed breakdown of the EU AI Act non-compliance fines.
- 1% of worldwide turnover (up to €7.5 million) for providing regulators with false, incomplete, or deceptive information about an AI system’s design, capabilities, risks, compliance measures, or intended usage.
- 3% of worldwide turnover (up to €15 million) for failure to meet compliance requirements for high-risk AI use cases, e.g., using a biased or incomplete training dataset or ignoring post-launch monitoring obligations.
- 7% of worldwide turnover (up to €35 million) for engaging in prohibited activities (i.e., public opinion manipulation, predictive policing) or incurring a major data breach (e.g., leaking sensitive customer data due to a prompt injection attack).

Regulators will determine final penalties based on severity, scope, and operational footprint. While smaller companies may benefit from flat cap protections (whichever is lower: percentage or set amount), enforcement agencies are expected to take violations seriously, especially for systems that scale across borders.
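To make the tiers concrete, here is a small sketch of the fine logic described above, assuming the “whichever is higher” reading for larger companies and the “whichever is lower” protection for smaller ones. The function and tier names are illustrative, not legal guidance.

```python
# Tiers from the breakdown above: (share of worldwide turnover, flat amount in EUR)
FINE_TIERS = {
    "misleading_information": (0.01, 7_500_000),
    "high_risk_non_compliance": (0.03, 15_000_000),
    "prohibited_practice": (0.07, 35_000_000),
}


def max_fine(violation: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative fine ceiling: percentage of turnover vs. flat amount.
    Larger companies face whichever is higher; SMEs benefit from whichever is lower."""
    share, flat_amount = FINE_TIERS[violation]
    percentage_fine = share * worldwide_turnover_eur
    return min(percentage_fine, flat_amount) if is_sme else max(percentage_fine, flat_amount)


# A company with €2B turnover engaging in a prohibited practice: 7% = €140M, above the €35M flat amount
print(f"{max_fine('prohibited_practice', 2_000_000_000):,.0f}")               # 140,000,000
# An SME with €10M turnover providing misleading information: 1% = €100k, below the €7.5M cap
print(f"{max_fine('misleading_information', 10_000_000, is_sme=True):,.0f}")  # 100,000
```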
Implementation timeline
The EU AI Act was published in July 2024 and took effect on 1 August 2024, but its full implementation is phased.

- February 2, 2025: The ban on unacceptable AI practices came into effect.
- August 2, 2025: The first compliance requirements kick in: appointment of AI compliance officers, transparency and labeling rules for general-purpose AI and foundation models, and notification requirements for high-risk AI uses.
- August 2, 2026: Full obligations come into effect, including conformity assessments, documentation, and risk management for high-risk AI systems.
- August 2, 2027: Extended compliance deadline for high-risk AI systems already placed on the market before the Act entered into force, giving companies extra time to upgrade existing systems.
What really changes for AI companies targeting European users under the EU AI Act
The EU AI Act has been in the public eye for years as “the GDPR for AI”. Now that it has become law, AI team leaders may wonder how the new regulations will affect their processes.
Realistically, the EU AI Act is unlikely to trigger a major crackdown on AI applications now that every world superpower is trying to get ahead in machine learning. However, accountability will get tighter – here is what that means.
- More AI use cases will be restricted or banned than the wording suggests. The EU AI Act describes prohibited and high-risk AI applications in a way that makes them sound deliberately malicious and extreme. In reality, a broader range of use cases can fall under this umbrella: internet scrapers, tools that use facial recognition for non-medical purposes, and any platform that includes sensitive data in its datasets without explicit consent. It is important to assess AI applications for gray areas and address the risky features before the company becomes liable for fines.
- Labeling AI systems will make it easier to tell AI-generated content apart from human work. This will help fight misinformation, create new opportunities for watermarking, and put added pressure on developers to label AI-generated content. It’s worth noting, however, that developing a 100% accurate AI content detection algorithm is a well-known machine learning challenge, and it is still unclear which tools European regulators will use to solve the problem.
- Transparency and explainability are becoming legal obligations. Transparency is one of the focus areas of the EU AI Act. AI developers will be held accountable for the tools and data used to train algorithms, as well as for the ability to reverse-engineer ML models. Data management practices have been a weak spot even for industry leaders like Elon Musk’s xAI and Sam Altman’s OpenAI, so a regulatory crackdown on poor data handling can be expected in the coming years.
Bottom line
The EU AI Act marks a pivotal step in shaping the future landscape of artificial intelligence across Europe. As the regulation aims to balance innovation with the protection of fundamental rights and public safety, organizations must begin evaluating their AI systems and potential risk exposure today.
Proactive preparation will not only support compliance but also help build more reliable and ethically grounded AI systems. Those who act early will be better positioned to harness AI’s full potential securely and responsibly.