The EU AI Act Is Law. Here's What It Actually Requires.
The EU AI Act bans social scoring and emotion detection at work, regulates high-risk AI, and applies worldwide. Full timeline, requirements, and penalty breakdown.
• The EU AI Act is the world's first comprehensive AI law, taking a risk-based approach to regulation
• Banned practices (social scoring, emotion recognition at work) took effect February 2, 2025
• GPAI model requirements (including for ChatGPT, Claude, Gemini) kicked in on August 2, 2025
• Full high-risk AI system rules apply from August 2, 2026 — the biggest compliance deadline
• Penalties reach up to EUR 35 million or 7% of global annual turnover for violations
What's Inside
- What the EU AI Act Actually Regulates
- The Risk-Based Framework
- Banned AI Practices (Already in Effect)
- High-Risk AI: What Counts, What's Required
- General-Purpose AI Rules (ChatGPT, Claude, Gemini)
- The Full Compliance Timeline
- Penalties: What Non-Compliance Costs
- Who's Affected (It's Not Just EU Companies)
- What Companies Should Do Now
- FAQ
What the EU AI Act Actually Regulates
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive law specifically targeting artificial intelligence. The European Parliament approved it in March 2024 and it entered into force on August 1, 2024, with requirements rolling out in phases through 2027.
The core idea is straightforward: different AI applications carry different levels of risk, so they should face different levels of regulation. A chatbot that helps you draft emails doesn't need the same oversight as an AI system that determines whether someone gets a bank loan or a prison sentence.
This risk-based approach distinguishes the EU AI Act from other regulatory efforts. The US has taken a largely sector-specific approach with executive orders. China regulates AI through multiple overlapping rules. The EU chose a single, horizontal regulation that covers all AI applications across all industries.
The practical impact is significant. If you build, deploy, or use AI systems that serve anyone in the European Union — regardless of where your company is based — the AI Act applies to you.
The Risk-Based Framework
The Act sorts AI applications into four risk categories:
| Risk Level | Regulatory Treatment | Examples |
|---|---|---|
| Unacceptable Risk | Banned outright | Social scoring, real-time biometric surveillance (with narrow exceptions), manipulation of vulnerable groups |
| High Risk | Strict requirements | Critical infrastructure, education, employment, law enforcement, migration, democratic processes |
| Limited Risk | Transparency obligations | Chatbots must disclose they're AI; deepfakes must be labeled; emotion recognition systems must inform users |
| Minimal Risk | No restrictions | Spam filters, AI in video games, inventory management (the vast majority of AI applications) |
Most AI applications fall into the minimal risk category and face no regulatory burden. The Act explicitly states that it aims not to over-regulate AI innovation — it targets the applications where failure carries serious consequences for individuals and society.
Banned AI Practices (Already in Effect)
Since February 2, 2025, the following AI practices are prohibited in the EU:
- Social scoring: Government systems that rate citizens' behavior and grant or restrict access to services based on that score (similar to China's social credit system).
- Emotion recognition in the workplace: AI that infers or predicts employee emotions in work settings. This includes tools that claim to detect stress, engagement, or dissatisfaction through facial analysis, voice patterns, or behavioral monitoring.
- Real-time biometric surveillance: Using AI for live facial recognition in public spaces is banned with very narrow exceptions (imminent terrorist threat, searching for victims of serious crimes). Even these exceptions require prior judicial authorization.
- Subliminal manipulation: AI designed to manipulate behavior beyond a person's conscious awareness in ways that cause harm.
- Exploitation of vulnerabilities: AI that exploits age, disability, or socioeconomic circumstances to distort decision-making.
- Predictive policing based on profiling: AI that assesses the risk of a person committing a crime based solely on profiling or personality traits.
- Untargeted facial recognition scraping: Building facial recognition databases by scraping images from the internet or CCTV footage without consent.
These prohibitions are absolute. No compliance framework can make these applications legal in the EU. Companies that were developing emotion detection tools for HR, for example, have had to discontinue those products for the European market entirely.
High-Risk AI: What Counts, What's Required
High-risk AI systems are where the bulk of the compliance work lives. An AI system is classified as high-risk if it's used as a safety component of a product covered by existing EU safety legislation (medical devices, vehicles, machinery), or if it falls into one of these categories in Annex III:
- Biometric identification and categorization
- Critical infrastructure management (energy, water, transport)
- Education and vocational training (scoring exams, admissions)
- Employment and worker management (CV screening, promotion decisions, task allocation)
- Access to essential services (credit scoring, insurance pricing, emergency services)
- Law enforcement (evidence evaluation, risk assessment)
- Migration and border control (visa applications, asylum claims)
- Democratic processes (AI intended to influence elections or voting behavior)
What High-Risk Providers Must Do
Starting August 2, 2026, providers of high-risk AI systems must comply with these requirements throughout the system's lifecycle:
| Requirement | What It Means in Practice |
|---|---|
| Risk Management System | Documented, continuous process for identifying, analyzing, and mitigating risks |
| Data Governance | Training data must be relevant, representative, and free from errors. Bias testing required. |
| Technical Documentation | Detailed docs covering system design, development, capabilities, and limitations |
| Record Keeping | Automatic logging of system operations for traceability and audit |
| Transparency | Clear instructions for deployers on intended use, limitations, and human oversight needs |
| Human Oversight | Design must enable effective human oversight during operation |
| Accuracy and Robustness | Appropriate accuracy levels; resilience against errors, faults, and adversarial attacks |
| Cybersecurity | Resilience against unauthorized access and data manipulation |
For companies building AI-powered hiring tools, credit scoring models, or infrastructure management systems, these requirements mean significant investment in documentation, testing, and monitoring infrastructure. The requirements aren't one-time checks — they apply continuously throughout the AI system's operational life.
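To make the record-keeping row concrete, here is a minimal logging sketch for a hypothetical high-risk system. The schema, field names, and `log_decision` helper are illustrative assumptions, not a format the Act prescribes.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger for a high-risk AI system.
# The Act requires automatic logging for traceability; the exact
# record schema below is an illustrative assumption, not a mandate.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("decisions.log"))

def log_decision(system_id: str, model_version: str,
                 input_summary: dict, output: dict,
                 human_reviewer: str | None = None) -> None:
    """Append one traceable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,   # ties decision to a model build
        "input_summary": input_summary,   # enough to reconstruct context
        "output": output,
        "human_reviewer": human_reviewer, # supports the oversight duty
    }
    audit_log.info(json.dumps(record))

# Example: one screening decision from a hypothetical hiring tool
log_decision(
    system_id="cv-screener-eu",
    model_version="2026.03.1",
    input_summary={"candidate_id": "c-1042", "role": "data-engineer"},
    output={"score": 0.71, "shortlisted": True},
    human_reviewer="recruiter-17",
)
```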
General-Purpose AI Rules (ChatGPT, Claude, Gemini)
The Act creates a separate category for general-purpose AI (GPAI) models — the large language models behind ChatGPT, Claude, Gemini, and similar systems. These rules took effect August 2, 2025.
All GPAI providers must:
- Maintain and share detailed technical documentation
- Provide clear information to downstream deployers (companies that build applications using the model)
- Implement a copyright compliance policy, including respect for opt-out mechanisms from content creators
- Publish a sufficiently detailed summary of training data content
Systemic risk models — GPAI models trained with compute above a set threshold (currently 10^25 FLOPs, i.e., total floating-point operations used during training) — face additional obligations. This tier includes models like GPT-5, Claude's largest models, and Gemini Ultra. A rough way to check whether a model crosses the threshold is sketched after the list below. These providers must:
- Perform model evaluations including adversarial testing
- Assess and mitigate systemic risks
- Report serious incidents to the European AI Office
- Ensure adequate cybersecurity protections
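A widely used rule of thumb (not part of the Act) estimates transformer training compute as roughly 6 × parameters × training tokens. The sketch below applies that approximation to a hypothetical model; the parameter and token counts are illustrative assumptions, and real threshold determinations rest on the provider's actual compute accounting.

```python
# Rough check against the Act's 10^25 FLOPs systemic-risk threshold,
# using the common 6 * params * tokens estimate for transformer
# training compute. This is an approximation, not the Act's method.
SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs as 6 * parameters * tokens."""
    return 6 * n_params * n_tokens

# Hypothetical model: 500B parameters trained on 10T tokens
flops = estimated_training_flops(n_params=5e11, n_tokens=1e13)
print(f"Estimated training compute: {flops:.1e} FLOPs")             # 3.0e+25
print("Presumed systemic risk:", flops >= SYSTEMIC_RISK_THRESHOLD)  # True
```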
OpenAI, Anthropic, Google, and Meta have all begun publishing compliance documentation and adjusting their model cards and terms of service for the European market. The practical impact for end users is minimal — you can still use these tools freely. The burden falls on the providers.
The Full Compliance Timeline
| Date | What Takes Effect | Who's Affected |
|---|---|---|
| Feb 2, 2025 | Prohibited-practice bans + AI literacy requirement | All organizations using AI in the EU |
| Aug 2, 2025 | GPAI model obligations + governance structures + penalties | AI model providers (OpenAI, Anthropic, Google, etc.) |
| Aug 2, 2026 | High-risk AI system requirements (Annex III) + transparency rules | Companies deploying high-risk AI |
| Aug 2, 2027 | High-risk AI in products (Annex I) + remaining provisions | Manufacturers of AI-enabled products |
The August 2026 deadline is the most significant for most companies. If your organization uses AI for hiring, credit decisions, or any of the Annex III categories, you have until then to build the required documentation, testing, and monitoring frameworks.
Penalties: What Non-Compliance Costs
| Violation Category | Maximum Fine | Or % of Global Annual Turnover |
|---|---|---|
| Prohibited AI practices | EUR 35M | 7% |
| High-risk non-compliance | EUR 15M | 3% |
| False information | EUR 7.5M | 1% |
These are maximum penalties, calculated on a "whichever is higher" basis between the fixed amount and the turnover percentage. For a company like Google with ~$300 billion in annual revenue, the 7% turnover calculation would mean fines up to $21 billion — enough to make even the largest tech companies take compliance seriously.
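To see how the "whichever is higher" cap works in practice, here is a minimal sketch of the arithmetic; the revenue figure mirrors the approximation above, and currency conversion is ignored for simplicity.

```python
# "Whichever is higher" penalty cap: the fixed amount or the
# percentage of global annual turnover, whichever is greater.
def max_fine(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    return max(fixed_eur, pct * global_turnover_eur)

# Approximate example from the text: ~300B annual revenue,
# prohibited-practice tier (EUR 35M or 7%). This is arithmetic,
# not legal advice.
cap = max_fine(fixed_eur=35e6, pct=0.07, global_turnover_eur=300e9)
print(f"Maximum exposure: {cap:,.0f}")  # 21,000,000,000 -> ~21 billion
```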
Beyond financial penalties, non-compliant AI systems can be ordered withdrawn from the EU market entirely. For companies that depend on the EU as a major market, this is arguably the more consequential risk.
Who's Affected (It's Not Just EU Companies)
The AI Act applies to:
- Providers (developers) of AI systems placed on the EU market — regardless of where they're based
- Deployers (users) of AI systems located in the EU
- Providers and deployers outside the EU whose AI system outputs are used within the EU
This extraterritorial scope mirrors the approach the EU took with GDPR. If you're a US or Asian company whose AI product serves European customers, you're subject to the AI Act. This has already prompted many non-EU companies to appoint EU-based compliance representatives and begin documentation processes.
Small and medium enterprises (SMEs) get some relief: the Act includes provisions for regulatory sandboxes where smaller companies can test AI systems under regulatory supervision before full compliance obligations kick in. Startups with fewer than 50 employees face reduced documentation requirements for high-risk systems.
What Companies Should Do Now
Whether you're building AI systems or deploying them, these steps are not optional if you operate in the EU market:
- AI inventory: Catalog every AI system you build, deploy, or use, and classify each by risk level (a minimal catalog sketch follows this list). Many companies don't know all the places they use AI — third-party tools in HR, customer service, and marketing often include AI components.
- Role clarity: Determine whether you're a provider, deployer, or both for each system. The obligations differ significantly.
- Gap analysis: For high-risk systems, compare current documentation, testing, and monitoring against the Act's requirements. Identify what's missing.
- AI literacy training: Already required as of February 2025. Staff who develop or make decisions about AI systems must understand the basics of AI capabilities, limitations, and regulatory obligations.
- Governance structure: Designate internal responsibility for AI compliance. Larger organizations should consider an AI compliance officer role similar to DPOs under GDPR.
- Documentation framework: Start building the technical documentation, risk assessments, and testing protocols that high-risk provisions require. This takes months, not weeks.
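Here is a minimal sketch of the inventory step from above: a risk-tagged catalog of AI systems. The tier names follow the Act's four categories; the dataclass fields and example entries are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # Annex III / safety components
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no restrictions

@dataclass
class AISystem:
    name: str
    vendor: str       # third-party tools count too
    role: str         # "provider", "deployer", or "both"
    use_case: str
    risk_tier: RiskTier

# Illustrative inventory entries for a hypothetical company
inventory = [
    AISystem("cv-screener", "in-house", "provider",
             "shortlists job applicants", RiskTier.HIGH),
    AISystem("support-chatbot", "SaaS vendor", "deployer",
             "answers customer questions", RiskTier.LIMITED),
    AISystem("spam-filter", "email provider", "deployer",
             "filters inbound mail", RiskTier.MINIMAL),
]

# Surface the systems that carry the August 2026 compliance deadline
high_risk = [s.name for s in inventory if s.risk_tier is RiskTier.HIGH]
print("High-risk systems needing gap analysis:", high_risk)
```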
The biggest mistake I see companies making is treating this as a legal problem only. The AI Act's requirements are fundamentally technical — data governance, bias testing, system logging, cybersecurity. Legal teams need to work closely with engineering teams from the start.
FAQ
Does the EU AI Act apply to companies outside Europe?
Yes. If your AI system's output is used within the EU, the Act applies regardless of where your company is headquartered. This extraterritorial scope is similar to GDPR's approach. US, UK, and Asian companies serving EU customers must comply.
Is using ChatGPT or Claude regulated under the AI Act?
Using Claude, ChatGPT, or similar tools for general tasks (writing, coding, research) falls under minimal risk and faces no regulatory burden. The obligations fall primarily on the model providers (OpenAI, Anthropic, Google), not end users. However, if you build a high-risk application using these models (such as an AI hiring tool powered by GPT), you take on compliance obligations for that high-risk system (as its provider, deployer, or both).
What's the difference between the EU AI Act and GDPR?
GDPR regulates personal data processing. The AI Act regulates AI system behavior and deployment. They overlap when AI systems process personal data (which most do), meaning companies may need to comply with both simultaneously. The AI Act does not replace GDPR — it adds a separate layer of requirements.
Are open-source AI models exempt?
Partially. Open-source GPAI models have reduced obligations regarding documentation and transparency. However, if an open-source model is classified as systemic risk (above the compute threshold), the full GPAI obligations apply. And any high-risk application built with open-source models still faces the same compliance requirements regardless of the model's license.
How does the EU AI Act compare to US AI regulation?
The US currently lacks a single comprehensive AI law. Instead, it relies on sector-specific regulations, executive orders, and voluntary commitments from AI companies. The EU's approach is more prescriptive and enforcement-oriented. Some US states (Colorado, California) have passed their own AI laws, but nothing matching the EU AI Act's scope. For companies operating globally, the EU AI Act effectively sets the floor for compliance standards.
The Big Picture
The EU AI Act matters beyond Europe's borders. Like GDPR before it, the AI Act is likely to influence AI regulation worldwide. Companies that build compliance frameworks now will be better positioned when other jurisdictions follow the EU's lead — and many are already drafting similar legislation.
For companies building AI products, the Act changes the calculus on what you can ship and how. For companies using AI tools, it requires a level of awareness about what those tools do and what risks they carry. For everyone, it represents the beginning of AI moving from an unregulated Wild West to a governed industry — with all the friction and stability that governance brings.