Data Privacy · 5 min read · 26.03.2026 · Max Fey

# EU AI Act 2026: What Companies Must Know Before August

Full AI Act obligations for high-risk AI take effect August 2026. Which systems are affected, what steps are needed now — and what fines are at stake.

EU AI Act compliance obligations have been rolling into force since August 2025 — and from August 2026, the strictest requirements for high-risk AI apply. What does this mean concretely for your organization?

## What Is the EU AI Act?

The EU AI Act is the world's first comprehensive AI regulation. It classifies AI systems into four tiers based on risk potential: prohibited practices, high-risk AI, limited-risk systems, and minimal-risk systems. The regulation entered into force in August 2024, but its obligations apply in phases — the most critical milestone falls on August 2, 2026, when the full requirements for high-risk AI systems become mandatory.

For companies that deploy, develop, or distribute AI, this means: organizations that have not yet acted are running out of time.

## Which AI Systems Are Classified as High-Risk?

The classification is critical. High-risk AI systems under the AI Act include, among others:

- **HR decisions:** AI for applicant screening, performance evaluation, or termination
- **Creditworthiness:** Automated credit scoring and lending decisions
- **Educational access:** AI systems for student evaluation at schools or universities
- **Critical infrastructure:** AI in the control of electricity, water, or transport networks
- **Law enforcement:** Biometric identification, risk profiling, or evidence evaluation
- **Medical devices:** AI as a component of Class IIa and above medical devices

Many companies underestimate how broadly this definition extends. An automated screening algorithm in recruiting or an AI-powered credit scoring tool already falls under it — even if marketed as a simple software feature.

## What Must Affected Companies Implement by August 2026?

The EU AI Act mandates a comprehensive set of obligations for high-risk AI systems:

### 1. Risk Management System

Companies must establish a documented risk management system covering the identification, analysis, and mitigation of risks throughout the AI system's entire lifecycle. This is not a one-time process — it is an ongoing obligation.

### 2. Technical Documentation

Before market entry, detailed technical documentation must be created. It describes the purpose, design, training data, performance limitations, and anticipated risks of the system. This documentation must be made available to authorities upon request.

### 3. Data Obligations

Training data must be checked for completeness, accuracy, and potential biases. Companies must be able to demonstrate that their AI systems do not produce discriminatory outcomes.
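To make the bias-checking obligation concrete, the sketch below computes the disparate-impact ratio between two applicant groups in a hypothetical screening dataset. The "four-fifths rule" threshold of 0.8 is a common fairness heuristic from US employment practice, not a threshold the AI Act itself prescribes; the data and group labels are invented for illustration.

```python
# Illustrative bias check on hypothetical screening outcomes.
# The 0.8 cutoff ("four-fifths rule") is a common heuristic,
# not a legal threshold defined by the AI Act.

def selection_rate(outcomes):
    """Fraction of positive decisions (e.g. invited to interview)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (0..1)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outcomes per group (1 = selected, 0 = rejected)
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths heuristic
    print("potential adverse impact -- investigate training data")
```

A check like this is only a starting point: documenting *why* a disparity exists, and what was done about it, is what the risk-management and data-governance obligations actually require.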

### 4. Transparency and Logging

High-risk systems must automatically generate logs enabling retrospective review of outputs. Decisions must be explainable to affected individuals.
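As a minimal sketch of what such automatic logging could look like in practice: each decision is appended as one structured record containing a timestamp, model version, a hash of the input, the output, and the responsible operator. The field names, the input hashing, and the file-based storage are our illustrative assumptions, not fields or mechanisms mandated by the Act.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version, input_payload, output, operator_id,
                 log_file="ai_audit.log"):
    """Append one audit record per AI decision (illustrative schema).

    Hashing the raw input lets auditors verify which input produced a
    decision without storing personal data in the log itself.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model made the call
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,                # the decision or score
        "operator_id": operator_id,      # who can review or override it
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_decision("credit-scorer-1.4", {"income": 42000},
                   {"score": 0.73}, "analyst-07")
```

In a production system the log would typically go to append-only, access-controlled storage rather than a local file, since the records must remain available for regulatory review.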

### 5. Human Oversight

Effective human oversight is mandatory for all high-risk AI systems. This means: fully automated decisions with no possibility of human review are not permissible for high-risk applications.

### 6. CE Conformity Assessment

Anyone placing high-risk AI on the EU market must complete a conformity assessment procedure and affix the CE marking. For many systems, self-assessment is possible; for others, assessment by a notified body is required.

## What Fines Are at Stake?

The penalties are substantial: violations of prohibitions (e.g., forbidden biometric surveillance) carry fines of up to 35 million euros or 7 percent of global annual turnover. Violations of high-risk system requirements can be sanctioned with up to 15 million euros or 3 percent of annual turnover. For comparison: GDPR fines cap at 20 million euros or 4 percent of turnover.

## What Does the EU AI Act Mean for GDPR-Compliant AI?

The EU AI Act complements GDPR — it does not replace it. In practice, this creates a dual compliance requirement: organizations processing personal data in AI systems must fulfill both GDPR and AI Act obligations. In particular, the transparency and information obligations of both frameworks interact closely. European data protection authorities will likely operate jointly with the new national AI Act supervisory bodies.

## Practical Immediate Measures for Companies

Organizations that act now still have sufficient time. The following steps should be initiated without delay:

1. **AI Inventory:** Create a complete list of all AI systems used in your organization — developed internally or purchased externally.
2. **Risk Classification:** Assess each system against the high-risk categories of the AI Act. When in doubt, seek legal counsel.
3. **Gap Analysis:** Compare the current state of your documentation, data practices, and governance processes against AI Act requirements.
4. **Assign Responsibilities:** Designate a responsible person or function for AI Act compliance in your organization.
5. **Vendor Review:** Request technical documentation and conformity evidence from AI software vendors.
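The inventory and classification steps can be kept in a lightweight machine-readable register. The sketch below shows one possible shape for such a record; the category names, fields, and example entries are our illustrative assumptions, not an official schema from the Act's annexes.

```python
from dataclasses import dataclass

# Illustrative category keys mirroring the high-risk areas discussed
# above; not an exhaustive or official taxonomy from the AI Act.
HIGH_RISK_CATEGORIES = {
    "hr_decisions", "credit_scoring", "education_access",
    "critical_infrastructure", "law_enforcement", "medical_device",
}

@dataclass
class AISystemRecord:
    name: str
    vendor: str                # "internal" for in-house systems
    purpose: str
    category: str              # one of HIGH_RISK_CATEGORIES, or "other"
    owner: str = "unassigned"  # step 4: responsible person or function

    @property
    def high_risk(self) -> bool:
        return self.category in HIGH_RISK_CATEGORIES

# Hypothetical inventory entries
inventory = [
    AISystemRecord("CV screener", "internal", "applicant pre-screening",
                   "hr_decisions", "compliance-team"),
    AISystemRecord("Chat assistant", "AcmeAI", "customer FAQ", "other"),
]

high_risk_systems = [s.name for s in inventory if s.high_risk]
```

Even a simple register like this makes the gap analysis (step 3) tractable: every system flagged as high-risk gets checked against the documentation, logging, and oversight obligations listed earlier.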

## Conclusion: Address the August Deadline Now

The EU AI Act is not a hypothetical future scenario — it is binding EU law with clear deadlines. Companies deploying or planning to use AI should begin systematic compliance review now at the latest. The good news: organizations that design AI to be privacy-compliant and transparent from the outset will already meet many AI Act requirements naturally. The path to compliance is not a regulatory obstacle but an opportunity to build trust with customers, partners, and regulators.

#EU AI Act #AI Regulation #Compliance #GDPR #Data Protection