The 90-Day AI Governance Checklist: What Every Business Must Do Before the August 2026 EU AI Act Deadline
With the EU AI Act's full enforcement hitting in August 2026, businesses face penalties up to 35M EUR or 7% of global turnover. Here's your week-by-week compliance roadmap.
The clock is ticking. On August 2, 2026, the EU AI Act enters full enforcement for high-risk AI systems. Companies that fail to comply face fines of up to 35 million EUR or 7% of global annual turnover, whichever is higher. That is not a typo. Seven percent of global turnover makes GDPR's 4% penalty look modest.
Yet a 2026 PwC survey found that 61% of companies deploying AI in the EU have not completed their risk classification process. Another 44% have not even begun documenting their AI systems as required. If your organization falls into either category, you have roughly 90 days to get compliant.
This is not a theoretical overview. This is a week-by-week execution plan with role-specific action items, tool recommendations, and the exact documentation you need to produce.
Understanding the EU AI Act: What Changed and Why It Matters Now
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. While it was formally adopted in 2024, its enforcement timeline is phased:
- February 2025: Prohibitions on unacceptable-risk AI systems took effect
- August 2025: Requirements for general-purpose AI (GPAI) models began
- August 2026: Full enforcement for high-risk AI systems, the broadest and most impactful category
The August 2026 deadline is when the majority of businesses will feel the impact, because high-risk AI covers everything from hiring tools to credit scoring, medical devices, and critical infrastructure management.
Who Is Affected?
If your company does any of the following, you are in scope:
- Deploys AI systems within the EU market
- Provides AI products or services to EU-based customers
- Develops AI that is used by EU organizations, even if your company is headquartered elsewhere
- Uses AI outputs that affect EU citizens
The extraterritorial reach mirrors GDPR. If your AI touches EU citizens or markets, you comply or you pay.
The Four Risk Tiers: Classify Every AI System You Operate
The EU AI Act categorizes AI systems into four risk levels. Your first task is mapping every AI system in your organization to the correct tier.
Risk Classification Framework
| Risk Level | Description | Examples | Key Requirements |
|---|---|---|---|
| Unacceptable | Banned outright | Social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), manipulative AI targeting vulnerabilities | Prohibition. Remove from service immediately. |
| High-Risk | Regulated with strict requirements | Hiring/recruitment AI, credit scoring, medical diagnostics, educational assessment, law enforcement tools, critical infrastructure AI | Conformity assessment, risk management, data governance, transparency, human oversight, accuracy/robustness requirements |
| Limited Risk | Transparency obligations | Chatbots, deepfake generators, emotion recognition systems | Disclosure that users are interacting with AI, labeling of AI-generated content |
| Minimal Risk | Largely unregulated | Spam filters, AI-powered games, inventory optimization | Voluntary codes of conduct encouraged |
High-Risk Categories (Annex III) in Detail
High-risk is where most compliance effort concentrates. The following areas trigger high-risk classification:
- Biometric identification and categorization of natural persons
- Management and operation of critical infrastructure (energy, transport, water, digital)
- Education and vocational training (admissions, assessments, proctoring)
- Employment, workers' management, and access to self-employment (recruitment, task allocation, performance evaluation, termination decisions)
- Access to essential private and public services (credit scoring, insurance pricing, emergency dispatch prioritization)
- Law enforcement (risk assessment, polygraphs, evidence evaluation)
- Migration, asylum, and border control (risk assessment, document verification)
- Administration of justice and democratic processes (legal research tools, sentencing assistance)
If any AI system in your stack touches these domains, treat it as high-risk. A narrow derogation (Article 6(3)) exists for systems that only perform preparatory or narrow procedural tasks, but you must document that assessment and be prepared to defend it.
The Compliance Requirements: What You Must Document and Prove
For every high-risk AI system, the EU AI Act requires the following:
1. Risk Management System (Article 9)
A continuous, iterative process that includes:
- Identification and analysis of known and foreseeable risks
- Estimation and evaluation of risks from intended use and reasonably foreseeable misuse
- Risk mitigation measures
- Testing to ensure residual risks are acceptable
2. Data Governance (Article 10)
Training, validation, and testing datasets must meet specific quality criteria:
- Relevant, sufficiently representative, and, to the best extent possible, free of errors and complete
- Appropriate statistical properties for the intended geography, context, and population
- Bias examination and mitigation documentation
3. Technical Documentation (Article 11)
Complete documentation covering:
- General system description and intended purpose
- Design specifications and development methodology
- Data requirements and data governance measures
- Monitoring, functioning, and control mechanisms
- Risk management documentation
- Change log and version history
4. Record-Keeping / Logging (Article 12)
Automatic logging of events throughout the system's lifecycle. For remote biometric identification systems, the logs must additionally capture:
- The period of each use and the reference database checked
- The input data for which the search led to a match
- The natural persons involved in verifying the results
5. Transparency and Information to Deployers (Article 13)
Clear instructions of use for downstream deployers, covering:
- Provider identity and contact details
- System capabilities and limitations
- Intended purpose and foreseeable misuse scenarios
- Performance metrics (accuracy, robustness, cybersecurity)
- Human oversight measures
6. Human Oversight (Article 14)
AI systems must be designed to allow effective human oversight, including:
- The ability for humans to understand the system's capabilities and limitations
- Tools for humans to correctly interpret the AI system's output
- The ability to decide not to use the system or to override/reverse its output
- The ability to interrupt the system through a "stop" mechanism
7. Accuracy, Robustness, and Cybersecurity (Article 15)
Systems must achieve appropriate levels of:
- Accuracy and relevant accuracy metrics declared in instructions of use
- Robustness against errors, faults, and inconsistencies
- Resilience against unauthorized access and manipulation
Penalty Structure: The Cost of Non-Compliance
| Violation Type | Maximum Fine |
|---|---|
| Prohibited AI practices (unacceptable risk) | 35M EUR or 7% of global annual turnover |
| High-risk AI system non-compliance | 15M EUR or 3% of global annual turnover |
| Providing incorrect information to authorities | 7.5M EUR or 1.5% of global annual turnover |
| SME/startup reduced penalties | Proportionate reduction applies |
For context: a company with 2 billion EUR in annual global turnover faces a maximum fine of 140 million EUR for deploying a prohibited AI system. The mid-tier penalty for high-risk non-compliance could reach 60 million EUR, and even supplying incorrect information to authorities could cost 30 million EUR.
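The "whichever is higher" rule is easy to get wrong in budget planning, so it is worth sanity-checking the arithmetic. A minimal sketch (the function name and structure are illustrative, not from the Act's text):

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Maximum administrative fine: the HIGHER of the fixed cap and the
    percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

turnover = 2_000_000_000  # 2 billion EUR annual global turnover

prohibited = max_fine(35_000_000, 0.07, turnover)  # 140,000,000 EUR
high_risk = max_fine(15_000_000, 0.03, turnover)   # 60,000,000 EUR
incorrect_info = max_fine(7_500_000, 0.015, turnover)  # 30,000,000 EUR
```

Note that for smaller companies the fixed cap dominates: at 100 million EUR turnover, 7% is only 7 million EUR, so the prohibited-practice ceiling stays at 35 million EUR.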
The 90-Day Implementation Timeline
Here is your week-by-week roadmap from late May to August 2, 2026.
Phase 1: Discovery and Classification (Weeks 1-3)
Week 1: AI System Inventory
| Role | Action Item |
|---|---|
| CTO / VP Engineering | Commission a complete inventory of all AI systems, models, and AI-powered features across the organization. Include third-party AI tools and APIs. |
| Legal / DPO | Map each AI system to its data flows, identifying where EU personal data is processed. Cross-reference with existing GDPR ROPA (Records of Processing Activities). |
| Compliance Officer | Establish the AI governance working group. Secure executive sponsorship and budget allocation. |
| Product Managers | Document the intended purpose, user base, and deployment context for each AI-powered product or feature. |
Deliverable: Complete AI system register with owner, purpose, data flows, and deployment context for each system.
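A lightweight register can start as a shared spreadsheet, but a typed record keeps the fields consistent across teams. A minimal sketch of one register entry; the field names here are illustrative, not mandated by the Act:

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class AISystemRecord:
    """One row in the AI system register. Extend with whatever your
    legal and engineering teams agree the register must capture."""
    name: str
    owner: str                       # accountable team or individual
    purpose: str                     # intended purpose, in plain language
    deployment_context: str          # where and how the system is used
    vendor: Optional[str] = None     # None for in-house systems
    data_flows: List[str] = field(default_factory=list)
    risk_tier: str = "unclassified"  # filled in during Week 2

register = [
    AISystemRecord(
        name="resume-screener",
        owner="HR Tech",
        purpose="Rank inbound CVs for recruiter review",
        deployment_context="EU hiring pipeline",
        vendor="ExampleHR Inc.",     # hypothetical vendor name
        data_flows=["EU candidate CVs -> screening model"],
    ),
]
```

Keeping `risk_tier` defaulted to `"unclassified"` makes unfinished classification visible rather than silently absent.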
Week 2: Risk Classification
| Role | Action Item |
|---|---|
| CTO / VP Engineering | Classify each inventoried system against the four risk tiers. Flag all systems that may fall into high-risk categories under Annex III. |
| Legal / DPO | Review classifications for legal accuracy. Pay special attention to edge cases where AI is embedded in larger systems (e.g., an HR platform with AI-powered screening). |
| Compliance Officer | Produce the initial risk classification report. Identify gaps between current state and EU AI Act requirements for each high-risk system. |
Deliverable: Risk classification matrix with gap analysis for each high-risk system.
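A first-pass triage can be mechanized against the Annex III domain list, as long as every result still goes to legal review. A rough sketch, with domain keys and prohibited-practice labels chosen for illustration:

```python
# Annex III domains that trigger a presumptive high-risk classification.
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border", "justice",
}

# Practices banned outright under the unacceptable-risk tier.
PROHIBITED_PRACTICES = {"social_scoring", "manipulative_targeting"}

def classify(domain: str, user_facing_ai: bool = False) -> str:
    """Very rough first-pass triage. Legal review must confirm every
    result, especially edge cases like AI embedded in larger platforms."""
    if domain in PROHIBITED_PRACTICES:
        return "unacceptable"
    if domain in ANNEX_III_DOMAINS:
        return "high-risk"
    if user_facing_ai:  # chatbots, generated content: transparency duties
        return "limited"
    return "minimal"
```

This catches the obvious cases; the hard ones (an HR platform with embedded screening, for example) are exactly the ones the script cannot decide.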
Week 3: Prioritization and Remediation Planning
| Role | Action Item |
|---|---|
| CTO / VP Engineering | Estimate engineering effort for each remediation item (logging, human oversight mechanisms, documentation). Identify any systems that should be decommissioned rather than brought into compliance. |
| Legal / DPO | Determine whether any systems fall into the unacceptable risk category and must be removed immediately. Review vendor contracts for third-party AI systems to assign compliance responsibilities. |
| Compliance Officer | Produce the prioritized remediation roadmap. Assign workstream owners. Establish weekly compliance review cadence. |
Deliverable: Prioritized remediation plan with resource allocation, timelines, and accountable owners.
Phase 2: Documentation and Technical Implementation (Weeks 4-9)
Weeks 4-5: Technical Documentation Sprint
For each high-risk AI system, produce:
- System description and design documentation
- Training data documentation (provenance, quality measures, bias assessments)
- Performance metrics and testing results
- Intended purpose and foreseeable misuse analysis
Weeks 5-6: Risk Management System Implementation
- Document the risk identification methodology
- Conduct formal risk assessments for each high-risk system
- Define and document mitigation measures
- Establish ongoing monitoring procedures
Weeks 6-7: Logging and Record-Keeping Infrastructure
- Implement automatic event logging for all high-risk systems
- Ensure logs capture the data points required by Article 12
- Validate log retention policies (minimum period as specified by the Act or national implementing legislation)
- Test log retrieval and audit trail capabilities
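The logging step above is where engineering effort concentrates. A minimal sketch of a structured audit record per inference, using only the standard library; the field names approximate the kinds of data points Article 12 expects, but the exact schema should come out of your legal review, not this example:

```python
import datetime
import json
import logging
from typing import Optional

logger = logging.getLogger("ai.audit")  # hypothetical logger name

def log_inference_event(system_id: str, session_id: str, input_ref: str,
                        output_summary: str,
                        reviewer: Optional[str] = None) -> dict:
    """Emit one structured audit record per inference.

    input_ref should be a pointer to the input data (e.g. a storage key),
    not the data itself, to keep audit logs out of GDPR scope where possible.
    """
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "session_id": session_id,       # groups events within one period of use
        "input_ref": input_ref,
        "output_summary": output_summary,
        "human_reviewer": reviewer,     # who verified the result, if anyone
    }
    logger.info(json.dumps(event))      # ship to your retention-managed sink
    return event
```

Emitting JSON lines makes the later steps (retention validation, retrieval testing, audit trails) a query problem rather than a parsing problem.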
Weeks 7-8: Human Oversight Mechanisms
- Design and implement human-in-the-loop or human-on-the-loop controls for each high-risk system
- Build override and interrupt mechanisms
- Create operator training materials
- Document oversight procedures in the instructions of use
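Structurally, the override and interrupt requirements amount to gating model output behind a human decision and supporting a hard stop. A minimal sketch of that pattern, assuming a synchronous review flow (real deployments are usually asynchronous queues, but the control points are the same):

```python
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"      # use the model output as-is
    OVERRIDE = "override"  # replace it with a human-supplied output
    STOP = "stop"          # Article 14 "stop" mechanism: halt the system

class OverseenSystem:
    """Wraps a model so every output passes a human gate before taking effect."""

    def __init__(self, model):
        self.model = model
        self.running = True

    def infer(self, x, human_decision_fn):
        """human_decision_fn receives the model output and returns
        (Decision, corrected_output_or_None)."""
        if not self.running:
            raise RuntimeError("system halted by human oversight")
        output = self.model(x)
        decision, corrected = human_decision_fn(output)
        if decision is Decision.STOP:
            self.running = False
            return None
        if decision is Decision.OVERRIDE:
            return corrected
        return output
```

The key design point: the override path returns the human's output, not a re-run of the model, so the audit trail can show exactly where a human reversed the system.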
Weeks 8-9: Transparency and Deployer Information
- Produce instructions of use for each high-risk system
- Include all Article 13 required information
- Create end-user notifications where required (limited risk systems)
- Update privacy notices and terms of service
Phase 3: Validation and Readiness (Weeks 10-13)
Weeks 10-11: Internal Conformity Assessment
| Activity | Description |
|---|---|
| Documentation review | Independent review of all technical documentation for completeness |
| Testing validation | Verify that accuracy, robustness, and cybersecurity claims are supported by evidence |
| Bias audit | Conduct bias testing on high-risk systems, document results and mitigations |
| Human oversight testing | Simulate scenarios requiring human intervention, validate override mechanisms work correctly |
Weeks 11-12: External Review and Gap Remediation
- Engage external legal counsel or compliance consultants for an independent review
- For systems requiring third-party conformity assessment (e.g., biometric systems), engage a Notified Body
- Address any gaps identified during review
- Update documentation to reflect remediation actions
Weeks 12-13: Final Readiness
| Role | Action Item |
|---|---|
| CTO / VP Engineering | Sign off on technical readiness for all high-risk systems. Confirm logging, oversight, and documentation are complete. |
| Legal / DPO | Sign off on legal compliance. Confirm all documentation meets Article 11 requirements. Verify vendor compliance obligations are addressed. |
| Compliance Officer | Register high-risk systems in the EU AI database as required. Prepare incident response procedures for AI-related issues. Finalize the ongoing governance framework. |
| CEO / Board | Executive sign-off on the AI governance framework. Confirm organizational commitment and resource allocation for ongoing compliance. |
Deliverable: Compliance readiness report, EU AI database registrations, and ongoing governance framework.
Tool Recommendations for Compliance
The market for AI governance tooling has matured significantly. Here are the leading platforms by compliance function:
AI Inventory and Classification
| Tool | Strength | Best For |
|---|---|---|
| Holistic AI | Automated risk classification, bias auditing | Enterprises with large AI portfolios |
| Credo AI | Policy-to-code governance, regulatory mapping | Organizations needing automated compliance mapping |
| IBM OpenPages with Watson | Enterprise GRC integration, AI risk management | Companies already in the IBM ecosystem |
| OneTrust AI Governance | GDPR-to-AI-Act bridge, data mapping integration | Organizations with existing OneTrust deployments |
Technical Documentation
| Tool | Strength | Best For |
|---|---|---|
| Model Cards Toolkit (open source) | Standardized model documentation | Engineering teams needing a structured template |
| MLflow | Experiment tracking, model registry, artifact management | Teams already using MLOps pipelines |
| Weights & Biases | Experiment logging, dataset versioning | ML teams needing comprehensive tracking |
| DVC (Data Version Control) | Data lineage and versioning | Teams with complex data pipelines |
Bias Testing and Fairness
| Tool | Strength | Best For |
|---|---|---|
| Fairlearn (Microsoft, open source) | Fairness assessment and mitigation algorithms | Teams needing bias quantification |
| AI Fairness 360 (IBM, open source) | Comprehensive fairness metrics library | Research-oriented teams |
| Arthur AI | Production monitoring for bias drift | Teams needing real-time fairness monitoring |
| Fiddler AI | Explainability and fairness dashboards | Business stakeholders needing interpretable outputs |
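To make the bias-audit step concrete: the core quantity most of these tools report is a disparity in outcomes across protected groups. A self-contained sketch of one such metric, demographic parity difference, computed by hand (conceptually the same quantity `fairlearn.metrics.demographic_parity_difference` reports):

```python
def demographic_parity_difference(y_pred, groups):
    """Difference between the highest and lowest selection rate across groups.

    y_pred: binary predictions (1 = selected, e.g. shortlisted)
    groups: the protected-group label for each prediction
    Returns 0.0 when all groups are selected at identical rates.
    """
    rates = {}
    for g in set(groups):
        selected = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening outputs for two groups:
preds = [1, 0, 1, 1, 0, 0, 1, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
# group a is selected 3/4 of the time, group b 1/4 -> disparity of 0.5
```

In production you would compute this with an audited library rather than hand-rolled code, but knowing what the number means makes the audit results table far easier to review.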
Conformity Assessment Support
| Tool | Strength | Best For |
|---|---|---|
| TÜV SÜD AI Assessment | Notified Body services, certification | Systems requiring third-party conformity assessment |
| Bureau Veritas AI Certification | Industry-specific assessment frameworks | Regulated industry deployments |
| DNV AI Assurance | Risk-based assessment methodology | Critical infrastructure AI systems |
Common Compliance Pitfalls to Avoid
1. Ignoring Embedded AI
Many organizations undercount their AI systems because AI is embedded in third-party SaaS tools. Your CRM's lead scoring, your HR platform's resume screening, your customer service chatbot: these all count. Audit your entire vendor stack.
2. Treating This as a One-Time Project
The EU AI Act requires ongoing compliance, not a one-time checklist. Risk management must be "continuous and iterative." You need standing governance processes, not a project team that disbands after August 2.
3. Underestimating Documentation Requirements
Article 11 technical documentation is extensive. A two-page model card will not suffice. Plan for detailed documentation covering design, development, testing, deployment, and monitoring for each high-risk system.
4. Neglecting the Supply Chain
If you use third-party AI models or systems, you need contractual assurances from your vendors. The Act assigns obligations across the value chain. As a deployer, you are responsible for using the system in accordance with the provider's instructions, but the provider is responsible for the conformity assessment and CE marking.
5. Assuming GDPR Compliance Is Sufficient
GDPR and the EU AI Act overlap but are distinct. GDPR governs personal data processing. The AI Act governs the AI system itself, including its accuracy, robustness, documentation, and human oversight requirements. You need compliance with both.
Building an Ongoing AI Governance Framework
Compliance by August 2026 is the immediate goal. But the real objective is building a sustainable AI governance framework that scales with your AI adoption.
Governance Structure
Establish a three-tier governance model:
- Board / Executive Level: AI ethics policy, risk appetite, strategic oversight
- AI Governance Committee: Cross-functional body (legal, engineering, compliance, business) that reviews new AI deployments, monitors compliance, and handles incident response
- Operational Level: Day-to-day compliance activities, documentation maintenance, monitoring, and reporting
Key Governance Processes
- AI Impact Assessment: Required before deploying any new AI system. Classifies risk, identifies compliance requirements, and determines resource needs.
- Ongoing Monitoring: Continuous monitoring of high-risk AI systems for performance drift, bias drift, and emerging risks.
- Incident Response: Defined procedures for AI-related incidents, including notification to competent authorities as required.
- Change Management: Any significant change to a high-risk AI system triggers a review of the conformity assessment and documentation.
- Vendor Management: Regular review of third-party AI providers for ongoing compliance.
Metrics to Track
| Metric | Target | Frequency |
|---|---|---|
| AI systems with complete risk classification | 100% | Monthly |
| High-risk systems with current technical documentation | 100% | Quarterly |
| Mean time to update documentation after system change | Under 30 days | Monthly |
| Bias audit completion rate | 100% of high-risk systems | Quarterly |
| Human oversight intervention response time | Under defined SLA | Monthly |
| Employee AI governance training completion | 100% of relevant roles | Annually |
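Most of the metrics above reduce to one computation: the share of systems in the register that satisfy some criterion. A small sketch against a register of plain dictionaries (the field names are illustrative):

```python
def coverage(register, predicate):
    """Share of registered systems meeting a governance criterion.
    An empty register trivially has full coverage."""
    if not register:
        return 1.0
    return sum(1 for record in register if predicate(record)) / len(register)

systems = [
    {"name": "resume-screener", "risk_tier": "high-risk", "docs_current": True},
    {"name": "chat-widget", "risk_tier": "unclassified", "docs_current": False},
]

classified = coverage(systems, lambda r: r["risk_tier"] != "unclassified")  # 0.5
documented = coverage(systems, lambda r: r["docs_current"])                 # 0.5
```

Reporting these as a monthly dashboard makes the 100% targets above auditable rather than aspirational.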
International Implications: Beyond the EU
The EU AI Act is setting the global standard, much as GDPR did for data privacy. Businesses should anticipate:
- Brazil's AI Bill (PL 2338/2023): Moving through legislative process with significant EU AI Act alignment
- Canada's AIDA (Artificial Intelligence and Data Act): Expected to impose high-impact system requirements
- US Executive Order on AI Safety: While less prescriptive, federal procurement requirements and sector-specific rules are tightening
- China's AI regulations: Already enforce algorithm registration and deepfake labeling
- UK's AI framework: Pro-innovation but increasingly aligned with EU standards for market access
Companies that build compliance frameworks to the EU AI Act standard will find it significantly easier to adapt to other jurisdictions. Building to the strictest standard is a strategic advantage.
Conclusion: 90 Days Is Tight but Achievable
The August 2026 EU AI Act deadline is not optional, and the penalties are severe enough to demand executive attention. But 90 days is enough time if you start now and execute systematically.
The core sequence is straightforward: inventory your AI systems, classify their risk levels, produce the required documentation, implement the necessary technical controls, and validate your readiness. The complexity lies in the details and the coordination across legal, engineering, and business functions.
Do not wait for perfect conditions. Start with the inventory in Week 1. Every week of delay compresses your timeline and increases your risk.
The companies that treat this as a strategic investment in AI governance, rather than a compliance burden, will emerge with a competitive advantage. They will have the documentation, processes, and oversight mechanisms to deploy AI confidently, scale it responsibly, and operate in any regulatory environment worldwide.
Your 90 days start now.