Complying with the EU AI Act: The Internal Auditor’s Role

The EU Artificial Intelligence Act, formally adopted in 2024, is a global milestone: the world's first comprehensive legislation focused entirely on the regulation of artificial intelligence. It sets mandatory requirements for providers (developers), deployers (users), importers, and distributors of AI systems that affect the EU market or EU citizens, regardless of where the organization is based.

For internal auditors, this is not just another compliance topic. The law imposes fines of up to €35 million or 7 percent of global annual turnover, whichever is higher, placing AI governance firmly on the audit agenda.

Why the EU AI Act Matters for Internal Audit

Internal auditors are uniquely positioned to assess organizational readiness, promote accountability, and support long-term compliance. The risks are not hypothetical. Non-compliance carries legal, financial, and reputational consequences that demand immediate attention.

Key Compliance Milestones

Understanding the enforcement timeline is essential. Here are the critical dates:

  • August 1, 2024: Regulation entered into force
  • February 2, 2025: Ban on prohibited AI practices takes effect; AI literacy obligations begin
  • August 2, 2025: Compliance obligations begin for general-purpose AI models
  • August 2, 2026: Full enforcement for high-risk AI systems

What this means: The real compliance deadline for high-risk AI systems is August 2, 2026. There are no grace periods or extensions expected.
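
To keep these dates visible in audit planning, a simple countdown can help. A minimal sketch in Python, using the milestone dates from the table above (the labels are shorthand, not the Act's wording):

    from datetime import date

    # EU AI Act milestones (from the table above)
    MILESTONES = {
        date(2024, 8, 1): "Regulation entered into force",
        date(2025, 2, 2): "Prohibited practices ban; AI literacy obligations",
        date(2025, 8, 2): "General-purpose AI model obligations",
        date(2026, 8, 2): "Full enforcement for high-risk AI systems",
    }

    def print_countdown(as_of: date) -> None:
        """Print each milestone with days elapsed or remaining."""
        for deadline, label in sorted(MILESTONES.items()):
            delta = (deadline - as_of).days
            status = f"{delta} days remaining" if delta > 0 else f"passed {-delta} days ago"
            print(f"{deadline.isoformat()}: {label} ({status})")

    print_countdown(date.today())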

AI System Risk Categories Explained

The EU AI Act categorizes AI systems into four levels of risk. Internal auditors must understand how systems across the enterprise are classified.

1. Prohibited AI Practices

These systems are banned entirely beginning February 2, 2025. Examples include:

  • Social scoring of individuals (by public or private actors)
  • AI that manipulates vulnerable individuals (such as children or persons with disabilities)
  • Real-time biometric identification in public spaces (with limited exceptions)

2. High-Risk AI Systems

These include AI used in critical sectors such as:

  • Biometric identification and facial recognition
  • Healthcare and medical diagnostics
  • Education and examination
  • Employment screening and HR decision-making
  • Financial services and credit scoring
  • Law enforcement, immigration, and justice

Obligations for these systems include:

  • Risk management processes and impact assessments
  • High-quality data governance
  • Robust documentation and technical record-keeping
  • Human oversight mechanisms
  • Cybersecurity and resilience controls
  • Registration in the EU high-risk AI database

3. Limited-Risk Systems

These systems, such as chatbots and emotion detection tools, must provide transparency to users. For example, users should be notified when they are interacting with AI.

4. Minimal-Risk Systems

Examples include spam filters and AI used in video games. These are subject only to voluntary codes of conduct.

Penalties for Non-Compliance

The EU AI Act sets tiered maximum penalties; for most organizations, the higher of the fixed amount or the turnover percentage applies (a worked example follows this list):

  • Up to €35 million or 7 percent of global turnover for prohibited AI uses
  • Up to €15 million or 3 percent for high-risk system violations
  • Up to €7.5 million or 1 percent for supplying incorrect, incomplete, or misleading information to regulators
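
Because each tier pairs a fixed cap with a share of worldwide annual turnover, exposure scales with revenue. A minimal sketch of that arithmetic, using a hypothetical turnover figure:

    # Maximum fine tiers under the EU AI Act: fixed cap in euros and
    # percentage of worldwide annual turnover; the higher generally applies.
    TIERS = {
        "prohibited_practices": (35_000_000, 0.07),
        "high_risk_violations": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }

    def max_fine(tier: str, annual_turnover_eur: float) -> float:
        """Return the maximum possible fine for a tier at a given turnover."""
        fixed_cap, pct = TIERS[tier]
        return max(fixed_cap, pct * annual_turnover_eur)

    # Hypothetical company with EUR 2 billion global turnover:
    # 7% of 2,000,000,000 = 140,000,000 > 35,000,000, so the percentage governs.
    print(f"EUR {max_fine('prohibited_practices', 2_000_000_000):,.0f}")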

How Internal Audit Can Support Compliance

Internal auditors can play a decisive role in AI governance and regulatory readiness. Here are practical steps to take:

Establish AI Governance

  • Confirm that AI policies are in place and aligned with regulatory expectations
  • Ensure oversight is embedded at the Board and executive levels

Inventory and Classify AI Systems

  • Identify all AI systems in use or under development
  • Map each system to its corresponding risk level under the EU AI Act (a minimal sketch follows this list)
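
A minimal sketch of what an inventory record might look like; the fields and example entries are illustrative assumptions, since actual tier assignment requires legal analysis against the Act's annexes:

    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        PROHIBITED = "prohibited"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    @dataclass
    class AISystemRecord:
        """One row in the enterprise AI inventory (fields are illustrative)."""
        name: str
        owner: str
        use_case: str
        tier: RiskTier
        in_production: bool

    inventory = [
        AISystemRecord("resume-screener", "HR", "employment screening", RiskTier.HIGH, True),
        AISystemRecord("support-chatbot", "IT", "customer chat", RiskTier.LIMITED, True),
        AISystemRecord("spam-filter", "IT", "email filtering", RiskTier.MINIMAL, True),
    ]

    # Group by tier so the audit plan can prioritize high-risk systems first.
    high_risk = [s.name for s in inventory if s.tier is RiskTier.HIGH]
    print("High-risk systems requiring full EU AI Act review:", high_risk)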

Assess Risks and Test Controls

  • Review risk assessments and mitigation strategies for high-risk systems
  • Test controls related to data sourcing, model validation, bias testing, and explainability (see the bias-screening sketch after this list)
  • Evaluate documentation standards for traceability and reproducibility
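
Bias testing can take many forms. One common screening heuristic, borrowed from employment-selection practice rather than mandated by the Act, is the disparate impact ratio, which compares favorable-outcome rates across groups. A minimal sketch, assuming binary model decisions:

    from collections import defaultdict

    def disparate_impact(outcomes: list, threshold: float = 0.8) -> dict:
        """Flag groups whose favorable-outcome rate falls below `threshold`
        times the best-performing group's rate (the 'four-fifths' heuristic)."""
        totals = defaultdict(int)
        favorable = defaultdict(int)
        for group, outcome in outcomes:  # outcome: 1 = favorable decision
            totals[group] += 1
            favorable[group] += outcome
        rates = {g: favorable[g] / totals[g] for g in totals}
        best = max(rates.values())
        return {g: r / best for g, r in rates.items() if r / best < threshold}

    # Hypothetical screening decisions: (applicant group, 1 if shortlisted)
    decisions = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 35 + [("B", 0)] * 65
    print("Groups below the 0.8 ratio:", disparate_impact(decisions))

A ratio below 0.8 is only a flag for deeper review, not a legal finding.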

Audit Third-Party Risk

  • Review vendor and service provider contracts for AI-related compliance terms
  • Verify that third-party tools meet EU requirements for data handling, transparency, and risk management

Evaluate Data Governance

  • Assess data quality controls and data sourcing practices
  • Confirm that training data is complete, representative, and ethically sourced (see the sketch below)
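
A minimal sketch of one such representativeness check, comparing training-data composition against reference population shares; the groups, counts, and tolerance are illustrative assumptions:

    def representation_gaps(training_counts: dict, population_shares: dict,
                            tolerance: float = 0.05) -> dict:
        """Return groups whose share of the training data deviates from the
        reference population share by more than `tolerance`."""
        total = sum(training_counts.values())
        gaps = {}
        for group, expected in population_shares.items():
            actual = training_counts.get(group, 0) / total
            if abs(actual - expected) > tolerance:
                gaps[group] = actual - expected
        return gaps

    # Hypothetical demographic breakdown of a credit-scoring training set
    train = {"18-34": 5200, "35-54": 3100, "55+": 700}
    reference = {"18-34": 0.35, "35-54": 0.40, "55+": 0.25}
    print(representation_gaps(train, reference))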

Verify Transparency Measures

  • Ensure that users are properly informed when interacting with AI systems
  • Audit processes for handling user feedback, complaints, or rights to challenge decisions

Monitor AI Incidents and Response Plans

  • Review incident response procedures specific to AI system failures
  • Confirm mechanisms for reporting, escalation, and regulatory notification are in place (see the sketch below)
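
Article 73 of the Act sets deadlines for reporting serious incidents involving high-risk systems, generally no later than 15 days after the provider becomes aware, with shorter windows for the most severe cases. A minimal deadline-tracker sketch; the severity categories and day counts should be verified against the final text:

    from datetime import date, timedelta

    # Reporting windows under Article 73 (verify against the final text):
    # general serious incidents: 15 days; death of a person: 10 days;
    # widespread infringement / critical-infrastructure disruption: 2 days.
    REPORTING_DAYS = {"general": 15, "death": 10, "widespread_or_critical": 2}

    def report_due(awareness_date: date, severity: str) -> date:
        """Latest date by which the regulator must be notified."""
        return awareness_date + timedelta(days=REPORTING_DAYS[severity])

    print(report_due(date(2026, 9, 1), "general"))  # 2026-09-16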

Tools That Can Support Internal Audit

To enhance efficiency and coverage, internal audit teams should consider the following technologies:

  • AI inventory platforms to track systems and associated risks
  • Model governance tools to monitor performance, bias, and approvals
  • Bias detection software to flag discriminatory outcomes
  • Regulatory tracking systems to monitor updates from EU authorities
  • Audit trail tools to document AI decisions and system changes

These solutions can improve auditability and help meet the transparency and accountability requirements of the law.
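
As one illustration of the last item, a tamper-evident audit trail can be as simple as an append-only log in which each entry carries a hash of the previous entry, so any retroactive edit breaks the chain. A minimal sketch with illustrative record fields:

    import hashlib
    import json
    from datetime import datetime, timezone

    def append_entry(log: list, record: dict) -> None:
        """Append a record to the audit trail, chaining it to the prior
        entry's hash so retroactive edits are detectable."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "record": record,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        log.append(entry)

    def verify(log: list) -> bool:
        """Recompute every hash; False means the trail was altered."""
        for i, entry in enumerate(log):
            expected_prev = log[i - 1]["hash"] if i else "0" * 64
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != expected_prev or entry["hash"] != recomputed:
                return False
        return True

    trail = []
    append_entry(trail, {"system": "resume-screener", "decision": "rejected", "model": "v3.2"})
    append_entry(trail, {"system": "resume-screener", "decision": "shortlisted", "model": "v3.2"})
    print("Trail intact:", verify(trail))  # True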

Internal Audit Action Plan

To prepare for full enforcement in 2026, internal auditors should take the following steps now:

  • Begin mapping all AI systems and projects
  • Educate leadership on the timeline and impact of the EU AI Act
  • Include AI in audit plans for risk and compliance reviews
  • Collaborate with legal, compliance, and IT teams to review governance structures
  • Recommend tools that enable monitoring and documentation
  • Prepare for regulatory inspection or audit readiness reviews

Final Thoughts

August 2, 2026 is not a suggested target. It is a binding compliance deadline for high-risk AI systems. Internal auditors bring the necessary skills, independence, and enterprise-wide visibility to help their organizations meet this challenge.

Early action will not only reduce regulatory and operational risk. It will also strengthen stakeholder trust in how the organization governs its use of artificial intelligence.

The time to act is now.

Stay connected: follow us on LinkedIn and explore more at www.CherryHillAdvisory.com.
