ISO 42001 is the international standard for artificial intelligence management systems. Published in December 2023, it provides a framework for organisations developing, deploying, or using AI systems to do so responsibly—addressing risks while enabling innovation.

What ISO 42001 actually involves

The standard requires you to establish a management system specifically for AI, covering:

  • AI policy and objectives aligned with your organisation’s strategy and values
  • Risk assessment for AI systems including bias, explainability, and unintended consequences
  • Impact assessments for systems that affect individuals or society
  • Data governance ensuring quality, appropriateness, and provenance of training data
  • Human oversight with appropriate levels of human control over AI decisions
  • Transparency requirements documenting how systems work and their limitations
  • Monitoring and improvement tracking system performance and addressing issues

ISO 42001 follows the same high-level structure as ISO 27001 and other management system standards, making integration straightforward.

Who needs ISO 42001

ISO 42001 is relevant for organisations that:

  • Develop AI systems whether for internal use or as products
  • Deploy AI in decision-making particularly in regulated sectors or high-stakes contexts
  • Procure and integrate AI tools into business processes
  • Operate in regulated industries where AI governance expectations are emerging
  • Want to demonstrate responsible AI to customers, investors, or regulators

With the EU AI Act now in force and its obligations phasing in, ISO 42001 provides a practical framework for meeting regulatory expectations around AI governance.

How ISO 42001 relates to the EU AI Act

The EU AI Act creates legal obligations for organisations deploying AI in Europe. ISO 42001 doesn’t guarantee compliance, but it provides a framework that substantially supports it:

  • Risk-based approach aligns with the AI Act’s risk classification
  • Documentation requirements support conformity assessment needs
  • Human oversight provisions address AI Act requirements for high-risk systems
  • Quality management for training data matches AI Act expectations
  • Monitoring and logging support post-market surveillance obligations

Implementing ISO 42001 now positions you well for AI Act compliance as requirements become clearer.

How we can help

AI inventory and assessment

First, we need to understand what AI you have. We’ll help you:

  • Identify all AI systems in use or development
  • Classify systems by risk level
  • Assess current governance arrangements
  • Identify gaps against ISO 42001 requirements
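ISO 42001 does not prescribe any tooling, but the inventory step above is easier to reason about with a concrete shape in mind. The sketch below is purely illustrative: the risk tiers loosely mirror the EU AI Act's classification, and all names (systems, owners, gap descriptions) are hypothetical examples, not anything the standard mandates.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative risk tiers loosely mirroring the EU AI Act's classification;
# your own risk assessment methodology would define the real tiers and criteria.
class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AISystemRecord:
    """One entry in the AI inventory: what the system is, who owns it,
    how risky it is, and where governance falls short of ISO 42001."""
    name: str
    owner: str
    purpose: str
    risk_tier: RiskTier
    governance_gaps: list[str] = field(default_factory=list)

def high_risk_systems(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Filter the inventory to the systems needing the most scrutiny."""
    return [s for s in inventory
            if s.risk_tier in (RiskTier.HIGH, RiskTier.PROHIBITED)]

# Hypothetical inventory entries for illustration only.
inventory = [
    AISystemRecord("cv-screening", "HR", "Shortlist job applicants",
                   RiskTier.HIGH,
                   governance_gaps=["no bias testing", "no human review step"]),
    AISystemRecord("support-chatbot", "Ops", "Answer FAQ queries",
                   RiskTier.LIMITED),
]

for system in high_risk_systems(inventory):
    print(system.name, system.governance_gaps)
```

Even a simple register like this makes the gap analysis concrete: each record pairs a system with its risk tier and its known shortfalls against the standard.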

Management system implementation

We’ll guide you through building an AI management system:

  • Policy development — AI policy, ethics principles, acceptable use
  • Risk assessment methodology — Processes for evaluating AI-specific risks including bias, fairness, and explainability
  • Impact assessment procedures — Framework for assessing effects on individuals and society
  • Data governance — Controls for training data quality, provenance, and appropriateness
  • Human oversight mechanisms — Defining when and how humans review AI outputs
  • Incident management — Procedures for AI-related issues and failures
  • Documentation requirements — Technical documentation, user information, and records
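Several of the elements above (human oversight, incident management, records) come together in decision logging. The standard requires records but prescribes no schema, so the sketch below is one possible shape, assuming a hypothetical confidence-threshold rule for escalating outputs to human review; every field name is illustrative.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(system: str, model_version: str, inputs_summary: str,
                    output: str, confidence: float,
                    review_threshold: float = 0.8) -> str:
    """Record an AI decision as JSON and flag low-confidence outputs
    for human review. Field names are illustrative, not mandated by
    ISO 42001, which requires records but does not prescribe a schema."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "inputs_summary": inputs_summary,
        "output": output,
        "confidence": confidence,
        # Hypothetical oversight rule: anything below the threshold
        # is routed to a human reviewer rather than acted on directly.
        "needs_human_review": confidence < review_threshold,
    }
    return json.dumps(entry)

record = json.loads(log_ai_decision("cv-screening", "2.1.0",
                                    "applicant #1042", "reject", 0.62))
print(record["needs_human_review"])
```

A log of this kind serves three requirements at once: it is the record-keeping evidence auditors look for, the trigger for human oversight, and the raw material for incident investigation.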

Integration with existing management systems

If you have ISO 27001 or other management systems, we’ll help create an integrated approach that avoids duplication while addressing AI-specific requirements.

Certification preparation

When you’re ready for certification:

  • Internal audit support
  • Management review facilitation
  • Documentation review
  • Audit readiness assessment

What to expect

Implementation timelines vary significantly depending on how many AI systems you have and their complexity. A typical first-time implementation takes 6-12 months.

Organisations with mature ISO 27001 systems can often accelerate this by building on existing processes.

Common questions

Do we need this if we’re just using AI tools, not developing them? Yes, if those tools are making or supporting decisions that matter. ISO 42001 covers AI users as well as developers. How you use AI tools, what oversight you maintain, and how you handle issues are all in scope.

What counts as AI under this standard? ISO 42001 uses a broad definition aligned with emerging regulations. If a system learns from data, makes predictions, or generates content, it’s likely in scope. This includes machine learning, generative AI, decision support systems, and automated decision-making.

How does this relate to ISO 27001? They’re complementary. ISO 27001 covers information security broadly; ISO 42001 addresses AI-specific risks that ISO 27001 wasn’t designed for—bias, explainability, societal impact. Many controls overlap, but ISO 42001 adds substantial AI-specific requirements.

Is certification available? Yes. Certification bodies are now offering ISO 42001 certification. Because the standard is new, the certification landscape is still developing, but accredited certification is available.

What about generative AI specifically? Generative AI systems (like large language models) are in scope. ISO 42001 addresses the specific risks of generative AI including content accuracy, potential misuse, and intellectual property considerations.

How do we handle AI we don’t fully understand? This is a key challenge ISO 42001 addresses. The standard requires you to document what you know about systems, acknowledge limitations, and implement appropriate oversight based on risk. Perfect understanding isn’t required—appropriate governance is.

Ready to discuss your requirements?

Let's have a conversation about how we can help your organisation.