
ISO 42001 Clause 6 – Planning

Planning for AI Management (Risks, Impact & Objectives)

ISO/IEC 42001 Clause 6 Guidance

ISO 42001 Clause 6 includes requirements for identifying and evaluating AI risks, deciding how to treat those risks, performing impact assessments, and establishing AI objectives and plans to achieve them.

The goal of ISO/IEC 42001 Clause 6

Proactively address AI-related risks and opportunities and set clear objectives for trustworthy and responsible AI.

The goal is to make sure the AIMS achieves its intended results, mitigates undesirable effects, and continually improves.

Clause 6: Planning in ISO 42001

This clause is broken down into sub-clauses (6.1 through 6.3), each covering a key aspect of planning.

  • Set AI objectives that align with the AI policy and business goals.
  • Determine AI system risks, impacts, and opportunities and plan actions to address them.
  • Perform a thorough AI risk assessment and an AI impact assessment, going beyond traditional standards by examining potential effects on individuals and society.
  • Plan for risk treatment and changes in a controlled way, ensuring the AIMS remains effective and up-to-date.

Clause 6.1 – Actions to Address AI Risks and Opportunities

Clause 6.1 and its subparts guide organizations in systematically managing AI-related risks and opportunities as part of their planning process.

Your organization should consider its context (see Clause 4) and stakeholder requirements to pinpoint what could go right or wrong with its AI systems. By doing so, the organization aims to:

  • Achieve intended outcomes of the AI management system (make sure AI systems perform as expected and deliver value).
  • Prevent or reduce undesired effects (minimize negative outcomes like failures, biases, or breaches).
  • Pursue continual improvement in how AI is developed, used, and governed.

To support this, your organization should establish AI risk criteria – essentially defining its risk appetite. 

[Figure: risk appetite scale (1 to 25) showing the threshold between acceptable and unacceptable AI risks.]

These criteria will guide consistent risk assessments and treatments. Risks and opportunities should be identified for each AI system (or groups of systems) considering the system’s domain, intended use, and the internal/external context.
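As a purely illustrative sketch of what such criteria might look like in practice, the Python snippet below scores each risk on a 1-to-25 scale (likelihood × impact) and compares it against a hypothetical appetite threshold. The scale labels and the threshold value are assumptions for illustration, not requirements of the standard.

```python
# Illustrative risk criteria: 5-point likelihood and impact scales combined
# into a 1-25 risk level. Scale labels and the threshold are hypothetical.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

RISK_APPETITE_THRESHOLD = 12  # scores above this are treated as unacceptable


def risk_level(likelihood: str, impact: str) -> int:
    """Combine likelihood and impact into a single 1-25 risk score."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]


def is_acceptable(score: int) -> bool:
    """Evaluate a score against the documented risk appetite threshold."""
    return score <= RISK_APPETITE_THRESHOLD


# Example: a bias risk rated "likely" with "major" impact scores 20 -> unacceptable.
score = risk_level("likely", "major")
print(score, is_acceptable(score))
```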

The output of this planning is a set of actions to address the identified risks or opportunities, which must be integrated into the organization’s processes and later evaluated for effectiveness.

All decisions and actions (identified risks, chosen mitigations, etc.) should be documented as evidence of a proactive risk management process.

6.1.1 General (Risk Management Framework)

Clause 6.1.1 establishes the overall approach to identifying and addressing AI risks and opportunities.

It sets the direction for risk planning, drawing on the organization’s context and AI use cases, and mandates creation of an “AI Risk Criteria” document defining how risks will be evaluated (e.g. scales for likelihood, impact on individuals/organization, risk levels, and what is considered acceptable risk).

This general subclause ensures that risk management is grounded in the specific context and domain of the AI system, and that the organization maintains documented information on the actions taken to identify AI risks and opportunities.

(For detailed guidance on AI risk management processes, organizations can refer to ISO/IEC 23894.)

6.1.2 AI Risk Assessment

Clause 6.1.2 requires a defined, repeatable process for assessing AI risks.

The organization must identify potential risks associated with its AI systems, analyze those risks by estimating their potential consequences (to the organization, individuals, and society) and likelihood, and then determine risk levels based on the predefined criteria.

The risk assessment process should produce consistent, valid results and consider the worst-case impacts – for example, using the outputs of any AI impact assessment (see 6.1.4) when evaluating consequences.

All steps (risk identification, analysis, evaluation) and outcomes should be documented (e.g. in an AI risk register or similar record) and maintained as formal documented information. The goal is to have a clear, evidence-based understanding of which AI risks could prevent the organization from achieving its AI objectives or could harm stakeholders.
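One lightweight way to keep that evidence trail is a structured risk register. The sketch below is a hypothetical example of what a register entry might capture; the field names and values are illustrative, not prescribed by ISO 42001.

```python
# Hypothetical AI risk register entry; field names and values are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIRiskRecord:
    risk_id: str
    ai_system: str
    description: str              # what could go wrong, and for whom
    consequences: list            # to the organization, individuals, and society
    likelihood: str               # rated per the documented AI risk criteria
    impact: str
    risk_level: int               # derived from the criteria (e.g. likelihood x impact)
    acceptable: bool              # evaluated against the risk appetite threshold
    impact_assessment_ref: Optional[str] = None  # link to 6.1.4 findings, if any


risk_register = [
    AIRiskRecord(
        risk_id="AIR-001",
        ai_system="Loan approval model",
        description="Model systematically disadvantages a protected group",
        consequences=["unfair lending decisions", "regulatory exposure", "reputational harm"],
        likelihood="possible",
        impact="major",
        risk_level=12,
        acceptable=False,
        impact_assessment_ref="AIA-2024-03",
    )
]
```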

6.1.3 AI Risk Treatment

Based on the risk assessment results, the organization must select appropriate risk treatment options for each identified AI risk (for example: mitigate the risk, avoid it, transfer it, or accept it).

The standard requires defining an AI risk treatment process that determines what specific controls (safeguards or measures) are needed to implement the chosen treatment for each risk.

Crucially, the organization must compare the selected controls with the recommended controls listed in Annex A of ISO 42001 to ensure that no necessary control has been overlooked.

If any relevant controls from Annex A are not chosen, the organization should be prepared to justify their exclusion, and if additional controls (beyond Annex A) are needed, those should be identified and included as well.

The outputs of risk treatment planning include a documented Statement of Applicability (SoA) – a document listing all controls the organization has determined to implement (or exclude) and the justification for each decision – and an AI risk treatment plan.

The risk treatment plan should be approved by management, include acceptance of any residual risks, and be communicated within the organization (and to interested parties as appropriate).

In essence, Clause 6.1.3 ensures the organization develops a concrete action plan to mitigate AI risks using a comprehensive set of controls, aligning with Annex A’s guidance and any additional safeguards required.
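One simple way to make that Annex A comparison auditable is to hold the included and excluded controls alongside the Annex A catalogue and flag any control with no recorded decision. The sketch below is hypothetical: the control identifiers are an abbreviated, assumed subset, and the authoritative list must be taken from Annex A of the standard itself.

```python
# Hypothetical Statement of Applicability check. Control IDs are placeholders;
# the authoritative list of reference controls is Annex A of ISO/IEC 42001.

ANNEX_A_CONTROLS = {"A.2.2", "A.4.2", "A.5.2", "A.6.2.4", "A.9.2"}  # assumed subset

included = {
    "A.2.2": "AI policy documented and approved",
    "A.5.2": "AI system impact assessment process established",
    "A.9.2": "Responsible-use processes defined for deployed AI systems",
}

excluded = {
    "A.4.2": "Covered by the existing ISO 27001 resource management process",
}


def statement_of_applicability():
    """Return one row per Annex A control: (control, status, justification)."""
    rows = []
    for control in sorted(ANNEX_A_CONTROLS):
        if control in included:
            rows.append((control, "included", included[control]))
        elif control in excluded:
            rows.append((control, "excluded", excluded[control]))
        else:
            rows.append((control, "GAP", "No decision recorded - review required"))
    return rows


for row in statement_of_applicability():
    print(row)
```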

(Annex B of ISO 42001 provides implementation guidance for these controls, supporting effective risk treatment.)

6.1.4 AI System Impact Assessment

This subclause introduces a requirement unique to AI governance. The organization must establish a process to assess the potential impacts of its AI systems on individuals, groups, or society at large.

Unlike traditional risk assessment (which might focus on organizational impact), this AI system impact assessment takes a broader perspective: evaluating how the deployment, intended use, or even foreseeable misuse of an AI system could affect people (e.g. customers, users, or affected communities) and societal factors.

The assessment should consider the specific technical context of the AI system, its socio-economic context, and the legal jurisdictions in which it operates.

The results of the impact assessment must be documented and, importantly, fed back into the risk assessment and treatment process.

In other words, if the impact assessment reveals significant potential harms (such as biases, privacy infringements, safety risks, or ethical issues), those findings should directly inform how risks are evaluated and what controls or modifications are needed.

This creates a feedback loop ensuring that societal and human impacts are not overlooked in the organization’s risk management strategies.
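To make that feedback loop concrete, impact assessment findings can be recorded in a form that maps directly onto risk register entries. The hypothetical sketch below (structure and field names are assumptions for illustration) links each finding to the risk it informs.

```python
# Hypothetical AI system impact assessment record; structure is illustrative only.
from dataclasses import dataclass


@dataclass
class ImpactFinding:
    affected_group: str        # e.g. applicants, end users, an affected community
    potential_harm: str        # e.g. bias, privacy infringement, safety risk
    severity: str              # rated per the organization's documented criteria
    informs_risk_id: str       # risk register entry this finding feeds back into


@dataclass
class ImpactAssessment:
    assessment_id: str
    ai_system: str
    intended_use: str
    foreseeable_misuse: list
    jurisdictions: list        # legal contexts in which the system operates
    findings: list


assessment = ImpactAssessment(
    assessment_id="AIA-2024-03",
    ai_system="Loan approval model",
    intended_use="Automated credit decisions for consumer loans",
    foreseeable_misuse=["reusing credit scores for employment screening"],
    jurisdictions=["EU", "US"],
    findings=[
        ImpactFinding(
            affected_group="Applicants from protected groups",
            potential_harm="Systematic bias in approval rates",
            severity="major",
            informs_risk_id="AIR-001",  # ties back into the 6.1.2 risk assessment
        )
    ],
)
```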

(Guidance for performing AI impact assessments is included in ISO/IEC 42005, and some Annex A controls – e.g. control A.5 – also address impact assessment.)

Clause 6.2 – AI Objectives and Planning to Achieve Them

Clause 6.2 shifts focus to setting AI objectives and figuring out how to meet them. Just as with any management system (e.g. quality, information security), having clear objectives is crucial. Here, top management needs to establish objectives for the AI management system at relevant levels and functions of the organization.

Good AI objectives under ISO 42001 should meet several criteria: they must be consistent with the AI policy, be measurable (if feasible), consider all applicable requirements (legal, customer, etc.), be monitored and updated as needed, and be communicated to those who need to know them. Essentially, objectives could be targets related to AI system performance or governance – for example, an objective might be “Improve the fairness of our AI-driven loan approval system to reduce bias by 20% within one year” or “Achieve 100% compliance with our AI ethics checklist for all new AI projects”. Objectives might address areas like accuracy, transparency, bias reduction, safety, customer experience, or efficiency.

Once objectives are set, the organization must plan how to achieve them. This planning will typically address:

  • What will be done – the specific actions or projects to reach the objective (for example, retraining an AI model with more diverse data to improve fairness, or implementing a new monitoring tool).
  • What resources are required – the budget, personnel, technology, and other resources needed.
  • Who will be responsible – assign ownership for each objective or action plan to specific roles or teams.
  • When it will be completed – set deadlines or milestones to track progress.
  • How the results will be evaluated – decide how you will measure success (the metrics or KPIs) and how frequently you will check progress. This ties back to objectives being measurable and monitored.

It can be helpful to document this as an “action plan” or roadmap for each AI objective. For instance, if one objective is to enhance AI transparency, the plan might include actions like “research and deploy an explainability tool by Q3” or “conduct quarterly reviews of model decisions with an oversight committee”, with assigned owners and measurement criteria.
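Such an action plan can be captured in a simple structured record so that the what, who, when, resources, and success measure for each objective are explicit. The sketch below is hypothetical; every value is illustrative.

```python
# Hypothetical action plan for one AI objective; all values are illustrative.
ai_objective_plan = {
    "objective": "Reduce measured bias in the loan approval system by 20% within one year",
    "aligned_with_policy": "Fairness commitment in the AI policy",
    "actions": [
        "Retrain the model with more diverse data",
        "Deploy an explainability and fairness monitoring tool",
    ],
    "resources": ["ML engineering time", "fairness tooling budget"],
    "owner": "Head of Data Science",
    "milestones": {"retraining complete": "Q2", "monitoring tool live": "Q3"},
    "evaluation": {"metric": "approval-rate disparity", "review_frequency": "quarterly"},
}
```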

Objectives give direction to the AI management efforts and provide a basis for evaluating performance in Clause 9 (Performance evaluation). Keep objectives and plans as documented information as well; ISO auditors will expect to see evidence of AI objectives and the plans to reach them.

Clause 6.3 – Planning of Changes

Clause 6.3 deals with how an organization manages changes to the AIMS. Changes could include updates to processes, integrating new AI systems into scope, organizational changes that affect AI governance, or modifications prompted by incidents and improvements. The clause is brief but important: it says changes must be carried out in a planned manner.

In practice, planned change management means:

  • Evaluate the potential impact of a change before implementing it (so you don’t unintentionally introduce new risks or disrupt the AIMS).
  • Plan the steps needed to implement the change, including timelines and responsible persons. For example, if you decide to adopt a new data governance tool as part of your AI risk controls, plan how to roll it out and train staff.
  • Communicate the change to all relevant stakeholders (internally, and externally if needed) so that everyone knows what’s changing.
  • Update any documentation, procedures, or scope definitions that the change affects.
  • Monitor the change as it’s implemented to ensure it achieves the desired result and doesn’t cause unforeseen problems.

A key part of Clause 6.3 is ensuring changes are approved appropriately. Significant changes to the AIMS (like introducing a new high-risk AI system or altering risk criteria) should get management review and approval before execution. This ties in with the governance aspect. Leadership should be aware of and agree to major adjustments in the AI management approach.
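In practice this can be handled with a simple change record that captures the impact evaluation, the rollout plan, communication, affected documents, and the approval before anything changes. The sketch below is hypothetical and the fields are assumptions, not requirements of the standard.

```python
# Hypothetical AIMS change record; fields and values are illustrative only.
aims_change_record = {
    "change_id": "CHG-012",
    "description": "Bring a new customer-support chatbot into the AIMS scope",
    "impact_evaluation": [
        "New AI risk assessment and impact assessment required",
        "Scope statement and Statement of Applicability need updating",
    ],
    "plan": {
        "steps": ["risk assessment", "control selection", "staff training", "go-live"],
        "responsible": "AI Governance Lead",
        "target_date": "2025-09-30",
    },
    "communication": ["AI steering committee", "support team", "data protection officer"],
    "documents_to_update": ["AIMS scope", "Statement of Applicability", "risk register"],
    "approval": {"approver": "Top management / AI steering committee", "status": "pending"},
    "post_implementation_review": "30 days after go-live",
}
```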

Planning changes in a controlled way helps maintain consistency and stability in your management system even as you improve it. It prevents chaos that could come from ad-hoc or poorly managed changes.

Clause 6.3 is about having a mini “change management” process so the AIMS can evolve safely. This is very much in line with other ISO standards (like ISO 27001) which also require changes to be planned.
