Planning for AI Management (Risks, Impact & Objectives)
ISO/IEC 42001 Clause 6 – Planning (Guidance & Best Practices)
ISO/IEC 42001 Clause 6 focuses on planning within an AI Management System (AIMS).
It requires organizations to proactively identify and address AI-related risks and opportunities, perform thorough risk analyses (including AI-specific impact assessments), and set clear AI objectives with plans to achieve them.
Clause 6: Planning Overview
Clause 6 ensures the AIMS is designed to achieve its intended results, prevent undesirable outcomes, and foster continual improvement in AI governance. This clause is divided into sub-clauses (6.1 through 6.3) covering distinct planning components:
- AI Risk Management (6.1): Determine the context-specific risks and opportunities of AI systems and plan actions to address them (includes risk identification, assessment, treatment, and impact assessment).
- AI Objectives (6.2): Establish measurable objectives for the AI management system (aligned with the AI policy and stakeholder requirements) and plan how to achieve these targets.
- Managing Changes (6.3): Plan and control any changes to the AI management system so that it remains effective and no new risks are introduced.
Clause 6.1 – Actions to Address AI Risks and Opportunities
Clause 6.1 guides organizations in systematically managing AI-related risks and opportunities as part of planning.
When planning the AIMS, your organization should consider its context (Clause 4.1) and stakeholder requirements (Clause 4.2) to pinpoint what could go right or wrong with each AI system.
The aim is to:
(1) ensure the AI management system achieves its intended outcomes;
(2) prevent or reduce undesired effects (e.g. AI failures, biases, security breaches);
(3) drive continual improvement.
To support this, establish clear AI risk criteria (essentially defining your risk appetite) that distinguish acceptable vs. unacceptable risk levels. These criteria (e.g. scoring scales for likelihood and impact) provide a consistent basis for ongoing risk assessments and treatments.
Each AI system (or grouping of systems) should be evaluated in light of its domain, intended use, and the internal/external context. This ensures risks and opportunities are identified comprehensively for the scope of the AIMS. The organization must then plan actions to address the identified risks or opportunities and integrate those actions into its processes, making sure to later evaluate if they were effective. All decisions, risk criteria, identified risks, and planned responses should be documented as evidence of a proactive risk management process. (Notably, ISO/IEC 23894 provides additional guidance on AI risk management frameworks.)
6.1.1 General (Risk Management Framework)
Sub-clause 6.1.1 sets the overall approach for identifying and addressing AI risks and opportunities. It draws on the organization’s context and AI use cases to establish a risk management framework. A key task is creating documented “AI Risk Criteria” that define how risks will be evaluated – for example, setting scales for likelihood and for impact (on the organization, individuals, and society), defining risk levels, and clarifying what counts as an acceptable risk. This effectively formalizes the organization’s risk appetite and ensures everyone assesses risks using the same yardstick. By tailoring the risk criteria to the domain and context of your AI systems, you ground the risk management process in reality. Sub-clause 6.1.1 also requires maintaining documented information on the actions taken to identify risks and opportunities, to demonstrate a systematic, evidence-based approach.
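The standard does not prescribe a format for risk criteria, but capturing them as structured data makes them easy to apply consistently. Below is a minimal Python sketch assuming a hypothetical 1–5 likelihood scale, a 1–5 impact scale, a multiplicative risk level, and an invented acceptance threshold; your documented criteria would define their own scales and scoring scheme.

```python
from dataclasses import dataclass

# Hypothetical 1-5 ordinal scales; an organization defines and documents
# its own descriptors as part of its AI risk criteria.
LIKELIHOOD = {1: "rare", 2: "unlikely", 3: "possible", 4: "likely", 5: "almost certain"}
IMPACT = {1: "negligible", 2: "minor", 3: "moderate", 4: "major", 5: "severe"}

@dataclass
class RiskCriteria:
    """Documented AI risk criteria (clause 6.1.1); values are illustrative."""
    likelihood_scale: dict
    impact_scale: dict
    acceptance_threshold: int  # assumed risk appetite: levels above this need treatment

    def risk_level(self, likelihood: int, impact: int) -> int:
        # Simple multiplicative scheme; an additive or lookup-matrix scheme
        # works equally well, as long as it is applied consistently.
        return likelihood * impact

    def is_acceptable(self, likelihood: int, impact: int) -> bool:
        return self.risk_level(likelihood, impact) <= self.acceptance_threshold

criteria = RiskCriteria(LIKELIHOOD, IMPACT, acceptance_threshold=8)
print(criteria.risk_level(4, 5))     # 20
print(criteria.is_acceptable(4, 5))  # False: exceeds the documented appetite
```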
6.1.2 AI Risk Assessment
Clause 6.1.2 requires the organization to define and implement a repeatable AI risk assessment process. This process must align with the AI policy and objectives (from Clauses 5 and 6.2) and produce consistent, valid, and comparable results over time. The risk assessment should identify any risks that could aid or hinder achieving the AI objectives, then analyze those risks by estimating: (1) the potential consequences if the risk materializes (impacts on the organization, individuals, and society), (2) the realistic likelihood of occurrence, and (3) the resulting level of risk based on the predefined criteria. In practice, this means compiling a list of AI-specific risks (e.g. data bias, model drift, security vulnerabilities) and scoring them against your likelihood and impact scales to prioritize attention. The assessment should be thorough – for significant risks, consider worst-case impacts and leverage any AI impact assessment results (from 6.1.4) to inform the consequence evaluation. After analysis, the next step is to evaluate the risks: compare each risk’s level against your risk acceptance criteria to decide which risks are acceptable and which require treatment or controls. All steps of the risk assessment (identification, analysis, evaluation) and the findings must be documented – e.g. in an AI risk register or log – to ensure transparency and to support audits. Adopting general risk management best practices such as those in ISO 31000 can help structure this process.
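To make the identify-analyze-evaluate cycle concrete, here is a small risk-register sketch that applies criteria like those above; the risks, scores, and threshold are all invented for illustration.

```python
from dataclasses import dataclass

ACCEPTANCE_THRESHOLD = 8  # taken from the documented risk criteria (6.1.1)

@dataclass
class RiskEntry:
    """One row of an AI risk register (clause 6.1.2); fields are illustrative."""
    risk_id: str
    description: str
    likelihood: int  # per the documented likelihood scale
    impact: int      # worst credible consequence per the impact scale

    @property
    def level(self) -> int:
        return self.likelihood * self.impact

register = [
    RiskEntry("R-01", "Training-data bias skews loan decisions", likelihood=3, impact=5),
    RiskEntry("R-02", "Model drift degrades fraud detection", likelihood=4, impact=3),
    RiskEntry("R-03", "Prompt injection exposes internal data", likelihood=2, impact=4),
]

# Evaluation step: compare each risk's level against the acceptance criteria.
for r in sorted(register, key=lambda r: r.level, reverse=True):
    verdict = "accept" if r.level <= ACCEPTANCE_THRESHOLD else "treat"
    print(f"{r.risk_id} level={r.level:2d} -> {verdict}: {r.description}")
```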
6.1.3 AI Risk Treatment
Based on the risk assessment results, Clause 6.1.3 requires the organization to select appropriate risk treatment options for each identified AI risk. In essence, for each significant risk you decide whether to mitigate (reduce likelihood or impact), avoid (stop the risky activity), transfer (e.g. insure or outsource), or accept the risk. The standard mandates establishing a process to determine which controls are needed to implement the chosen treatment options. Crucially, you must then compare your selected controls with the reference controls in ISO 42001 Annex A to ensure no necessary control has been overlooked – think of Annex A as a checklist of best-practice safeguards for AI risks. If you decide not to implement a particular Annex A control, be prepared to justify its exclusion (e.g. explain why that risk isn’t applicable or is sufficiently mitigated by other means). Likewise, if your risk treatment requires controls beyond those in Annex A, include them – the standard allows adding custom controls or controls from other sources as needed.
The risk treatment process produces two key pieces of documentation. First is the Statement of Applicability (SoA) – a document that lists all controls you have determined to implement (whether from Annex A or additional) and notes any Annex A controls you’ve excluded, with justification for each decision. The SoA is central to ISO 42001 compliance; auditors will scrutinize it to ensure you have solid, risk-based reasons for every inclusion or exclusion. Second is an AI Risk Treatment Plan, which details how the chosen controls will be implemented (covering what will be done, by whom, with what resources, and on what timeline). This plan should be approved by management, include acceptance of any residual risks left after treatment, and be communicated within the organization (and to interested external parties, if appropriate).
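One lightweight way to keep the SoA and treatment plan auditable is to record each decision as structured data. The sketch below uses invented control IDs (the “A.x.n” references are placeholders, not actual Annex A numbering) and hypothetical field names.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SoAEntry:
    """One Statement of Applicability row (clause 6.1.3); IDs are placeholders."""
    control_id: str       # an Annex A reference or a custom control
    included: bool
    justification: str    # risk-based reason for inclusion or exclusion
    risk_ids: tuple = ()  # register risks this control treats

@dataclass
class TreatmentAction:
    """One line item of the AI Risk Treatment Plan."""
    control_id: str
    action: str
    owner: str
    due: str  # target date
    residual_risk_accepted_by: Optional[str] = None  # management sign-off

soa = [
    SoAEntry("A.x.1", True, "Treats R-01, which exceeds the acceptance threshold", ("R-01",)),
    SoAEntry("A.x.2", False, "Excluded: no third-party AI suppliers in scope"),
]

plan = [
    TreatmentAction("A.x.1", "Quarterly bias audit of the loan model",
                    owner="ML Lead", due="2025-09-30",
                    residual_risk_accepted_by="Chief Risk Officer"),
]
```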
6.1.4 AI System Impact Assessment
Sub-clause 6.1.4 introduces a requirement unique to AI governance: conducting an AI system impact assessment. This is a specialized analysis focusing on the potential consequences that the development, deployment, intended use, or even foreseeable misuse of an AI system could have on individuals or groups of people and on society at large. In other words, beyond just looking at how AI risks affect your organization’s objectives, you need to evaluate how your AI systems might impact external stakeholders – for example, could an AI cause harm or bias against certain groups, affect privacy rights, or have safety implications in society? The impact assessment should take into account the specific technical context of the AI system, its socio-economic context, and any relevant laws or regulations in the jurisdictions where the AI is used. The findings of this assessment must be documented, and importantly, fed back into the risk assessment process (6.1.2). This means if you discover significant potential impacts (say, an AI decision system could unfairly deny services to a vulnerable population), those insights should directly influence how you evaluate risk severity and what controls you implement in 6.1.3. Essentially, 6.1.4 creates a feedback loop ensuring that societal and human impacts are not overlooked in your AI risk management strategy. In some cases, you might conduct multiple impact assessments (e.g. a privacy impact assessment or a safety hazard analysis) as part of your overall AI risk management, especially for high-stakes AI applications. The standard suggests using discipline-specific impact assessments where appropriate (for instance, referring to ISO/IEC 42005 for guidance on AI impact assessment methodology). Where feasible, consider sharing key results of these impact assessments with relevant interested parties to maintain transparency and trust.
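The feedback loop from 6.1.4 into 6.1.2 can also be made explicit in tooling. Here is a minimal sketch, with invented risks and findings, in which impact-assessment results raise (but never lower) a risk’s impact score before it is re-evaluated against the risk criteria.

```python
# Illustrative register rows (same shape as the 6.1.2 sketch); scores are invented.
register = [
    {"risk_id": "R-01", "description": "Training-data bias in loan decisions",
     "likelihood": 3, "impact": 3},
    {"risk_id": "R-02", "description": "Model drift in fraud detection",
     "likelihood": 4, "impact": 3},
]

# Findings from the AI system impact assessment (6.1.4): risk_id mapped to the
# minimum impact score implied by harm to individuals or society.
impact_findings = {"R-01": 5}  # e.g. unfair denial of service to a vulnerable group

def apply_impact_assessment(register, findings):
    """Feed 6.1.4 findings back into the 6.1.2 scores (escalate, never lower)."""
    for row in register:
        floor = findings.get(row["risk_id"])
        if floor is not None and row["impact"] < floor:
            row["impact"] = floor  # triggers re-evaluation against the risk criteria
    return register

for row in apply_impact_assessment(register, impact_findings):
    print(row["risk_id"], "level =", row["likelihood"] * row["impact"])
```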
Clause 6.2 – AI Objectives and Planning to Achieve Them
Clause 6.2 shifts the focus to setting AI objectives and determining plans to achieve those objectives. Top management is responsible for establishing AI objectives at relevant functions and levels of the organization. These objectives must align with the AI policy (set in Clause 5) and any applicable requirements (laws, customer expectations, ethical guidelines), ensuring consistency with the organization’s commitments. Good AI objectives should also be specific and measurable, to allow effective monitoring of progress (e.g. “reduce false positives in the fraud detection AI by 15% in the next year”). Additionally, objectives need to be communicated to all relevant staff, periodically monitored, and updated when necessary to remain pertinent. The standard requires keeping these objectives as documented information, so you might maintain an “AI Objectives Register” or include them in your AI management plan.
When planning how to achieve the AI objectives, Clause 6.2 asks the organization to spell out: what will be done, what resources are required, who will be responsible, when it will be completed, and how the results will be evaluated. In practice, this means for each AI objective, you create a concrete action plan or roadmap. For example, if one objective is to improve the transparency of an AI system, the plan might include actions like adopting an explainability tool, conducting regular bias audits, assigning responsible AI leads for oversight, allocating budget for these initiatives, setting a timeline (e.g. quarterly milestones), and defining metrics to gauge success (such as an increase in the AI model’s interpretability score or stakeholder satisfaction). By clearly documenting how each objective will be achieved, you ensure that objectives aren’t just lofty ideals but translate into actionable projects. These plans should be revisited and adjusted as needed (for example, if resources change or if initial efforts aren’t meeting targets). Clause 6.2 effectively ties into Clause 9 (Performance evaluation) – the objectives set here will later be reviewed for progress, so planning them well sets the stage for meaningful performance monitoring. Make sure all AI objectives and their associated plans are kept as documented information, since auditors will expect to see evidence that you have defined objectives and a clear path to reach them.
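An AI Objectives Register can mirror the what/resources/who/when/how structure the clause asks for. Below is a sketch using the false-positive objective mentioned earlier; the field names and values are chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class AIObjective:
    """Clause 6.2: a measurable AI objective plus its plan to achieve it."""
    objective: str  # what the organization wants to achieve
    actions: tuple  # what will be done
    resources: str  # what resources are required
    owner: str      # who will be responsible
    deadline: str   # when it will be completed
    metric: str     # how the results will be evaluated

fraud_objective = AIObjective(
    objective="Reduce false positives in the fraud-detection AI by 15% in the next year",
    actions=("Retrain with expanded negative samples", "Introduce threshold review board"),
    resources="Two data scientists; labeling budget",
    owner="Head of Fraud Analytics",
    deadline="2026-06-30",
    metric="False-positive rate on the monthly validation set",
)
```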
Clause 6.3 – Planning of Changes
Clause 6.3 addresses how an organization manages changes to the AI Management System. Even after initial implementation, your AIMS will evolve – you might add new AI systems into scope, update processes based on new regulations or lessons learned, or make improvements to controls. Clause 6.3 requires that such changes be carried out in a planned manner. In practice, this means having a simple change management procedure in place for the AIMS. Before implementing a change, evaluate its potential impacts on your AI systems and controls so that you don’t inadvertently introduce new risks or compliance gaps. Plan out the change steps, including what needs to be done, what resources are needed, who will manage the change, and a timeline for the implementation. It’s also important to communicate the change to all relevant stakeholders – internally this could mean notifying the AI governance committee, affected development teams, or compliance officers; externally it might involve informing clients or regulators if the changes significantly affect them. Ensure you update any documentation (policies, procedures, risk assessments, system inventories, etc.) that the change impacts. During and after the change, monitor the outcomes to confirm the change achieved its intended purpose and did not cause any unexpected problems. Significant changes to the AIMS (for instance, bringing a high-risk AI application into scope or changing your risk assessment methodology) should receive proper management review and approval before execution. This oversight ensures leadership is aware of major adjustments and agrees they are acceptable. By treating changes in a controlled, planned way, Clause 6.3 helps maintain the integrity and effectiveness of the AI management system over time. This approach is very much in line with other ISO management system standards (like ISO 27001 for information security), which also emphasize planned change management to preserve system continuity. In summary, even as you improve or expand your AI capabilities, those modifications should be systematically evaluated and documented so the AIMS remains robust and up-to-date.
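A simple change record can capture this planned-change discipline. The sketch below is illustrative; the field names and workflow are assumptions, not something the standard prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class AIMSChange:
    """Clause 6.3: a planned change to the AI management system."""
    change_id: str
    description: str
    impact_analysis: str  # effects on existing risks, controls, and compliance
    docs_to_update: list  # policies, risk register, SoA, system inventories
    approved_by: str = ""  # management approval before executing significant changes
    communicated_to: list = field(default_factory=list)
    outcome_review: str = ""  # post-change check against the intended purpose

change = AIMSChange(
    change_id="CH-07",
    description="Bring customer-support chatbot into AIMS scope",
    impact_analysis="Adds privacy and hallucination risks; extends the SoA",
    docs_to_update=["AI risk register", "SoA", "AI system inventory"],
)
change.approved_by = "AI Governance Committee"  # obtained before execution
change.communicated_to = ["Support engineering", "Data Protection Officer"]
```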
Implementation Best Practices for Clause 6
To successfully implement Clause 6 of ISO 42001, consider the following best practices and tips:
- Define Clear Risk Criteria: Develop a documented risk criteria framework (likelihood, impact scales, and risk tolerance thresholds) tailored to your AI context, so that all AI risk assessments use consistent standards. This ensures everyone knows what constitutes an acceptable risk versus one that demands action.
- Use a Cross-Functional Approach: Involve a cross-functional team in risk planning – include data scientists, engineers, compliance/legal, ethics officers, and business owners. Diverse perspectives help identify AI risks and opportunities that might be missed by one group alone.
- Document Everything: Maintain thorough documentation for all planning activities – context analysis, identified risks, risk assessment results, chosen controls, impact assessment findings, objectives, and change plans. This provides transparency and evidence of compliance. Regulators and auditors will expect to see a clear paper trail for how you addressed each requirement of Clause 6.
- Map Risks to Controls (Annex A Check): For each significant AI risk identified, map it to specific control measures. Cross-check your selected controls against ISO 42001’s Annex A control list to ensure you haven’t overlooked any important safeguards. This systematic comparison helps cover all bases for risk mitigation; a coverage-check sketch follows this list.
- Prepare a Strong Statement of Applicability: When documenting your Statement of Applicability, include specific justifications for every control you implement or omit. Avoid generic phrases – tie each decision to your risk assessment findings and business context. A well-justified SoA will stand up to auditor scrutiny and demonstrates that your risk treatments are risk-driven.
- Integrate Impact Assessments: Make the AI system impact assessment an integral part of your risk management. Use its results to refine your risk analysis and treatment plans. For example, if an impact assessment reveals a high potential for societal harm, ensure your risk treatment plan addresses it with robust controls or design changes.
- Set SMART AI Objectives: Define AI objectives that are Specific, Measurable, Achievable, Relevant, and Time-bound, aligned with your AI policy. Clear objectives (e.g. performance or fairness targets for AI systems) give your team concrete goals and make it easier to track progress.
- Establish Objective Action Plans: For each AI objective, create a detailed action plan. Specify what steps will be taken, allocate adequate resources (budget, tools, personnel), assign responsible owners, set deadlines, and determine metrics for success. This ensures objectives are backed by actionable and accountable plans rather than vague intentions.
- Implement Change Control for AI Systems: Treat changes to your AI management processes or systems with formal change control. Before making a change, assess its impact on existing risks and controls, get necessary approvals (especially for major changes), and update all relevant documents. This disciplined approach prevents chaos and helps maintain continuous compliance even as you evolve.
- Review and Update Plans Regularly: ISO 42001 planning is not a one-time task. Schedule regular reviews of your risk assessments, treatment plans, impact assessments, and AI objectives – for example, quarterly or whenever a new AI project launches. This ongoing reassessment ensures your AIMS stays effective as AI systems and their contexts change. Continual monitoring and improvement will also prepare you for Clause 9 (performance evaluation) and Clause 10 (improvement) requirements down the line.
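The risk-to-control cross-check described in “Map Risks to Controls” above lends itself to automation. Here is a sketch, using invented data shaped like the earlier register and SoA examples, that flags any above-threshold risk lacking an implemented control.

```python
# Minimal stand-ins for the register and SoA sketched earlier; data is invented.
register = [
    {"risk_id": "R-01", "level": 15},
    {"risk_id": "R-02", "level": 12},
    {"risk_id": "R-03", "level": 6},
]
soa = [
    {"control_id": "A.x.1", "included": True, "risk_ids": ["R-01"]},
    {"control_id": "A.x.2", "included": False, "risk_ids": []},
]
ACCEPTANCE_THRESHOLD = 8

def unmapped_risks(register, soa, threshold):
    """Flag risks above the acceptance threshold with no included control."""
    covered = {rid for entry in soa if entry["included"] for rid in entry["risk_ids"]}
    return [r for r in register if r["level"] > threshold and r["risk_id"] not in covered]

# Any hits here mean a gap between the risk treatment plan and the SoA.
for r in unmapped_risks(register, soa, ACCEPTANCE_THRESHOLD):
    print(f"GAP: {r['risk_id']} exceeds the threshold but has no implemented control")
```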