ISO 42001 Assessing Impacts of AI Systems (A.5/B.5) Guidance & Best Practices
Clause B.5 of ISO/IEC 42001 focuses on ensuring organizations systematically evaluate the consequences that AI systems could have on people (individuals and groups) and on society as a whole.
A.5 / B.5 – Objective of Assessing AI System Impacts
Clause B.5 of ISO/IEC 42001 emphasizes proactive assessment across the AI system’s entire life cycle – from design and development through deployment and operation. By assessing AI impacts, organizations can anticipate and mitigate risks such as bias, safety issues, or negative social outcomes before they escalate.
This domain includes one objective and four controls that guide setting up an impact assessment process, keeping thorough documentation, and specifically addressing impacts on individuals as well as broader societal implications.
A.5.1 / B.5.1 – Objective
The objective (A.5.1/B.5.1) is stated as: “To assess AI system impacts to individuals or groups of individuals, or both, and societies affected by the AI system throughout its life cycle.”
This means your organization should continually evaluate how its AI systems might affect people and communities from the moment of inception through to ongoing use.
In essence, responsible AI requires understanding who or what might be harmed or benefited by an AI system and taking action to address any significant impacts. This objective underpins the controls in this section, ensuring that trustworthiness, ethics, and human well-being remain central considerations as AI systems are developed and deployed.
A.5.2 / B.5.2 – AI System Impact Assessment Process (Control 5.2)
The organization should establish a process to assess the potential consequences for individuals or groups of individuals, or both, and societies that can result from the AI system throughout its life cycle.
In other words, ISO 42001 expects a structured method for evaluating how an AI system might affect people and society at every stage, from design to decommissioning. Because AI systems can generate significant impacts, your process should be tailored to the AI’s intended purpose, context, complexity, and data sensitivity.
Always consider whether the AI system could influence someone’s legal status or life opportunities, their physical or psychological well-being, universal human rights, or have broader effects on society (e.g. culture or public welfare).
Implementation Guidance & Best Practices
When establishing an AI impact assessment process, incorporate the following best practices:
- Define when to perform an impact assessment: Set clear criteria triggering an AI System Impact Assessment. For example, require an assessment for:
- Critical use cases or contexts: If the AI system’s intended use is high-stakes (e.g. healthcare diagnoses, financial decisions) or if there are significant changes in its use or purpose.
- High complexity or autonomy: If the AI technology is especially complex or highly automated (or if its level of automation significantly increases, such as moving from a decision-support tool to fully autonomous operation).
- Sensitive data handling: If the AI system processes sensitive personal data or draws from sensitive data sources (or if there are major changes to the types/sources of data used).
- Include key steps in the assessment process: Your AI impact assessment procedure should cover several elements, mirroring a risk assessment approach (a minimal sketch of these steps, together with the trigger criteria above, follows this list):
- Identification: Identify potential sources of impact or risk – what events could occur and who might be affected. Determine possible outcomes (both negative and positive) of the AI system’s decisions or errors.
- Analysis: Analyze the identified impacts by considering their consequences and how likely they are to occur. For example, evaluate the severity of harm an erroneous AI decision could cause and the likelihood of that error. Consider the sensitivity of any data involved and privacy implications at this stage.
- Evaluation: Evaluate and prioritize the risks/impacts. Decide which potential impacts are acceptable and which are not. Set criteria or thresholds for risk acceptance, and determine which issues need mitigation or management. (This is essentially making an acceptance decision and prioritizing what to tackle first.)
- Treatment (Mitigation): For the higher-priority or unacceptable risks, plan and implement measures to mitigate them. This could include technical controls (like bias detection algorithms, more human oversight, safety fail-safes) or organizational controls (like training, process changes) to reduce negative consequences.
- Documentation, reporting, and communication: Document the assessment findings and decisions, and integrate them into your reporting channels. Ensure there are procedures to report and communicate significant risks or outcomes to relevant stakeholders (aligning with your organization’s communication processes in ISO 42001 Clause 7.4 and documentation requirements in Clause 7.5). For instance, if an impact assessment reveals a serious ethical risk, it might need to be communicated up the management chain or to an oversight committee (see also the Internal Organization control on reporting AI concerns, Control 3.3, for establishing such channels).
- Assign clear responsibility: Determine who will perform or review the AI system impact assessments. Assign roles such as an AI Risk Officer, AI Ethics Committee, or cross-functional team (including domain experts, data scientists, compliance officers, etc.) responsible for carrying out the assessments. Clearly defining responsibility ensures the process is actually followed and not overlooked.
- Utilize assessment results in decision-making: Establish how the impact assessment outcomes will be used. For example, integrate this process with your AI system design and deployment gates. High-risk findings might trigger a management review or approval before the AI system can progress (linking to controls in the AI system development domain (B.6) or AI system use domain (B.9)). Likewise, use the insights to refine system design or implement additional safeguards. The impact assessment shouldn’t be a one-off checkbox — it should actively inform how the AI is built and used.
- Consider the scope of impacted subjects: As part of the assessment, explicitly identify the individuals or groups and communities or societal segments that could be affected by the AI system. Based on the system’s intended purpose and characteristics, determine if specific groups (e.g. children, the elderly, people with disabilities, employees, customers, etc.) are likely to be impacted. This scoping helps ensure you evaluate particular needs or vulnerabilities of those groups. For instance, an AI system used in HR should consider impacts on job candidates or employees, while a public-facing AI service might consider impacts on different demographic groups in society.
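To illustrate how such a process could be operationalized, the following minimal Python sketch encodes hypothetical trigger criteria and a simple severity-times-likelihood scoring for the identification, analysis, and evaluation steps above. The class names, criteria, scales, and acceptance threshold are illustrative assumptions, not values prescribed by ISO/IEC 42001.

```python
from dataclasses import dataclass

# Illustrative trigger criteria and scoring scales – ISO/IEC 42001 does not
# prescribe specific values, so tailor these to your own context.

@dataclass
class AISystemProfile:
    name: str
    high_stakes_use: bool            # e.g. healthcare diagnoses, financial decisions
    highly_autonomous: bool          # e.g. fully autonomous operation, not decision support
    processes_sensitive_data: bool   # e.g. health, biometric, or other sensitive data
    significant_change: bool         # major change to purpose, automation level, or data sources

def impact_assessment_required(profile: AISystemProfile) -> bool:
    """Return True if any trigger criterion for an AI impact assessment is met."""
    return any([
        profile.high_stakes_use,
        profile.highly_autonomous,
        profile.processes_sensitive_data,
        profile.significant_change,
    ])

@dataclass
class Impact:
    description: str
    severity: int     # 1 (negligible) to 5 (severe harm)
    likelihood: int   # 1 (rare) to 5 (almost certain)

    @property
    def risk_score(self) -> int:
        return self.severity * self.likelihood

def evaluate(impacts: list[Impact], acceptance_threshold: int = 8) -> list[Impact]:
    """Evaluation step: return impacts above the acceptance threshold,
    highest risk first, so they can be prioritized for treatment."""
    unacceptable = [i for i in impacts if i.risk_score > acceptance_threshold]
    return sorted(unacceptable, key=lambda i: i.risk_score, reverse=True)

# Example usage
profile = AISystemProfile(
    name="loan-scoring-model",
    high_stakes_use=True,
    highly_autonomous=False,
    processes_sensitive_data=True,
    significant_change=False,
)
if impact_assessment_required(profile):
    impacts = [
        Impact("Biased denial of credit to a protected group", severity=4, likelihood=3),
        Impact("Incorrect approval causing minor financial loss", severity=2, likelihood=2),
    ]
    for impact in evaluate(impacts):
        print(f"Treat: {impact.description} (score {impact.risk_score})")
```

In practice, the treatment and documentation steps would then feed these results into the records described under Control 5.3 below.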
Finally, remember that an AI impact assessment process can vary depending on your organization’s role and the industry or domain of application. It may be useful to integrate with existing risk management processes. If you already conduct privacy impact assessments, safety risk assessments, or information security risk analyses, leverage those frameworks and make sure AI-specific factors (like algorithmic bias, model failure modes, etc.) are included. In some cases, discipline-specific impact assessments (for privacy, security, environment, etc.) may cover certain AI impacts; however, you should verify that they sufficiently address AI-specific considerations. If not, adapt them or introduce a dedicated AI impact assessment.
(Tip: ISO/IEC 23894 provides guidance on performing impact analyses as part of risk management – it can be a helpful reference when designing your AI impact assessment process.)
(For a more detailed guide on this control, see ISO 42001 Control 5.2: AI System Impact Assessment Process.)
A.5.3 / B.5.3 – Documentation of AI System Impact Assessments (Control 5.3)
The organization should document the results of AI system impact assessments and retain those results for a defined period. In practice, this means every time you carry out an AI impact assessment, you should produce a record of what was evaluated, what potential impacts were identified, and what decisions or actions were taken. Maintaining this documentation is important for transparency and accountability – it demonstrates due diligence to regulators, stakeholders, or auditors and helps your team keep track of how AI risks are being managed over time. These records should be kept up to date (especially if things change) and stored for a period that meets any legal, regulatory, or business requirements (for example, following your company’s data retention policy or industry regulations).
Implementation Guidance & Best Practices
When documenting AI system impact assessments, include comprehensive information so that the records are useful for future reference and stakeholder communication. Key items to document include (a minimal record-template sketch follows this list):
- Intended use and foreseeable misuse: Clearly state the AI system’s intended purpose, including its scope and context of use. Also capture any reasonably foreseeable ways the system could be misused or used outside its intended scope. (For example, if a facial recognition AI is intended for unlocking personal devices, a foreseeable misuse might be using it for mass surveillance without consent.)
- Identified impacts (positive and negative): Summarize the potential positive outcomes the AI system could enable (e.g. faster decisions, improved accuracy in diagnostics) as well as the negative impacts or risks identified (e.g. bias against a certain group, risk of incorrect prediction causing harm). Documenting both sides provides a balanced view of the AI’s implications for individuals and society.
- Predictable failure modes and mitigations: List any predictable failure scenarios of the AI system (how the system might fail or produce incorrect results) and what the potential impacts of those failures are. Importantly, note the measures taken to mitigate these failures. For instance, if a predictive model might occasionally yield false positives that deny someone a service, note that risk and the mitigation (such as having a human review those cases or setting conservative thresholds).
- Relevant demographic groups or stakeholders: Record which groups of people the system is intended for or most likely to impact. This can include demographic information (age group, vulnerable populations like children or elderly, specific communities, employees, etc.) or stakeholder categories (customers, end-users, members of the public). If the AI system is not universally applicable, note which demographics were considered in the assessment (e.g. the system is designed for adult users and may not work accurately for children).
- System complexity and nature: Document the complexity of the AI system and any attributes that affect its impact. For example, note if it’s a self-learning system (continuous learning), a black-box model with low explainability, or a simple rule-based system. Higher complexity or opacity might increase certain risks, and noting this helps contextualize the assessment results.
- Human oversight and control measures: Describe the role of humans in the operation and oversight of the AI system. This should include any human-in-the-loop processes, oversight mechanisms, or controls in place to prevent or correct negative outcomes (such as an operator who can intervene if the AI behaves unexpectedly, or periodic human review of decisions). Also mention tools or processes available for humans to monitor the AI’s performance and intervene (e.g. alerts when the AI’s confidence is low, a kill-switch or manual override procedure). Essentially, how are humans ensuring the AI stays within acceptable bounds?
- Workforce implications (employment and staff skills): Note any impact the AI system may have on the organization’s workforce and what is being done about it. This could include the need for training staff to use or oversee the AI system, changes in job roles (e.g. certain tasks becoming automated), or even measures to address potential job displacement. Also record if specialized skills are required to manage the AI and how the organization plans to maintain those skills (hiring or training strategies).
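As a rough illustration of what a structured record could capture, here is a minimal Python sketch of a record template covering the items above; the field names, example values, and JSON serialization are illustrative assumptions rather than a prescribed format.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import date

# Illustrative record structure – field names are assumptions, not ISO/IEC 42001 terms.
@dataclass
class ImpactAssessmentRecord:
    system_name: str
    assessment_date: date
    intended_use: str
    foreseeable_misuse: list[str] = field(default_factory=list)
    positive_impacts: list[str] = field(default_factory=list)
    negative_impacts: list[str] = field(default_factory=list)
    failure_modes_and_mitigations: dict[str, str] = field(default_factory=dict)
    impacted_groups: list[str] = field(default_factory=list)
    system_complexity: str = ""         # e.g. "continuously learning, low explainability"
    human_oversight: str = ""           # e.g. "human review of denials; manual override available"
    workforce_implications: str = ""    # e.g. "analysts retrained to review model outputs"
    retention_until: date | None = None  # align with your retention policy

    def to_json(self) -> str:
        """Serialize the record for storage in a document management system."""
        return json.dumps(asdict(self), default=str, indent=2)

# Example usage
record = ImpactAssessmentRecord(
    system_name="resume-screening-assistant",
    assessment_date=date.today(),
    intended_use="Rank job applications for recruiter review",
    foreseeable_misuse=["Fully automated rejection without human review"],
    negative_impacts=["Potential bias against candidates with career gaps"],
    failure_modes_and_mitigations={
        "False negative ranking": "Recruiter samples and reviews low-ranked applications",
    },
    impacted_groups=["job candidates", "recruiters"],
    retention_until=date(2030, 12, 31),
)
print(record.to_json())
```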
In addition to the content of the documentation, establish how long these records will be kept and how they will be updated. Many organizations align retention with their document management policies or legal requirements.
Example: you might decide to retain AI impact assessment reports for a defined number of years, or for the life of the AI system plus a set period afterward.
Also, set a practice of updating the documentation whenever there are significant changes to the AI system or its context that could alter its impacts (such as deploying the AI in a new region or population, or a major update to the algorithm). Well-maintained documentation can be very helpful in internal audits, management reviews, and in communicating with users or other interested parties about the AI system’s risks (for instance, informing users about known limitations or mitigations). In fact, these documented assessments can directly inform what information you communicate to users or external stakeholders about the AI (tying into Clause 8 requirements for information to interested parties).
(For a more detailed guide on this control, see ISO 42001 Control 5.3: Documentation of AI System Impact Assessments)
A.5.4 / B.5.4 – Assessing AI System Impact on Individuals or Groups of Individuals (Control 5.4)
The organization should assess and document the potential impacts of AI systems on individuals or groups of individuals throughout the system’s life cycle. This control zooms in on human-centric impacts – it ensures you specifically evaluate how your AI system could affect people at the individual or group level (as opposed to broader societal impact, which is covered in the next control). In implementing this, consider your organization’s AI governance principles, policies, and objectives related to trustworthiness. Individuals who use an AI system or whose personal data are processed by it will have certain expectations (e.g. fairness, privacy, safety), and there may be legal protections in place for them. Moreover, some groups – such as children, the elderly, people with disabilities, or employees in a workplace – might require special consideration because they can be more vulnerable or have specific rights. Your AI impact assessment should evaluate these expectations and needs, and you should plan measures to address them, ensuring the AI system does not violate the trust or rights of the people it affects.
Implementation Guidance & Best Practices
When assessing impacts on individuals or specific groups, it’s helpful to break down the analysis into key areas of impact that reflect principles of trustworthy AI. Consider at least the following areas (and document how the AI system fares in each):
- Fairness: Evaluate whether the AI system treats people equitably and without unlawful or unethical bias. Does it make decisions that could unfairly disadvantage any individual or group? For example, check for bias in how the AI’s model was trained (are certain demographics underrepresented or negatively skewed in the data?). Ensure that outcomes do not systematically favor or harm one group over another without justification (a minimal sketch of one such check follows this list).
- Accountability: Determine how accountability is addressed when the AI system impacts individuals. This includes having clarity on who is responsible if the AI causes harm or error. From the user’s perspective, there should be a way to query or challenge a decision. Internally, your organization should have assigned accountability for monitoring the AI’s effects on people and for taking action when issues arise.
- Transparency and Explainability: Consider how transparent the AI’s workings and decisions are to those affected. Can individuals understand that an AI is involved and get an explanation of how a particular decision was made about them? It’s important that impacted people are not left in the dark – for high-stakes decisions (like loan approvals or medical diagnoses), explainability is often crucial for trust. Plan what information can be shared with users or subjects about the AI system’s logic or criteria in an appropriate manner.
- Security and Privacy: Assess the AI system’s security measures and privacy protections as they relate to individuals. Does the system safeguard personal data and prevent unauthorized access or leaks? Security incidents or data breaches can directly harm individuals (through loss of privacy, identity theft, etc.). Also, consider if the AI’s use could infringe on someone’s privacy in less direct ways (for instance, an AI analyzing personal behavior might feel invasive even if technically allowed). Ensure compliance with data protection laws and that robust cybersecurity controls are in place to protect individuals’ information.
- Safety and Health: If the AI system can affect someone’s physical safety or health, these impacts are paramount to assess. This is obvious for AI in medical devices, autonomous vehicles, or industrial robots, but also consider psychological safety. Could the AI cause mental distress or stress (for example, a chatbot giving harmful advice)? Ensure you identify any scenario where the AI’s action or failure could lead to injury or adverse health outcomes and address those with safeguards or warnings.
- Financial Consequences: Analyze whether the AI system’s decisions could have financial impacts on individuals. For example, an AI that approves or denies loans, sets insurance premiums, or controls pricing could significantly affect a person’s finances. Unfair or erroneous outputs in these cases can cause monetary loss or opportunity loss. Make sure the system has checks to prevent or correct any financial harm to individuals (like an appeal process for an AI-driven decision).
- Accessibility: Consider whether the AI system is accessible to individuals with diverse needs, including people with disabilities. An AI application might be technically effective but unusable by a segment of the population (e.g. a visual AI system that isn’t designed for the visually impaired). Ensuring accessibility is part of ethical impact – it means the AI’s benefits (and its decision processes) are available and understandable to the people who are subject to or use it, regardless of their abilities.
- Human Rights: Reflect on any broader human rights implications of the AI’s use on individuals or groups. This ties together many of the above points but is worth explicit mention. Could the AI system impinge on rights such as freedom of expression, freedom from discrimination, or the right to privacy? For instance, AI used in surveillance might impact the right to privacy and free movement; AI used in content filtering might affect freedom of speech. Align your assessment with international human rights frameworks to ensure nothing the AI does would contravene those fundamental rights.
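For the fairness area above, one simple quantitative starting point is to compare favourable-outcome rates across groups, as in the following minimal Python sketch. The data, group labels, and the 0.8 “four-fifths” threshold are illustrative assumptions (the threshold is a common rule of thumb, not an ISO/IEC 42001 requirement), and passing such a check does not by itself demonstrate fairness.

```python
from collections import defaultdict

def disparate_impact_ratios(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute each group's favourable-outcome rate relative to the
    most-favoured group. decisions is a list of (group, favourable) pairs."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            favourable[group] += 1
    rates = {group: favourable[group] / totals[group] for group in totals}
    best_rate = max(rates.values())
    return {group: rate / best_rate for group, rate in rates.items()}

# Illustrative data: (group, loan approved?)
decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 55 + [("group_b", False)] * 45)

for group, ratio in disparate_impact_ratios(decisions).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"   # four-fifths rule of thumb
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```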
In conducting this individuals-focused impact assessment, engage with relevant experts or representatives as needed.
For example, if your AI tool affects healthcare decisions, consult healthcare professionals or patient advocacy groups; if it affects children, consult child rights specialists. The goal is to fully understand the potential impacts from the perspective of those affected, and sometimes that means bringing in outside perspectives or subject matter expertise.
(For a more detailed guide on this control, see ISO 42001 Control 5.4: Assessing AI System Impact on Individuals)
A.5.5 / B.5.5 – Assessing Societal Impacts of AI Systems (Control 5.5)
The organization should assess and document the potential societal impacts of its AI systems throughout their life cycle. This control expands the lens to look at effects on society at large, which can be quite broad and complex. Societal impacts refer to how an AI system might influence social structures, communities, the environment, the economy, or culture beyond just the direct users. Depending on your context and the type of AI, these impacts can be highly beneficial (for example, AI optimizing traffic can reduce emissions and improve quality of life) or potentially detrimental (e.g. AI spreading misinformation could undermine public trust or democracy). Your organization is expected to think through these broader implications and not just the immediate business or user impact.
Implementation Guidance & Best Practices
When evaluating societal impact, consider multiple dimensions and document any significant findings or concerns in areas such as:
- Environmental Sustainability: Assess the AI system’s environmental footprint and contributions. On one side, AI model development and deployment can be resource-intensive – for instance, training large AI models can consume a lot of electricity and result in substantial greenhouse gas emissions. Consider the impact on natural resources, energy usage, and even electronic waste if specialized hardware is used. On the other side, see if the AI system contributes positively to environmental goals (perhaps the AI is aimed at reducing energy consumption elsewhere or aiding climate research). Weigh these factors and ensure the organization takes steps to minimize negative environmental impact (like using energy-efficient infrastructure or carbon offsetting if appropriate). Align this with your organization’s overall sustainability objectives. (A minimal footprint-estimate sketch follows this list.)
- Economic and Financial Impact: Look at how the AI system might affect economic factors in society. This includes employment (is the AI likely to displace jobs, create new jobs, or change the nature of certain work? And what is being done to address that, such as retraining programs?), access to financial services (could the AI inadvertently deny certain groups loans or insurance, thus affecting economic equality?), and broader market effects (for example, could widespread use of this AI disrupt an industry or economic sector?). Also consider tax or commerce implications if relevant (like automation affecting tax bases or AI-driven markets changing trade patterns). The aim is to ensure the AI’s rollout considers economic fairness and doesn’t contribute to undue inequality or instability without mitigation plans.
- Government and Public Governance: AI systems can interact with governmental functions or public opinion. Examine whether your AI could impact areas like legislation or political processes, public administration, or justice systems. For example, AI used in social media or content creation might be leveraged to spread misinformation or deepfakes, influencing elections or public opinion – a serious societal harm. If your AI tools are used by government agencies (say, for predictive policing or welfare decisions), consider how biases or errors could lead to injustices or erosion of public trust in institutions. Also, consider national security if applicable: could the AI be misused in ways that threaten security (cyber-attacks using AI, etc.)? If risks exist, document them and consider safeguards, like building in misuse detection or cooperating with authorities on responsible use guidelines.
- Public Health and Safety: Some AI systems, even if not directly health-related, can have large-scale health or safety implications. For instance, an AI system controlling traffic signals impacts road safety; AI in industrial control can affect environmental health (preventing spills, etc.). Think about whether the AI could influence access to healthcare (like triage AIs, diagnostic AIs improving or accidentally limiting care), affect medical decisions and treatments (with both life-saving benefits and potential for harm if wrong), or create new safety risks (like autonomous vehicles or drones malfunctioning in public spaces). For societal impact, look not only at one individual’s safety but at the aggregate effect – e.g., if self-driving cars are widely adopted, what’s the net effect on road safety, and are there failure scenarios that could cause public harm? Ensure that these possibilities are assessed and that your organization has strategies to maximize societal benefits (like improved health outcomes) while minimizing risks (such as rigorous testing and oversight for safety-critical AI).
- Social Norms, Culture, and Values: AI systems can influence societal norms and cultural values in subtle or significant ways. For example, an AI content recommendation engine might shape public discourse or reinforce certain cultural biases. Consider if your AI could inadvertently perpetuate stereotypes or biases in society. Misinformation from AI (like deepfake media or generative AI producing false content) can erode trust in information sources and harm social cohesion. Alternatively, AI might be used to address historical social harms by identifying and correcting bias in decision processes. Analyze both sides: could bad actors misuse your AI to harm social cohesion or violate ethical norms? And can your AI be used to improve social outcomes (for instance, by making services more inclusive or helping historically underserved communities)? Document these cultural and societal implications and what is being done about them. For any negative potential, consider countermeasures – e.g., if your AI could be used to create deepfakes, perhaps incorporate watermarking or detection capabilities.
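As a concrete illustration of the environmental dimension listed above, the following minimal Python sketch estimates the electricity use and carbon emissions of a model training run from hardware power draw, run time, a data-centre overhead factor, and a grid emission factor. All figures are illustrative assumptions; real accounting would use measured consumption and published emission factors from your provider or region.

```python
def training_footprint(gpu_count: int, gpu_power_kw: float, hours: float,
                       pue: float, grid_kgco2_per_kwh: float) -> tuple[float, float]:
    """Rough estimate of energy (kWh) and emissions (kg CO2e) for a training run.
    PUE (power usage effectiveness) accounts for data-centre overhead."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    emissions_kg = energy_kwh * grid_kgco2_per_kwh
    return energy_kwh, emissions_kg

# Illustrative numbers only: 8 GPUs at 0.4 kW each, a 72-hour run,
# PUE of 1.2, and grid intensity of 0.4 kg CO2e per kWh.
energy, co2 = training_footprint(gpu_count=8, gpu_power_kw=0.4, hours=72,
                                 pue=1.2, grid_kgco2_per_kwh=0.4)
print(f"Estimated energy: {energy:.0f} kWh, emissions: {co2:.0f} kg CO2e")
```

Estimates like this can be recorded in the impact assessment documentation and compared against your organization’s sustainability objectives.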
In evaluating societal impacts, it’s often useful to consult a diverse range of stakeholders – possibly including community representatives, ethicists, environmental experts, economists, etc., depending on which aspect is most pertinent. Societal-level effects can be complex and far-reaching, so getting a broad perspective will improve the quality of your assessment. It’s also important to consider misuse scenarios: how might someone deliberately use your AI system to cause societal harm, and how can you guard against that? Conversely, think about how your AI might help remedy existing societal issues or injustices if used responsibly.
Finally, tie your findings into your organization’s broader mission and strategy. For instance, if your company has environmental sustainability goals, ensure the AI’s environmental impact assessment aligns with those goals (you might decide not to deploy a model if its carbon footprint is too high, or invest in greener computing). If your industry is highly regulated for safety or fairness, make sure your societal impact documentation feeds into compliance and stakeholder communications.
(For a more detailed guide on this control, see ISO 42001 Control 5.5: Assessing Societal Impacts of AI Systems)
Conclusion A.5/B.5
Implementing the Assessing Impacts of AI Systems controls (Objective 5.1 and Controls 5.2 through 5.5) is vital for any organization aiming to deploy AI responsibly. These practices ensure that potential harms are identified and mitigated before AI systems cause real-world problems. They also help uncover positive opportunities – for instance, ways your AI could be adjusted to better serve users or benefit the community.
This set of controls builds trust and transparency into your AI management. Stakeholders – from end users and customers to regulators – can have greater confidence that your organization’s AI systems are safe, fair, and aligned with societal values. The documentation and insights gained from impact assessments will support clear communication about AI risks and safeguards, both internally and externally, fostering accountability.
Moreover, the focus on continual assessment throughout the AI life cycle means your organization stays proactive, adjusting to new information or context changes rather than being caught off-guard.