ISO 42001:2023 Control 5.5
Explaining ISO 42001 Control 5.5: Assessing societal impacts of AI systems
ISO 42001 Control A.5.5 / Control B.5.5 requires an organization to assess and document how its AI technologies could affect society at large – both positively and negatively – throughout the AI system’s life cycle.
Control 5.5 Description
- The organization shall assess and document the potential societal impacts of AI systems throughout their life cycle.
ISO 42001 Annex A.5
- Assessing impacts of AI systems
ISO 42001 Annex A.5.1 Objective
- To assess AI system impacts to individuals or groups of individuals, or both, and societies affected by the AI system throughout its life cycle.
ISO 42001 Annex B.5
- Assessing impacts of AI systems
ISO 42001 Annex B.5.1 Objective
- To assess AI system impacts to individuals or groups of individuals, or both, and societies affected by the AI system throughout its life cycle.
Abstract
Control 5.5 of ISO/IEC 42001 focuses on ensuring organizations proactively evaluate and address the societal impacts of their AI systems.
This control requires an organization to assess and document how its AI technologies could affect society at large – both positively and negatively – throughout the AI system’s life cycle.
The goal is to broaden the scope of AI risk management beyond technical and business considerations, incorporating factors like environmental sustainability, economic effects, public welfare, health and safety, culture, and ethics.
Objective of Control 5.5
The objective of Control 5.5 is to ensure that an organization systematically identifies, evaluates, and documents the potential societal impacts of its AI systems.
Rather than focusing solely on performance or profitability, this control urges organizations to consider how AI applications might influence external stakeholders and society as a whole.
Purpose of Control 5.5
The purpose of this control is to promote responsible AI by making organizations consciously weigh the broader consequences of their AI systems.
By performing societal impact assessments, organizations can proactively mitigate negative outcomes (such as bias, inequality, or environmental damage) and enhance positive outcomes (such as improved access to services or environmental benefits).
This leads to multiple benefits: it protects the organization’s reputation and legal compliance by avoiding harmful incidents, builds trust among the public and stakeholders, and aligns AI development with ethical principles and regulatory expectations.
Key Areas of Societal Impact to Assess
Societal impacts of AI systems can vary widely depending on context and use case.
ISO 42001 highlights several key areas where AI may have significant societal implications.
Organizations should evaluate their AI systems for effects in each of the following areas.
Environmental Sustainability
Consider how the AI system affects natural resources and the environment.
For example, training and running AI models can be computationally intensive, consuming substantial energy and water and contributing to carbon emissions.
These impacts should be measured against the organization’s sustainability goals.
Conversely, AI can also be leveraged to improve environmental outcomes – for instance, AI-driven optimizations might reduce energy usage in buildings or cut transportation emissions.
Assessing environmental impact means examining both the carbon footprint and resource usage of AI systems and any beneficial applications of AI for environmental protection.
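To make this concrete, the following is a minimal sketch of how an organization might estimate the carbon footprint of a training run. The energy figures, PUE value, and grid carbon intensity shown are illustrative placeholders, not measured values; they should be replaced with data from the organization's own infrastructure and energy provider.

```python
# Minimal sketch: estimating the carbon footprint of an AI training run.
# All numbers below are illustrative placeholders, not measurements.

def training_emissions_kg_co2e(gpu_hours: float,
                               avg_power_kw: float,
                               pue: float,
                               grid_intensity_kg_per_kwh: float) -> float:
    """Estimate emissions as energy drawn * data-centre overhead * grid carbon intensity."""
    energy_kwh = gpu_hours * avg_power_kw * pue  # PUE accounts for cooling and facility overhead
    return energy_kwh * grid_intensity_kg_per_kwh

# Hypothetical example: 5,000 GPU-hours at 0.4 kW per GPU,
# a PUE of 1.4, and a grid intensity of 0.35 kg CO2e per kWh.
estimate = training_emissions_kg_co2e(5_000, 0.4, 1.4, 0.35)
print(f"Estimated training emissions: {estimate:,.0f} kg CO2e")
```

Even a rough estimate of this kind gives the assessment a measurable baseline that can be tracked against the organization's sustainability goals over time.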
Economic Impact
Evaluate the AI system’s influence on economic opportunities and fairness.
This includes impacts on employment (does the AI automate tasks and potentially displace workers, or does it create new jobs and markets?), access to financial services (for example, is an AI-powered credit scoring tool unintentionally denying loans or insurance to certain groups?), and broader economic factors like taxation, trade, and commerce.
The organization should ask whether the AI system might contribute to economic inclusion and growth or inadvertently cause financial exclusion and inequality. Ensuring equitable access to the benefits of AI technologies is a key part of this assessment.
Government and Political Impact
Assess how the AI system might affect governance, public policy, and civic processes.
AI can have implications for legislative and regulatory processes (e.g. being used to analyze or even draft policies), and it can be misused to influence elections or public opinion.
A notable concern is the creation and spread of misinformation or propaganda using AI (for instance, deepfake videos or automated bots on social media), which can undermine democratic processes and national security.
If the organization’s AI could be used by governments or others in criminal justice or surveillance, consider the risk of biases and impacts on civil liberties.
Evaluating political and governance impacts means ensuring AI systems do not erode trust in institutions or contribute to social unrest.
Health and Safety
Determine how the AI system impacts human health, well-being, and safety.
In healthcare contexts, AI can be used for medical diagnosis, treatment recommendations, and managing healthcare resources – offering huge benefits like earlier disease detection or personalized treatment. However, mistakes or biases in these systems could lead to misdiagnosis or unequal treatment. Beyond healthcare, AI is increasingly present in safety-critical applications (for example, autonomous vehicles, drones, or AI assisting in industrial operations). Failures or malfunctions in such systems could result in injury or loss of life.
Organizations should assess both the positive potential (improved health outcomes, enhanced safety mechanisms) and negative risks (physical harms, psychological effects on users) associated with their AI, ensuring that adequate safety measures and fail-safes are in place.
Social, Cultural, and Ethical Impact
Examine how AI systems might affect societal norms, cultural values, and ethical standards.
AI-driven content and decisions can influence public perceptions and behavior. For instance, the spread of AI-generated misinformation can entrench social biases or stereotypes, potentially harming certain groups or altering cultural norms. AI systems might inadvertently discriminate or exclude based on race, gender, or other characteristics if they learn from biased historical data. On the other hand, AI could also be used to promote cultural inclusivity and accessibility (for example, language translation AI bridging communication gaps).
Organizations need to consider whether their AI respects societal values, protects individual rights and dignity, and fosters fairness and inclusivity. This involves looking at ethical questions: Is the AI decision-making transparent and explainable? Is it respectful of privacy and autonomy? Understanding the cultural context and values of the society in which the AI operates is crucial for this aspect of the impact assessment.
Methodologies for Societal Impact Assessment
Assessing societal impact of AI systems can be challenging, but a variety of methodologies and tools are available to carry out these evaluations. A robust societal impact assessment will often combine multiple approaches, both qualitative and quantitative, and should involve diverse perspectives.
Qualitative Assessment Techniques
These involve exploratory, descriptive analyses of potential impacts.
Organizations might conduct ethical workshops or scenario planning exercises, where teams imagine possible ways the AI could affect different stakeholder groups or social domains. Techniques like impact mapping or case studies can help flesh out scenarios of positive and negative outcomes. Engaging an ethics review board or multidisciplinary team (including social scientists, ethicists, community representatives, etc.) to discuss the AI system can surface insights beyond the technical viewpoint. Qualitative approaches are useful for capturing context-specific effects and ethical nuances that numbers alone might miss.
Quantitative Analysis and Metrics
Whenever possible, organizations should support their assessments with data and measurable indicators. This could include calculating the AI system’s environmental impact metrics (e.g. estimated carbon emissions, energy consumption), or analyzing system outputs for signs of bias or disparity (e.g. measuring error rates or decision outcomes across different demographic groups to detect unfair bias). Simulation modeling or statistical risk analysis can estimate the likelihood and severity of certain adverse events. For instance, one might use metrics like fairness scores, bias indexes, or safety incident rates to quantify aspects of societal impact. Quantitative approaches lend objectivity and help prioritize issues by severity or frequency.
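As an illustration of one such metric, the sketch below compares error rates across demographic groups on an evaluation set. The records shown are synthetic placeholders; in practice the inputs would be logged predictions and, where lawfully available, group labels from the organization's own evaluation data.

```python
# Minimal sketch: comparing error rates across demographic groups.
# The records below are synthetic placeholders for logged (group, prediction, actual) triples.
from collections import defaultdict

records = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 0),
    ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 0, 1),
]

errors = defaultdict(lambda: [0, 0])  # group -> [error_count, total]
for group, predicted, actual in records:
    errors[group][0] += int(predicted != actual)
    errors[group][1] += 1

rates = {g: e / n for g, (e, n) in errors.items()}
print("Per-group error rates:", rates)
print("Largest gap:", max(rates.values()) - min(rates.values()))
```

The size of the gap between groups is one quantitative signal that can be tracked over time and weighed against a tolerance the organization defines for itself.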
Risk Mapping and Scenario Analysis
Borrowing from risk management practices, organizations can create risk maps specifically for societal impacts. This involves identifying potential undesirable scenarios (for example, “AI misinformation causes public panic” or “automated hiring tool rejects qualified candidates from a minority group”) and then mapping these scenarios in terms of their likelihood and impact severity. Such risk mapping helps in visualizing which societal harms are most critical to guard against. Scenario analysis further allows teams to walk through how an impact could occur, step by step, and what safeguards or responses exist at each step. By mapping out scenarios of misuse or failure, the organization can pinpoint where interventions (technical controls or policy measures) are needed to prevent or mitigate harm.
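A minimal sketch of such a risk map is shown below, using a conventional likelihood-times-severity score to rank scenarios. The scenarios, scales, and scores are illustrative and not prescribed by ISO 42001; organizations should define their own scoring criteria.

```python
# Minimal sketch: a simple societal-impact risk map.
# Scenarios, likelihoods (1-5) and severities (1-5) are illustrative placeholders.

scenarios = [
    {"scenario": "AI misinformation causes public panic",              "likelihood": 2, "severity": 5},
    {"scenario": "Hiring tool rejects qualified minority candidates",  "likelihood": 3, "severity": 4},
    {"scenario": "Chatbot produces offensive content",                 "likelihood": 4, "severity": 2},
]

for s in scenarios:
    s["risk_score"] = s["likelihood"] * s["severity"]  # common likelihood x severity scoring

# Rank so the most critical societal harms surface first.
for s in sorted(scenarios, key=lambda s: s["risk_score"], reverse=True):
    print(f'{s["risk_score"]:>2}  {s["scenario"]}')
```

The ranked output makes it easier to decide which scenarios warrant detailed scenario walk-throughs and dedicated safeguards.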
Stakeholder Engagement and Input
An important aspect of societal impact assessment is involving those who might be affected by or have insight into the AI system’s broader effects. This could mean conducting stakeholder interviews, surveys, or public consultations. For example, if an AI system is used in a community setting (like a city using AI for policing or resource allocation), getting feedback from community members, civil society organizations, or subject-matter experts can reveal concerns that the organization’s internal team might overlook. Incorporating stakeholder input ensures the assessment covers real-world perspectives and values, and it also builds trust by showing that the organization is listening to and addressing external viewpoints. In some cases, forming a multi-stakeholder advisory panel for major AI projects can institutionalize this input throughout the AI’s development and deployment.
Use of Frameworks and Checklists
There are emerging frameworks and tools dedicated to AI impact assessment (often called Algorithmic Impact Assessments or similar). These typically provide structured questionnaires or checklists to guide organizations through evaluating ethical and societal implications. For instance, such a framework may prompt the team to answer specific questions about privacy, fairness, accountability, environmental impact, etc., and document their answers and mitigation plans. Using a standardized checklist or template helps ensure consistency and thoroughness across different AI projects. It also produces documentation that can be reviewed by auditors or regulators to verify that societal impacts were duly considered. While ISO 42001 does not prescribe a particular format, it encourages organizations to adopt or develop methodologies that fit their context – whether that’s aligning with international best practices or tailoring an in-house impact assessment procedure.
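One lightweight way to operationalize such a checklist is to capture it as structured data, so answers and mitigation plans are recorded in the same shape for every project. The questions and fields below are examples only, not a format prescribed by ISO 42001.

```python
# Minimal sketch: an impact-assessment checklist captured as structured data.
# Questions and fields are illustrative examples, not a prescribed format.

checklist = [
    {"area": "Fairness",       "question": "Have outputs been compared across demographic groups?"},
    {"area": "Privacy",        "question": "Is personal data minimised and lawfully processed?"},
    {"area": "Environment",    "question": "Have training and inference energy use been estimated?"},
    {"area": "Accountability", "question": "Is there a named owner for societal-impact decisions?"},
]

def record_answers(checklist, answers):
    """Attach an answer and mitigation note to each checklist item for documentation."""
    completed = []
    for item in checklist:
        response = answers.get(item["question"], {})
        completed.append({**item,
                          "answer": response.get("answer", "not answered"),
                          "mitigation": response.get("mitigation", "")})
    return completed
```

Storing completed checklists alongside project documentation gives auditors and reviewers a consistent record that societal impacts were considered.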
Risk of Misuse and Historical Bias Amplification
A critical part of societal impact assessment is considering not just the intended use of an AI system, but also how the system could be misused, abused, or inadvertently reinforce existing societal biases. AI technologies might work as designed for their primary purpose, yet still have negative downstream effects due to malicious actors or entrenched biases in data and society.
Malicious Misuse (e.g. Election Manipulation and Misinformation)
AI systems can be weaponized by bad actors to inflict societal harm. A prominent example is the use of AI-generated content (like deepfake videos or AI-authored fake news) to sway political opinions, spread misinformation, or incite conflict. There have been instances of AI being used to influence election outcomes or public discourse by creating highly realistic false media that ordinary citizens may believe is true. This kind of misuse can undermine democratic processes and social stability. Organizations deploying AI, especially those dealing with content generation or dissemination, must consider the risk that their technology could be repurposed or abused in such a way. Mitigation might involve technical safeguards (for instance, watermarking AI-generated media, or building detectors for deepfakes) as well as usage policies and monitoring for signs of misuse in the wild.
Historical Bias Amplification (e.g. Financial Exclusion)
AI systems learn from data that often reflects historical and societal biases. Without careful checks, an AI can inadvertently perpetuate or even amplify discriminatory patterns. For example, if a banking AI system is trained on historical lending data where certain minority communities were unfairly denied loans, the AI might carry forward those biased patterns, effectively preventing those groups from accessing financial services even if they are creditworthy. This creates a cycle that reinforces historical inequalities. Similarly, hiring algorithms might favor resumes from majority groups if past hiring was biased, or healthcare AI might better serve populations that were more represented in clinical data. Organizations should scrutinize their AI models for biased outcomes and ask: Does our AI system unfairly disadvantage any group of people? If so, they need to retrain the model with more diverse data, adjust decision thresholds, or put human checks in place. The control encourages improving AI to address historical harms – meaning AI can also be a tool to counteract bias (for instance, by identifying bias in human decisions and suggesting fairer alternatives).
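A simple screening check for the lending example is to compare approval rates across groups, as sketched below. The 0.8 threshold mirrors the informal "four-fifths rule" used in some fairness screening; it should be treated as a trigger for further review, not a legal determination, and the decision data shown is purely illustrative.

```python
# Minimal sketch: screening a lending model for disparate approval rates.
# Decisions are illustrative placeholders (1 = approved, 0 = denied).

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

approvals = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

rates = {g: approval_rate(d) for g, d in approvals.items()}
reference = max(rates.values())
for group, rate in rates.items():
    ratio = rate / reference
    flag = "REVIEW" if ratio < 0.8 else "ok"  # informal four-fifths screening threshold
    print(f"{group}: approval rate {rate:.2f}, ratio vs. highest {ratio:.2f} -> {flag}")
```

A flagged disparity should prompt the deeper responses described above, such as retraining on more representative data, adjusting thresholds, or adding human review.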
Bias in Critical Public Systems (e.g. Criminal Justice Bias)
In the public sector, AI is increasingly used for decisions in policing, courts, and government services. There have been notable cases where AI tools exhibited bias – for example, predictive policing algorithms focusing on neighborhoods with historically higher arrest rates (often correlating with minority communities), or judicial risk assessment tools that were found to give higher risk scores to minority defendants due to biased historical data. These biases can lead to inequitable treatment and distrust in institutions. Even if an organization is not a government body, if their AI product could be used by one (say a company providing facial recognition tech to law enforcement), they share responsibility in anticipating these impacts. The assessment should question how the AI might behave in a sensitive context and what safeguards (like rigorous bias testing, transparency about limitations, or requiring human oversight in final decisions) are needed to prevent injustice.
In all these cases, the organization is expected to “analyze how actors can misuse AI systems and how AI systems can reinforce unwanted historical social biases.” This involves both a security mindset (thinking like an attacker or malicious user to foresee abuse vectors) and a social justice mindset (understanding the historical context of the domain in which the AI operates).
Integration into AI Lifecycle
For societal impact assessment to be truly effective, it cannot be a one-off checkbox at the end of development. Instead, it should be integrated throughout the AI system’s lifecycle. ISO 42001 emphasizes assessing societal impacts “throughout their life cycle,” meaning at each phase the organization should account for and address these impacts.
Conception & Design
At the earliest stage, when defining the purpose and requirements of an AI system, teams should include societal impact criteria. This means thinking about questions like: Who could be affected by this system? What could go wrong in a worst-case scenario for society? Are there any ethical concerns or value conflicts with deploying this system? By incorporating such questions into the design requirements, the organization ensures that products are conceived with a responsible innovation mindset. Techniques like ethical impact brainstorming or preliminary impact assessments can be done when scoping the project. If certain potential impacts are deemed too risky, the project might be altered or safeguards planned from the start.
Development & Testing
During model development, data preparation, and system testing, societal impact considerations should guide technical choices. For instance, developers should follow “ethics by design” principles: selecting training data that is diverse and free of inappropriate bias where possible, and using algorithms that support fairness and privacy. The testing phase should include specific tests for societal impact issues – such as bias testing (checking model outputs across different groups), robustness testing (how the model handles malicious inputs or edge cases), and measuring resource consumption. If developing an autonomous vehicle AI, testing would involve safety scenarios to ensure it avoids accidents; if developing a content algorithm, testing might include whether it accidentally promotes extremist or false content. Any issues uncovered in testing should feed back into model improvement (e.g. retrain the model, add data for underrepresented cases, adjust threshold rules). Documentation of this phase should show that the team actively checked for and mitigated societal risks as they built the system.
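One way to keep such checks from being skipped is to express them as automated tests that run with the rest of the test suite. The sketch below assumes a hypothetical model object, an evaluation dataset supplied via test fixtures, and an illustrative 0.05 tolerance; all of these would be defined per project.

```python
# Minimal sketch: a bias check expressed as an automated test.
# `model`, `eval_dataset` and the 0.05 tolerance are hypothetical and project-specific;
# in a real suite they would typically be provided as test fixtures.

def accuracy_by_group(model, dataset):
    """dataset: iterable of (features, label, group) tuples."""
    correct, total = {}, {}
    for features, label, group in dataset:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + int(model.predict(features) == label)
    return {g: correct[g] / total[g] for g in total}

def test_accuracy_gap_within_tolerance(model, eval_dataset):
    accuracies = accuracy_by_group(model, eval_dataset)
    gap = max(accuracies.values()) - min(accuracies.values())
    assert gap <= 0.05, f"Accuracy gap across groups is {gap:.3f}, exceeding tolerance"
```

Because the check fails the build when the gap exceeds the agreed tolerance, fairness regressions are caught before release rather than after deployment.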
Deployment & Release
Just before and during deployment, a more formal societal impact assessment or risk assessment is often conducted. At this stage, the AI system’s real-world context is clearer – who the end-users are, what environment it will run in, and how stakeholders will interact with it. The organization should conduct a final review of potential impacts (often combining results from design and testing phases) and ensure all necessary controls are in place. For example, if deploying a public-facing AI service, the organization might implement a monitoring system for misuse (to catch suspicious usage patterns), draft user guidelines or transparency disclosures (informing users that AI is being used and what it means), and set up incident response plans (how to react if something goes wrong, like a safety incident or public backlash). It’s also wise to launch AI systems incrementally or with a pilot phase, observing societal reactions and making adjustments before a full roll-out. At deployment, integration with human oversight mechanisms is crucial: e.g., ensure that for high-impact decisions, humans can review or override the AI if needed. The result of the deployment stage integration is that the AI system enters production with a clear understanding of its societal responsibilities and with guardrails activated.
Monitoring & Operation
Once the AI system is live and in use, the organization must continuously monitor its performance and effects on society. Control 5.5 doesn’t stop at launch; it requires ongoing vigilance. This could include tracking key indicators such as error rates, usage statistics across different demographics, environmental metrics (energy consumption in operation), and gathering feedback or complaints from users or affected communities. If the AI is involved in decisions about individuals (like lending or hiring), mechanisms should be in place for individuals to appeal or get explanations, which helps catch potential unfair impacts. Periodic audits or reviews (e.g. monthly or quarterly) might be scheduled to reassess the societal impact: Are there any new types of harm emerging? Has the context changed (new laws, new social concerns)? It’s also important to stay updated on external events – for instance, if similar AI systems elsewhere had incidents or if regulators issue new guidance on AI ethics, the organization should re-evaluate its system in that light. Monitoring stage integration ensures that any negative impact is detected early and corrective action (such as model retraining, adjusting algorithms, or even pulling the system offline temporarily) can be taken. Over time, the data gathered in monitoring can demonstrate whether the AI is actually delivering the anticipated benefits and minimal harms, closing the loop back to the initial goals.
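A minimal monitoring sketch is shown below: it compares current per-group outcome rates against an agreed baseline and flags drift beyond a tolerance for human review. The groups, thresholds, and review-period data are assumptions to be adapted to the actual deployment and its logging infrastructure.

```python
# Minimal sketch: periodic monitoring of per-group outcome rates in production.
# Groups, baseline rates, tolerance and data are illustrative assumptions.

def monitor_outcome_rates(window_decisions, baseline_rates, tolerance=0.05):
    """window_decisions: {group: [0/1 outcomes]} for the current review period."""
    alerts = []
    for group, outcomes in window_decisions.items():
        current = sum(outcomes) / len(outcomes)
        baseline = baseline_rates.get(group, current)
        if abs(current - baseline) > tolerance:
            alerts.append(f"{group}: outcome rate {current:.2f} drifted from baseline {baseline:.2f}")
    return alerts

# Hypothetical review-period data.
alerts = monitor_outcome_rates(
    {"group_a": [1, 1, 0, 1], "group_b": [0, 0, 0, 1]},
    baseline_rates={"group_a": 0.70, "group_b": 0.55},
)
for alert in alerts:
    print("REVIEW:", alert)
```

Alerts of this kind feed the periodic reviews described above and give early warning that retraining, threshold adjustments, or a temporary rollback may be needed.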
Retirement & Transition
Eventually, AI systems may be updated, replaced, or retired. Even in this final phase, societal impact must be considered. For example, if an AI system that made important decisions is being phased out, how will the organization ensure continuity or handle any outstanding decisions? If a system had known issues, how are those addressed in its successor? Data and models from a decommissioned system might need to be disposed of or archived responsibly (consider privacy and security implications for society). Additionally, the organization should reflect on lessons learned: Were there any unanticipated societal impacts during the system’s life? How can those insights be applied to future projects? This ties into the continual improvement aspect of ISO 42001. By treating retirement as a stage where impact is reviewed (perhaps writing a “post-mortem” report on the system’s societal impact), the organization can improve its impact assessment methodologies for next time. In some cases, if the AI system had significant public-facing effects, communicating its retirement and any changes to stakeholders is also part of responsible practice (so that people aren’t left in the dark when an AI service they relied on changes or ends).
Integrating societal impact assessment into each phase ensures it’s not an afterthought but a built-in feature of the AI management process. This continuous integration aligns with the Plan-Do-Check-Act approach of management systems: Plan (design with impact in mind), Do (develop with controls), Check (assess impacts at deployment and monitor), Act (improve or rectify issues and update the system or process).
Challenges and Best Practices (Balancing Innovation vs. Impact Mitigation)
Implementing Control 5.5 comes with its share of challenges, particularly the need to balance rapid AI innovation with careful impact mitigation. Understanding these challenges and following best practices can help organizations effectively meet the control’s requirements without stifling progress. Here we outline some key challenges and corresponding best practices:
Key Challenges
Unpredictability of AI Impacts: AI systems, especially those based on machine learning, can behave in unexpected ways. It is not always obvious at design time what societal effects might emerge once the AI is deployed in the real world. Additionally, some impacts may be indirect or long-term, making them hard to foresee. This unpredictability can make teams feel uncertain about how to conduct a thorough assessment or may lead them to underestimate certain risks.
Resource and Expertise Constraints: Conducting a deep societal impact assessment requires time, interdisciplinary expertise, and sometimes new types of data or analysis that organizations haven’t collected before. Smaller organizations or fast-moving product teams may find it burdensome to allocate resources for extensive ethical reviews or stakeholder consultations. Without clear guidance, teams might struggle with how to do an impact assessment properly, potentially leading to a superficial check-the-box exercise.
Fear of Slowing Innovation: There can be internal resistance to rigorous impact assessments due to the perception that they delay launches or add bureaucracy. Innovators and product developers, eager to bring AI features to market, might worry that identifying too many potential issues will result in project hurdles or even cancellations. There’s a cultural challenge in some tech environments to integrate governance without quashing creative momentum.
Measuring Intangibles: Some societal impacts (like cultural change, public trust, or psychological well-being) are not easily quantifiable. Organizations may find it challenging to measure or compare these against more tangible business metrics. The lack of standardized metrics for things like “ethical risk” or “social value” can lead to uncertainty in decision-making – for instance, how much potential bias is “too much”? How do we know whether our mitigations are sufficient to call an AI launch responsible?
Evolving Regulatory and Public Expectations: The landscape of AI ethics and regulation is rapidly evolving. What is acceptable today might not be acceptable tomorrow as laws (such as AI-specific regulations or data protection laws) and societal expectations shift. Organizations face the challenge of aiming at a moving target – they must anticipate future standards to some extent. Keeping up with and interpreting these external requirements (like new compliance rules for AI or public ethical norms) can be difficult, especially when they vary across jurisdictions.
Best Practices
Adopt a Risk-Based Approach: Not every AI project carries equal societal risk. A practical strategy is to calibrate the depth of impact assessment to the level of risk. For high-stakes AI (e.g. medical diagnosis, autonomous driving, loan approval systems), invest heavily in impact analysis and testing. For lower-risk AI (like a minor feature improvement or an AI that recommends movie titles), a lighter assessment may suffice. By tiering projects by risk level, organizations ensure that critical cases get the scrutiny they need, while lower-risk innovations can proceed with minimal friction. This helps maintain innovation speed where appropriate, and focuses mitigation efforts where they matter most (a minimal tiering sketch appears after this list of practices).
Foster an Interdisciplinary Review Process: Create structures that bring diverse expertise into the AI development pipeline. For example, form an AI ethics committee or working group that includes data scientists, domain experts, legal/compliance officers, and representatives from affected stakeholder groups (or their advocates). Regular checkpoints with this group during development can catch potential societal issues early. Interdisciplinary collaboration ensures that decisions aren’t made in a vacuum; technical teams gain perspective on ethical and social implications, while ethicists and others learn about the technical constraints. This collaborative culture can turn responsible AI into a shared goal rather than an external imposition.
Embed Impact Assessment into Existing Workflows: To avoid the “add-on bureaucracy” problem, integrate societal impact checks into familiar processes. For instance, include an “ethical impact” review item in project stage-gate checklists or agile sprint planning. When writing user stories or requirements, add acceptance criteria related to societal impact (e.g. “model must be tested for bias on X data before release”). Use existing risk management and quality assurance frameworks, expanding their scope to cover ethical and societal criteria alongside traditional metrics. When impact assessment is part of the normal workflow, it feels less like a roadblock and more like an inherent aspect of quality control.
Provide Training and Guidance: Equip your teams with the knowledge and tools to conduct societal impact assessments confidently. This could involve training sessions on AI ethics, bias, and sustainability, so staff understand why these assessments matter and how to perform them. Providing concrete guidance – like templates (checklists, questionnaires) or case studies of past AI projects and their impacts – can demystify the process. If developers and product managers have a clear playbook for impact assessment, they’re more likely to embrace it. Some organizations even roll out internal certification programs or incentives for “Ethical AI Champions” to encourage leadership in this area.
Maintain Transparency and Document Decisions: Throughout the assessment process, document what potential issues were identified and what was decided (including trade-offs). If an organization chooses to proceed with an AI system despite known risks, it should record the rationale (perhaps the benefits were judged to outweigh the risks, and mitigation measures were put in place). This documentation is vital for accountability and continual improvement – it allows the organization to review outcomes later and see if their judgments were correct. Transparency can also extend externally: being open about the fact that you conduct societal impact assessments and, where appropriate, sharing summaries of your findings can build public trust. It shows the organization isn’t hiding problems and is committed to doing the right thing.
Iterate and Improve: Balancing innovation and impact is not a one-time task. Organizations should treat their approach to societal impact assessment as something that evolves. After each project or each incident (if something goes wrong), conduct a post-mortem or retrospective. What did we miss? What worked well? Feed those lessons into an updated policy or checklist for next time. Over time, patterns will emerge that make future assessments more efficient and more effective. Also, keep an eye on external developments – as new tools for bias detection or environmental impact measurement emerge, integrate them into your methodology. In essence, make continual improvement not just a principle for the AI system, but for the assessment process itself.
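As referenced under the risk-based approach above, the following is a minimal sketch of how a project could be assigned an assessment tier from a few risk criteria. The criteria, tier boundaries, and required activities are illustrative assumptions; each organization should define its own in policy.

```python
# Minimal sketch: tiering AI projects by societal risk to calibrate assessment depth.
# Criteria, boundaries and required activities are illustrative assumptions.

def risk_tier(affects_individual_rights: bool,
              safety_critical: bool,
              operates_at_scale: bool) -> str:
    score = sum([affects_individual_rights, safety_critical, operates_at_scale])
    if safety_critical or score >= 2:
        return "high: full impact assessment, stakeholder input, ethics review"
    if score == 1:
        return "medium: standard checklist plus bias and privacy testing"
    return "low: lightweight self-assessment"

print(risk_tier(affects_individual_rights=True,  safety_critical=False, operates_at_scale=True))
print(risk_tier(affects_individual_rights=False, safety_critical=False, operates_at_scale=False))
```

A simple, documented rule like this keeps the tiering decision transparent and repeatable, which supports both internal review and external audit.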
Supporting Templates from CyberZoni
Implementing Control 5.5 can be facilitated by using structured templates and tools that guide organizations through risk assessment, documentation, and compliance checking. We offer several ISO 42001 templates that are particularly useful in supporting this control.
ISO 42001 Risk Assessment Template: This template provides a standardized format for identifying and evaluating risks associated with AI systems. In the context of Control 5.5, the risk assessment template can be used to capture societal impact risks. For example, an organization can list potential negative impacts (such as “bias in loan approval model affects minority applicants” or “chatbot could produce inappropriate content”) and then assess their likelihood and impact severity.
ISO 42001 Statement of Applicability (SoA) Template: The Statement of Applicability is a document where an organization declares which controls from the standard are applicable to their AI management system and how they are implemented. The SoA template from the website would list all ISO 42001 controls (including Control 5.5) and provide space to describe the implementation status of each. For Control 5.5, the SoA allows the organization to affirm its commitment to assessing societal impacts and summarize what processes or measures are in place to fulfill this control. For instance, the organization might state that it has a procedure for AI Ethical Impact Assessment (document X, version Y) and it’s applied to all AI projects. Using the SoA template ensures nothing is overlooked – it prompts the organization to explicitly consider Control 5.5 and either mark it as “applicable (implemented)” or provide justification if it were not (though for most organizations it will be applicable). This is valuable for audits and internal clarity, as it links the high-level requirement to specific evidence or policies.
ISO 42001 Controls List – Implementation Guidance: This document is a comprehensive list of all controls in ISO 42001 along with guidance on how to implement each one. For Control 5.5, the implementation guidance section would elaborate on recommended practices (many of which we’ve discussed above, such as performing impact assessments in the design phase, involving stakeholders, documenting outcomes, etc.). The guidance acts as a blueprint for organizations that may be unsure how to get started with assessing societal impacts. It might include tips like “form a cross-functional impact assessment team” or “use checklist XYZ for ethical impact questions”. By referring to the Controls List guidance, an organization can benchmark its approach against best practices and ensure it covers all necessary steps. Essentially, this template turns the abstract requirement of the standard into concrete action items and examples, which simplifies the implementation of Control 5.5.
ISO 42001 Checklist – GAP Analysis: The GAP analysis checklist is a tool for evaluating an organization’s current compliance against the ISO 42001 requirements. It typically lists each control or clause and asks whether the requirement is met, partially met, or not met, and what evidence or actions are needed. Using this checklist for Control 5.5 will help an organization self-assess how well it is addressing societal impact assessment. For instance, the checklist might prompt questions like: “Has the organization identified potential societal impacts for all AI systems? Are these documented? Is there evidence of review throughout the lifecycle?” If the answer is “no” or uncertain, that indicates a gap to be filled. The GAP analysis template thus supports continuous improvement: it highlights areas where the organization’s current practice might fall short of the standard, allowing management to allocate resources or define projects to close those gaps. In the case of societal impacts, the checklist ensures the organization has, in fact, implemented a process to regularly do these assessments and is not missing key elements (such as covering all categories of impact or involving the right stakeholders).
ISO 42001 Internal Audit Checklist: As part of maintaining an AI management system, periodic internal audits are conducted to verify that controls are effectively implemented. The internal audit checklist template provides auditors (or self-auditors) with a structured way to inspect each control. For Control 5.5, an internal auditor would use this checklist to confirm things like: Is there a documented procedure for assessing societal impacts? Have impact assessments been carried out for a sample of AI projects? Do records show the impacts identified and actions taken? Are responsible personnel aware of this control? The checklist includes criteria or questions derived from the standard’s wording and intent. Following the internal audit checklist, the auditor can gather evidence (meeting minutes, assessment reports, risk registers, etc.) to verify compliance. The benefit of using the internal audit checklist is that it ensures no aspect of the control is overlooked during an audit – it brings a systematic approach to evaluating Control 5.5. Any findings from the audit (e.g., “impact assessment not done for Project X”) can then be addressed by management. Over time, this audit process helps keep the organization honest and consistent in applying the societal impact assessments, reinforcing the importance of Control 5.5 and catching lapses or areas for improvement.