ISO 42001 Statement of Applicability (SoA) – Detailed Guide
In ISO 42001, the Statement of Applicability (SoA) outlines the specific Annex A controls your organization has chosen to implement (or omit) based on its AI risk assessment, along with reasons for those decisions. This document is crucial for ISO 42001 certification, as it demonstrates your organization’s commitment to ethical and compliant AI management by addressing key AI risks (e.g. bias, privacy, transparency) with appropriate controls.
What is the SoA in ISO 42001 AIMS?
The Statement of Applicability (SoA) is a foundational document in the ISO 42001 Artificial Intelligence Management System (AIMS). It serves a similar purpose as the SoA in ISO 27001, listing all the controls relevant to the AI management system and indicating whether each control is applied or not, with justifications for any exclusions.
In essence, the SoA provides clarity and accountability: it links the organization’s AI risk profile to control measures, aligns AI practices with regulatory and societal expectations, and shows auditors/stakeholders that all relevant controls have been considered.
The SoA is also a living document – ISO 42001 expects it to be kept up to date as AI systems, risks, and regulations continue to mature, ensuring continuous alignment with the organization’s AI governance objectives.
Preparation Process for Creating an SoA
Preparing an ISO 42001 SoA is a systematic process. Follow these steps and best practices:
Define Scope and Context
Begin by defining the scope of your AIMS – which AI systems, business units, and processes are covered. Clarify internal and external context and stakeholder requirements, since these will influence which controls are applicable (for example, AI used in one department vs. enterprise-wide). A well-defined scope ensures the SoA is focused and relevant.
Perform an AI Risk Assessment
Conduct a thorough risk assessment (e.g. an AI system impact assessment) to identify AI-specific risks and opportunities. This risk assessment is the foundation for control selection – ISO 42001 is risk-driven, meaning you choose controls based on which risks need treatment. Document the results, as they will justify why certain controls are needed to mitigate identified risks (such as bias in algorithms, privacy breaches, safety issues, etc.).
Review Annex A Controls
Consult Annex A of ISO 42001, which contains a list of reference controls for AI governance and risk management. Go through each control and determine its applicability to your organization. ISO 42001 does not mandate every Annex A control — you can choose those that address your risks and context. However, some controls (e.g. having an AI policy) are typically unavoidable because the main clauses of ISO 42001 require them in some form.
Select Controls and Justify Decisions
For each Annex A control, decide whether to implement it. If you include a control, note how you will implement it or its status (implemented, in progress, etc.). If you exclude a control (deem it not applicable), provide a clear justification (for instance, the risk the control addresses is not present in your operations or is mitigated by other measures). This risk-based justification is critical – ISO 42001 expects you to explain why each control choice was made.
Use a Structured Template
Document these decisions in a structured format. A best practice is to use an SoA template, often a table or spreadsheet, to ensure consistency. Include columns for the control reference (Annex A control number and name) and a brief description, whether it’s applicable (yes/no), the justification for inclusion or exclusion, implementation status (e.g. in place, partially in place, planned), and references to relevant policies or procedures (evidence of implementation). Using an Excel-based template with these fields helps cover all required details systematically.
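The column layout described above can be sketched programmatically. The snippet below is a minimal illustration (not an official template): the column names and sample rows are placeholders, though the control references follow Annex A numbering.

```python
import csv

# Illustrative SoA column layout. Control IDs follow ISO 42001 Annex A
# numbering; the descriptions, statuses, and document references below
# are sample placeholders, not prescribed values.
COLUMNS = [
    "Control Ref", "Control Name", "Applicable (Y/N)",
    "Justification", "Implementation Status", "Evidence / Reference",
]

rows = [
    ["A.2.2", "AI policy", "Y",
     "Required to direct AI development and use", "In place", "POL-AI-001"],
    ["A.4.3", "Data resources", "Y",
     "Training data must be documented to manage bias risk", "Planned", "DATA-REG-002"],
    ["A.10.3", "Suppliers", "N",
     "No third-party AI suppliers in current scope", "N/A", "Risk assessment, supplier section"],
]

# Write the skeleton out as CSV so it can be opened in Excel or similar.
with open("soa_template.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(rows)
```

The same layout works equally well as a native spreadsheet; CSV is used here only to keep the sketch dependency-free.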
Review and Approve
Have cross-functional stakeholders review the SoA. Involving cross-functional teams (e.g. AI developers, IT security, compliance officers, legal advisors) is a best practice to ensure all perspectives are considered. This collaboration helps verify that controls and justifications are sound from both technical and compliance standpoints. Senior management should approve the final SoA, as it reflects the organization’s risk treatment decisions and commitments.
Maintain and Update
Treat the SoA as a living document. ISO 42001 requires continual improvement, so update the SoA regularly – for example, when new AI risks are identified, controls change, or new regulations impose additional requirements. Regular updates keep the SoA aligned with the current state of AI systems and controls, which is essential for ongoing compliance. Establish a document control process for the SoA (versioning and revision history) so you can track changes over time.
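The versioning and revision history mentioned above can be as simple as a structured log kept alongside the SoA. This is a hypothetical sketch; the entries, dates, and version scheme are illustrative.

```python
from datetime import date

# Illustrative SoA revision log for document control; all entries are samples.
revision_history = [
    {"version": "1.0", "date": date(2024, 3, 1),
     "change": "Initial SoA approved by senior management"},
    {"version": "1.1", "date": date(2024, 9, 15),
     "change": "Supplier control re-included after onboarding an AI vendor"},
]

def current_version(history):
    """Return the most recent revision entry by date."""
    return max(history, key=lambda e: e["date"])

print(current_version(revision_history)["version"])  # → 1.1
```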
Utilizing a template and robust documentation practices will make the preparation process smoother and ensure nothing is overlooked.
Controls per Theme in the ISO 42001 Framework
ISO 42001’s Annex A provides a catalogue of AI-specific controls, categorized into 9 Themes, that organizations can implement to manage AI risks. These controls cover a range of themes (similar to how ISO 27001 provides security controls) and address ethical, technical, and governance aspects of AI.
The 9 Themes are as follows:
A.2 AI Policies and Governance (3 Controls)
Organizations are expected to establish an AI policy endorsed by top management. This policy sets the direction for AI development and use, aligning AI initiatives with business objectives and ethical principles. It should also be kept up-to-date (periodically reviewed) to remain effective. A strong AI policy demonstrates leadership commitment and provides a framework for all other controls.
A.3 Internal Organization & Accountability (2 Controls)
This theme ensures clear roles and responsibilities for AI governance. Companies must assign accountability for AI system oversight and designate roles (like AI system owners, risk managers, ethics committees, etc.). A.3 also requires establishing a process for reporting AI-related concerns (e.g. a way for staff or users to report issues or ethical concerns with AI systems). This promotes a culture of responsibility and feedback, crucial for trustworthy AI.
A.4 Resource Management for AI (5 Controls)
Ensures the organization accounts for all necessary resources to manage AI systems responsibly. This includes identifying and documenting data sources, tools and platforms, computing infrastructure, and human expertise needed across the AI system lifecycle. Proper resource management (e.g. having skilled personnel and adequate data quality tools) is vital to address AI risks effectively.
A.5 Assessing Impacts of AI Systems (4 Controls)
Focuses on conducting AI system impact assessments and documenting the results. Organizations must have processes to evaluate how AI systems could affect individuals, groups, or society at large throughout the AI lifecycle (covering potential harms like bias, privacy invasion, safety risks, etc.). By performing regular impact (risk) assessments and retaining evidence of them, organizations can anticipate negative outcomes and put mitigations in place – these controls underpin the risk management aspect of the AIMS.
A.6 AI System Lifecycle Controls (9 Controls)
Addresses the need for defined processes in AI system development and deployment. It calls for setting objectives for responsible AI development, and implementing controls in design, coding, testing, validation, and change management for AI systems. Essentially, A.6 integrates AI governance into each phase of the AI system’s lifecycle to ensure requirements (like transparency or fairness criteria) are built-in and verified before deployment.
A.7 Data Management for AI (5 Controls)
Focuses on controls related to data used by AI systems. Data is the backbone of AI; hence A.7 requires organizations to manage data quality, provenance (origins of data), and preparation processes. For example, ensuring datasets are representative to reduce bias, securing data privacy, and maintaining documentation of data lineage are part of this control. Good data governance directly affects AI outcomes, making this a critical area.
A.8 Transparency and Communication (4 Controls)
Ensures that relevant information about AI systems is provided to interested parties. This means organizations should communicate essential information about their AI systems to stakeholders, including users and external parties. Key aspects include supplying clear user guidelines, explaining the AI system’s capabilities and limitations, and having channels for users to report issues or incidents (like an AI output error or harm). A.8 promotes transparency and trust by making AI operations more understandable and accountable to those affected.
A.9 Use of AI Systems (3 Controls)
Covers the appropriate use and monitoring of AI systems. It requires defining what constitutes acceptable use of the AI (ensuring AI is used as intended and in line with documented purposes) and putting measures in place to prevent or detect misuse. For example, if an AI system is only approved for certain types of decisions, the organization should have controls to ensure it isn’t repurposed in a risky manner without oversight. This control helps keep AI applications within safe and ethical bounds.
A.10 Third-Party and Customer Relationships (3 Controls)
Addresses managing external relationships in the AI context. Organizations often rely on third-party AI services or provide AI outputs to customers. A.10 requires allocating responsibilities in such relationships (e.g. clearly defining who is responsible for which part of the AI system’s governance), and establishing processes for managing suppliers or partners involved in AI development. It also entails considering customer expectations and requirements (like contractual obligations around AI ethics or performance). This control ensures that outsourcing or collaboration does not create gaps in AI risk management – accountability is shared and communicated.
Each of these themes targets a specific dimension of AI risk or governance. The relevance of these controls lies in how they collectively ensure AI systems are responsible, transparent, safe, and aligned with both organizational goals and compliance requirements.
When preparing your SoA, you should address all Annex A controls and describe how each chosen control mitigates relevant AI risks.
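Because every Annex A control must appear in the SoA (whether included or excluded), a simple completeness check can catch omissions before an audit. This is a minimal sketch: the control list below is a small illustrative subset, not the full Annex A catalogue.

```python
# Minimal completeness check: every Annex A control must appear in the
# SoA, whether included or excluded. This set is a small illustrative
# subset of Annex A references, not the complete catalogue.
ANNEX_A_SUBSET = {"A.2.2", "A.2.3", "A.2.4", "A.3.2", "A.3.3"}

def missing_controls(soa_rows, required=ANNEX_A_SUBSET):
    """Return the Annex A references absent from the SoA rows."""
    covered = {row["ref"] for row in soa_rows}
    return sorted(required - covered)

soa = [
    {"ref": "A.2.2", "applicable": True},
    {"ref": "A.2.3", "applicable": True},
    {"ref": "A.3.2", "applicable": False},  # excluded, but still listed
]
print(missing_controls(soa))  # → ['A.2.4', 'A.3.3']
```

Note that the excluded control (A.3.2 here) still counts as covered: exclusion with justification satisfies the requirement, while silence does not.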
ISO 42001 Compliance and Audit Considerations
An accurate and up-to-date SoA is invaluable for demonstrating compliance with ISO 42001 and other AI regulations.
It essentially maps your controls to the ISO 42001 requirements and ethical AI principles, showing auditors and regulators that you have a systematic approach to manage AI risks.
Regulatory Alignment
The SoA helps ensure your AI management practices align with emerging laws and guidelines (for example, the EU AI Act or national AI regulations).
By explicitly addressing controls for bias mitigation, privacy protection, safety, etc., the SoA serves as evidence that the organization meets societal and legal expectations for responsible AI.
If regulators inquire how you govern AI risks, the SoA can be used to illustrate the controls in place for each risk area (e.g. a control for algorithmic transparency can tie into compliance with a transparency requirement in law).
Essentially, the SoA supports compliance by making your AI governance measures transparent and traceable.
Internal and External Audits
Auditors treat the SoA as a primary document during ISO 42001 assessments. External certification auditors will review the SoA in detail to verify that all Annex A controls have been considered and that your justifications are sound.
In fact, for ISO 27001 (and similarly for 42001), the SoA is “the primary document” auditors use to examine your management system’s controls. In ISO 42001 audits, the SoA is used to plan what evidence the auditor needs to see for each implemented control, and to check that any omitted controls are truly not applicable.
Your ISO 42001 certificate and scope statement will even reference the SoA (including its version/date) as part of the formal certification scope, highlighting how central it is to compliance.
Likewise, internal audits (required by Clause 9.2 of ISO 42001) will use the SoA as a checklist to ensure the organization continues to meet all its control commitments.
Routine internal reviews of the SoA help catch any gaps or lapses in control implementation before an external audit occurs.
Audit Trail and Documentation
A well-prepared SoA strengthens your audit trail. Each control listed in the SoA should link to documentation or evidence (policy documents, procedures, technical standards, training records, etc.) that auditors can examine.
During an audit, you should be able to show these documents to prove that the control is not just listed, but actively implemented. Additionally, maintaining version history of the SoA (with dates of updates and approvals) is important – auditors may want to see that the SoA is kept current and that changes go through proper change control.
This demonstrates continuous improvement, a core requirement of ISO 42001, by showing how controls and applicability decisions have been updated over time in response to new risks or improvements.
Facilitating Risk-Based Audits
Because ISO 42001 emphasizes risk-based thinking, auditors will pay close attention to the rationales in your SoA.
Be prepared to discuss how each included control mitigates specific AI risks, and why any excluded control is not relevant due to your risk assessment. A strong SoA that clearly ties controls to identified risks will satisfy auditors that you haven’t arbitrarily skipped controls – instead, you’ve made informed decisions.
This risk-to-control mapping in the SoA can also streamline the audit: it helps auditors follow your logic and focus their evaluation on the most pertinent areas.
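The risk-to-control mapping described above can be represented as a simple lookup structure. This is a hypothetical sketch: the risk names, control assignments, and helper function are illustrative, though the control references follow Annex A numbering.

```python
# Hypothetical risk register mapping identified AI risks to the Annex A
# controls selected to treat them; risk names and mappings are illustrative.
RISK_TO_CONTROLS = {
    "algorithmic bias": ["A.5.4", "A.7.4"],
    "lack of transparency": ["A.8.2", "A.8.5"],
    "data provenance gaps": ["A.7.3"],
}

def controls_for_audit(risks):
    """Collect the controls an auditor would trace for the given risks."""
    selected = set()
    for risk in risks:
        selected.update(RISK_TO_CONTROLS.get(risk, []))
    return sorted(selected)

print(controls_for_audit(["algorithmic bias", "data provenance gaps"]))
# → ['A.5.4', 'A.7.3', 'A.7.4']
```

Keeping this mapping alongside the SoA lets both internal reviewers and external auditors trace each justification back to a documented risk.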
All in all, the SoA is a linchpin for both compliance and audits. It supports regulatory compliance by explicitly documenting controls for ethical AI, and it streamlines audits by providing a one-stop overview of your control environment.
Organizations should leverage the SoA as a tool to communicate their AI governance posture – to auditors, regulators, and even clients – thereby building trust that the AI systems are under robust control.
Regular audits (internal/external) and updates to the SoA ensure that this compliance posture is maintained and continuously improved over time.
SoA Examples and Excel Templates
Because the Statement of Applicability contains a lot of information in a structured format, many organizations use Excel or similar spreadsheet templates to create and maintain their SoA. A typical SoA template will list each Annex A control in rows, with multiple columns to capture the details described above (applicability, justification, implementation status, references, etc.). Using a template ensures consistency and makes it easier to update the SoA as things change.
ISO 42001 SoA Spreadsheet
A comprehensive Excel-based SoA tool (commercial product) that is 100% editable. It comes pre-populated with the full list of ISO 42001 controls and provides intuitive features like checkboxes for marking applicability (✓/✗), dropdown menus for selecting risk mitigation actions, and even a dashboard summary of control status.
This can save time by guiding you through the applicability assessment and giving a visual overview of in-scope vs. out-of-scope controls.
When using an Excel template, remember to align it to your context: fill in the justifications that reflect your risk assessment and link each control to your internal policies or procedures. A filled-out SoA can also serve as a useful reference for your team – it concisely shows all the AI controls your organization has committed to, which is great for internal awareness and training.
Wrapping up
Developing and maintaining a robust Statement of Applicability (SoA) under ISO 42001 is integral to an organization’s AI governance strategy.
By systematically identifying relevant risks, aligning each Annex A control with those risks, and documenting justifications for both included and excluded controls, the SoA becomes a living, practical guide that demonstrates a commitment to ethical and compliant AI practices.
It not only supports internal and external audits but also provides traceability for regulators and stakeholders, showing how AI systems meet emerging legal, ethical, and societal expectations.
Making use of spreadsheets or structured templates can streamline the process, ensure consistency, and facilitate updates as risks and regulations evolve – helping your organization maintain continuous improvement and stay on the path to ISO 42001 certification.
FAQ
What is the SoA in ISO 42001?
It’s a document listing all relevant Annex A controls for an AI management system, noting if they’re applied or excluded, and why.
Why is the SoA important for certification?
It shows auditors how your organization addresses AI risks and justifies control choices, proving you meet ISO 42001 requirements.
How do we justify excluding a control?
Provide a clear reason tied to your risk assessment—e.g., the risk doesn’t exist or is covered by another measure.
Why must the SoA be updated regularly?
AI systems, risks, and regulations change over time, so the SoA needs continuous revisions to stay aligned and compliant.