ISO 42005 §5.1 — General:
Establishing a Structured AI System Impact Assessment Process
Clause 5.1 asks you to design and use a consistent approach for assessing the impacts and risks of your AI systems.
The exact steps can vary, but your approach should reflect your organizational context and risk appetite, the intended use of each AI system, and the external environment (laws, regulator stances, cultural expectations, and market trends).
What 5.1 expects (at a glance)
- A documented, repeatable process to assess AI impacts/risks.
- Grounding in internal factors (governance, policies, objectives, contracts, intended use, risk appetite).
- Attention to external factors (legal requirements and prohibitions, regulator guidance, incentives/penalties, culture/ethics, competitive trends).
- Awareness that Clause 5 (as a whole) lays out elements you can include in your assessment process.
Implementation guidance
Define triggers
Run an AI impact assessment (AIIA) when any of the following occur:
- New AI system or significant feature
- Major model change, retraining, or prompt/policy shift
- New data sources
- New users or use cases
- Repurposing
- Geographic expansion
- High‑risk integrations
- Post‑incident review
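The triggers above can be encoded as a simple checklist so that any qualifying change event automatically forces a new assessment. A minimal sketch in Python; the trigger names and the `requires_aiia` helper are illustrative, not terms from the standard:

```python
# Illustrative re-assessment triggers, drawn from the list above.
AIIA_TRIGGERS = {
    "new_system", "major_model_change", "new_data_source",
    "new_use_case", "repurposing", "geographic_expansion",
    "high_risk_integration", "post_incident_review",
}

def requires_aiia(change_events):
    """Return True if any recorded change event matches an AIIA trigger."""
    return bool(AIIA_TRIGGERS.intersection(change_events))
```

For example, a release that introduces a new data source (`requires_aiia({"new_data_source"})`) would be routed to assessment, while a cosmetic UI change would not.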
Choose the method & scale
Adopt a tiered approach:
- Screening (quick questionnaire) → route low‑risk systems to lightweight controls; medium/high to full AIIA.
- Full AIIA includes context, risk analysis, mitigations, sign‑off, and monitoring plan.
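One way to operationalize the tiering is a screening questionnaire whose answers map to a risk tier and a route. A hedged sketch; the questions, weights, and thresholds below are placeholders that your own risk function would define and calibrate:

```python
def screen(answers):
    """Route a system based on yes/no screening answers.

    `answers` maps question IDs to booleans; each True adds its weight.
    Questions, weights, and cut-offs are illustrative examples only.
    """
    weights = {
        "processes_personal_data": 2,
        "affects_individual_rights": 3,
        "fully_automated_decisions": 3,
        "novel_model_or_vendor": 1,
        "regulated_sector": 2,
    }
    score = sum(w for q, w in weights.items() if answers.get(q))
    if score <= 2:
        return "low", "lightweight controls"
    if score <= 5:
        return "medium", "full AIIA"
    return "high", "full AIIA + senior sign-off"
```

A system that only introduces a novel vendor screens low and gets lightweight controls; one that makes fully automated decisions affecting individual rights screens high and goes to a full AIIA with senior sign-off.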
Assign roles
Define who drafts, who reviews, and who approves.
- AIIA Owner (product/AI lead)
- Risk/Compliance
- Legal/Privacy
- Security
- Data/Model lead
- Responsible AI/Ethics
- Business sponsor
Gather internal context
- Organizational governance, objectives, policies/procedures
- Contractual obligations
- Intended use and users of the AI system
- Declared risk appetite and criticality
Map external context
- Applicable laws and any prohibited uses
- Regulator policies/guidance and enforcement posture
- Incentives/penalties tied to the intended use (e.g., sector rules)
- Cultural norms, values, ethics around the use case
- Competitive landscape and product/service trends using AI
Analyze risks & impacts
Consider safety, security, privacy, bias/fairness, explainability, robustness, IP/contract issues, human oversight, environmental and societal impact.
Decide & document
- Residual risk vs. risk appetite
- Go/no‑go or go‑with‑mitigations
- Commit to controls, owners, due dates, and acceptance of residual risk.
Approval & recordkeeping
- Formal sign‑off (names, dates, versions).
- Store assessments, evidence, and decision logs in a durable system.
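A decision log is easiest to audit when each sign-off is a structured record rather than free text. A minimal sketch of what one entry might capture; the field names and example values are illustrative, not mandated:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AiiaDecision:
    system: str
    version: str
    decision: str        # "go", "no-go", or "go-with-mitigations"
    residual_risk: str   # e.g. "medium"
    accepted_by: str     # named approver of the residual risk
    accepted_on: str     # ISO 8601 date
    mitigations: tuple = ()

record = AiiaDecision(
    system="support-assistant", version="1.4",
    decision="go-with-mitigations", residual_risk="medium",
    accepted_by="Product VP", accepted_on="2025-01-15",
    mitigations=("PII redaction", "human review for escalations"),
)

# Serialize for the durable decision log.
log_entry = json.dumps(asdict(record), indent=2)
```

Keeping the approver, date, and version in every entry directly satisfies the "names, dates, versions" sign-off expectation above.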
Operationalize monitoring
- Define KPIs/KRIs (e.g., model performance, drift, incident rates).
- Set review cadence and re‑assessment triggers.
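The review cadence can be backed by automated checks: when a KRI breaches its threshold, the system is flagged for re-assessment. A sketch assuming simple scalar metrics; the metric names and threshold values are hypothetical:

```python
def needs_reassessment(metrics, thresholds):
    """Return the sorted list of KRIs that breach their thresholds.

    `metrics` and `thresholds` map KRI names to numbers; a breach
    means the observed value meets or exceeds the threshold.
    """
    return sorted(k for k, limit in thresholds.items()
                  if metrics.get(k, 0) >= limit)

breaches = needs_reassessment(
    metrics={"drift_score": 0.31, "incident_rate": 0.0},
    thresholds={"drift_score": 0.25, "incident_rate": 0.05},
)
# A non-empty list means a re-assessment trigger has fired.
```

Wiring a check like this into a scheduled job turns the re-assessment triggers from a policy statement into an enforced control.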
Internal vs external factors (quick reference)
Examples to capture in your AIIA:

Factor | Examples
--- | ---
Internal | Organizational governance, objectives, policies/procedures; contractual obligations; intended use and users; declared risk appetite and criticality
External | Applicable laws and prohibited uses; regulator guidance and enforcement posture; sector incentives/penalties; cultural norms, values, and ethics; competitive trends
Evidence & artifacts to keep
- AIIA screening form + full assessment report
- Context pack (use case description, data lineage, model card/summary)
- Legal/privacy mapping (e.g., DPIA link if applicable)
- Risk register entries & mitigation plan
- Approval record (who accepted what residual risk and when)
- Monitoring plan and review logs
Example
Use case: Customer‑support assistant that drafts responses.
- Internal: High volume, medium business criticality; policy requires human‑in‑the‑loop; moderate risk appetite.
- External: Consumer‑protection and privacy laws apply; regulator guidance stresses transparency; strong cultural expectation of non‑discriminatory service.
- Decision: Proceed with guardrails (PII redaction, refusal rules, human review for escalations), transparency notice, and monthly drift checks. Residual risk accepted by the product VP.
Maturity rubric (useful for your roadmap)
- Level 1 — Ad hoc: Case‑by‑case reviews; minimal records.
- Level 2 — Repeatable: Screening + basic AIIA; key roles named; simple repository.
- Level 3 — Managed: Tiered process, clear RACI, measurable KPIs/KRIs, periodic re‑assessments.
- Level 4 — Optimized: Continuous monitoring integrated with CI/CD; automated triggers; portfolio‑level risk insights.
Suggested RACI for Clause 5.1 process
Task | Product/AI Lead | Risk/Compliance | Legal/Privacy | Security | RAI/Ethics | Exec Sponsor
--- | --- | --- | --- | --- | --- | ---
Define triggers & workflow | R | A | C | C | C | I |
Screening | R | A | C | C | C | I |
Full AIIA drafting | R | A | C | C | C | I |
Approvals & risk acceptance | C | R | C (privacy sign‑off) | C | C | A
Monitoring & re‑assessment | R | A | C | C | C | I |
(R=Responsible, A=Accountable, C=Consulted, I=Informed)
KPIs/KRIs you can track
- % AI systems with current AIIA
- Median AIIA turnaround time
- # of assessments by risk tier (screened vs full)
- # of incidents or policy exceptions; time‑to‑mitigate
- Re‑assessment rate after significant changes
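These metrics can be computed straight from the assessment repository. A brief sketch of the coverage and turnaround KPIs; the record fields and sample systems are hypothetical:

```python
from statistics import median

# Hypothetical inventory records from the AIIA repository.
systems = [
    {"name": "support-assistant", "aiia_current": True,  "aiia_days": 12},
    {"name": "fraud-scorer",      "aiia_current": True,  "aiia_days": 30},
    {"name": "doc-summarizer",    "aiia_current": False, "aiia_days": None},
]

covered = [s for s in systems if s["aiia_current"]]
coverage_pct = 100 * len(covered) / len(systems)        # % with current AIIA
turnaround = median(s["aiia_days"] for s in covered)    # median days to complete
```

Tracking these two numbers per quarter gives an early signal of whether the process is keeping pace with the AI portfolio.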
Common pitfalls (and fixes)
- Pitfall: Treating AIIA as a one‑off document.
- Fix: Add triggers and monitoring with scheduled reviews.
- Pitfall: Over‑engineering for low‑risk use cases.
- Fix: Use a tiered process with lightweight screening.
- Pitfall: Missing external prohibitions or regulator expectations.
- Fix: Include a legal/regulatory checklist in every AIIA.
- Pitfall: Unclear ownership of residual risk.
- Fix: Name an approver and record explicit acceptance.

