ISO 42005 §5.1 — General:
Establishing a Structured AI System Impact Assessment Process


Clause 5.1 asks you to design and use a consistent approach for assessing the impacts and risks of your AI systems.

The exact steps can vary, but your approach should reflect your organizational context and risk appetite, the intended use of each AI system, and the external environment (laws, regulator stances, cultural expectations, and market trends).

In practice, Clause 5.1 expects:

  1. A documented, repeatable process for assessing AI impacts and risks.
  2. Tailoring of that process to internal factors (governance, policies, objectives, contracts, intended use, risk appetite).
  3. Attention to external factors (legal requirements and prohibitions, regulator guidance, incentives/penalties, culture/ethics, competitive trends).
  4. Awareness that Clause 5 as a whole lays out elements you can include in your assessment process.

Define triggers

Run an AI impact assessment (AIIA) when any of the following occur (a trigger-check sketch follows the list):

  • New AI system or significant feature
  • Major model change, retraining, or prompt/policy shift
  • New data sources
  • New users or use cases
  • Repurposing
  • Geographic expansion
  • High‑risk integrations
  • Post‑incident review
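To make these triggers enforceable rather than aspirational, many teams encode them in tooling so change events are checked automatically. Below is a minimal Python sketch; the trigger names, the `ChangeEvent` shape, and `requires_aiia` are all illustrative assumptions, not anything defined by ISO 42005:

```python
from dataclasses import dataclass

# Trigger names are illustrative; align them with your own change taxonomy.
AIIA_TRIGGERS = {
    "new_system", "major_model_change", "new_data_source",
    "new_use_case", "repurposing", "geo_expansion",
    "high_risk_integration", "post_incident",
}

@dataclass
class ChangeEvent:
    system_id: str
    change_type: str  # e.g., "major_model_change"

def requires_aiia(event: ChangeEvent) -> bool:
    """Return True when a change event matches a defined AIIA trigger."""
    return event.change_type in AIIA_TRIGGERS

# Retraining the support model should trigger a re-assessment.
assert requires_aiia(ChangeEvent("support-bot", "major_model_change"))
```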

Adopt a tiered approach (a routing sketch follows these bullets):

  • Screening (quick questionnaire) → route low‑risk systems to lightweight controls; medium/high to full AIIA.
  • Full AIIA includes context, risk analysis, mitigations, sign‑off, and monitoring plan.
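Here is one way the screening step could work in code. The questions, weights, and tier thresholds are assumptions an organization would calibrate against its own risk appetite; nothing here is prescribed by the standard:

```python
# Question weights and tier thresholds are illustrative assumptions.
SCREENING_QUESTIONS = {
    "processes_personal_data": 3,
    "affects_legal_rights": 4,
    "fully_automated_decision": 4,
    "serves_vulnerable_users": 3,
    "novel_model_or_vendor": 2,
}

def screen(answers: dict[str, bool]) -> str:
    """Score yes/no screening answers and return a risk tier."""
    score = sum(w for q, w in SCREENING_QUESTIONS.items() if answers.get(q))
    if score >= 8:
        return "high"    # full AIIA with executive sign-off
    if score >= 3:
        return "medium"  # full AIIA
    return "low"         # lightweight controls, periodic review

tier = screen({"processes_personal_data": True, "serves_vulnerable_users": True})
print(tier)  # -> "medium" (score 6): route to a full AIIA
```

Keeping the scoring logic in one place makes routing auditable: the answers, the score, and the resulting tier can all be stored as evidence alongside the assessment.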

Assign roles

Define who drafts, who reviews, and who approves:

  • AIIA Owner (product/AI lead)
  • Risk/Compliance
  • Legal/Privacy
  • Security
  • Data/Model lead
  • Responsible AI/Ethics
  • Business sponsor
Capture internal and external context

  • Organizational governance, objectives, policies/procedures
  • Contractual obligations
  • Intended use and users of the AI system
  • Declared risk appetite and criticality
  • Applicable laws and any prohibited uses
  • Regulator policies/guidance and enforcement posture
  • Incentives/penalties tied to the intended use (e.g., sector rules)
  • Cultural norms, values, ethics around the use case
  • Competitive landscape and product/service trends using AI

Assess impact areas

Consider safety, security, privacy, bias/fairness, explainability, robustness, IP/contract issues, human oversight, and environmental and societal impact.

Decide, record, and monitor

  • Compare residual risk against the declared risk appetite.
  • Record a go/no‑go or go‑with‑mitigations decision.
  • Commit to controls, owners, due dates, and acceptance of residual risk.
  • Capture formal sign‑off (names, dates, versions); a minimal record structure is sketched after this list.
  • Store assessments, evidence, and decision logs in a durable system.
  • Define KPIs/KRIs (e.g., model performance, drift, incident rates).
  • Set a review cadence and re‑assessment triggers.
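A minimal sketch of such a decision record follows. The field names are illustrative assumptions, not anything mandated by ISO 42005; the point is that the decision, the residual-risk owner, and the review date are all explicit:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AiiaDecision:
    system_id: str
    version: str
    decision: str                     # "go", "no_go", or "go_with_mitigations"
    residual_risk: str                # compared against declared risk appetite
    mitigations: list[str] = field(default_factory=list)
    approver: str = ""                # named owner of the residual risk
    approved_on: date | None = None
    next_review: date | None = None   # drives the review cadence

record = AiiaDecision(
    system_id="support-bot",
    version="1.2",
    decision="go_with_mitigations",
    residual_risk="medium",
    mitigations=["PII redaction", "human review for escalations"],
    approver="Product VP",
    approved_on=date(2025, 1, 15),
    next_review=date(2025, 7, 15),
)
```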

Internal vs external factors (quick reference)

Examples to capture in your AIIA

Internal

  • Strategy alignment
  • AI governance roles and relevant policies (e.g., data, privacy, security, RAI)
  • Contracts and SLAs
  • System purpose, users, and context
  • Declared risk appetite
  • Criticality/business impact

External

  • Jurisdictional laws and prohibited uses
  • Regulator guidance and case law
  • Sector incentives/penalties
  • Cultural norms, ethics, and stakeholder expectations
  • Competitor practices and market trends

Evidence & artifacts to keep

  • AIIA screening form + full assessment report
  • Context pack (use case description, data lineage, model card/summary)
  • Legal/privacy mapping (e.g., DPIA link if applicable)
  • Risk register entries & mitigation plan
  • Approval record (who accepted what residual risk and when)
  • Monitoring plan and review logs
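One way to keep these artifacts durable and versioned is a simple write-once layout per system and version. The folder structure and file names below are illustrative assumptions, not a prescribed repository design:

```python
import json
from pathlib import Path

def store_evidence(system_id: str, version: str, artifacts: dict[str, dict]) -> Path:
    """Persist each artifact of the evidence pack as a JSON file."""
    root = Path("aiia-repository") / system_id / version
    root.mkdir(parents=True, exist_ok=True)
    for name, payload in artifacts.items():
        (root / f"{name}.json").write_text(json.dumps(payload, indent=2))
    return root

store_evidence("support-bot", "1.2", {
    "screening_form": {"tier": "medium"},
    "assessment_report": {"risks": ["privacy", "bias"]},
    "approval_record": {"approver": "Product VP", "date": "2025-01-15"},
    "monitoring_plan": {"drift_check": "monthly"},
})
```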

Use case: Customer‑support assistant that drafts responses.

  • Internal: High volume, medium business criticality; policy requires human‑in‑the‑loop; moderate risk appetite.
  • External: Consumer‑protection and privacy laws apply; regulator guidance stresses transparency; strong cultural expectation of non‑discriminatory service.
  • Decision: Proceed with guardrails (PII redaction, refusal rules, human review for escalations), transparency notice, and monthly drift checks. Residual risk accepted by the product VP. One guardrail, PII redaction, is sketched below.
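As a concrete illustration of that guardrail, here is a minimal PII-redaction sketch. The regex patterns are deliberately simplistic and assumed for illustration only; a production system would rely on a vetted PII-detection library:

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII spans with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 010-2030."))
# -> "Reach me at [EMAIL REDACTED] or [PHONE REDACTED]."
```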
Maturity levels

  • Level 1 — Ad hoc: Case‑by‑case reviews; minimal records.
  • Level 2 — Repeatable: Screening + basic AIIA; key roles named; simple repository.
  • Level 3 — Managed: Tiered process, clear RACI, measurable KPIs/KRIs, periodic re‑assessments.
  • Level 4 — Optimized: Continuous monitoring integrated with CI/CD; automated triggers; portfolio‑level risk insights.

Example RACI for Clause 5.1 process

| Task | Product/AI Lead | Risk/Compliance | Legal/Privacy | Security | RAI/Ethics | Exec Sponsor |
|---|---|---|---|---|---|---|
| Define triggers & workflow | R | A | C | C | C | I |
| Screening | R | A | C | C | C | I |
| Full AIIA drafting | R | A | C | C | C | I |
| Approvals & risk acceptance | C | A | A (privacy) | C | C | A |
| Monitoring & re‑assessment | R | A | C | C | C | I |

(R=Responsible, A=Accountable, C=Consulted, I=Informed)

 

Metrics to track

  • % of AI systems with a current AIIA
  • Median AIIA turnaround time
  • # of assessments by risk tier (screened vs. full)
  • # of incidents or policy exceptions, and time‑to‑mitigate
  • Re‑assessment rate after significant changes
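Two of these metrics are straightforward to compute from stored assessment records. The record shape below is an assumption for illustration; real data would come from your AIIA repository:

```python
from statistics import median

# Illustrative records; field names are assumptions.
records = [
    {"system": "support-bot",  "current": True,  "turnaround_days": 12},
    {"system": "fraud-scorer", "current": True,  "turnaround_days": 21},
    {"system": "hr-screener",  "current": False, "turnaround_days": 30},
]

coverage = sum(r["current"] for r in records) / len(records)
print(f"% systems with current AIIA: {coverage:.0%}")  # -> 67%
print(f"Median AIIA turnaround: {median(r['turnaround_days'] for r in records)} days")  # -> 21 days
```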
Common pitfalls and fixes

  • Pitfall: Treating the AIIA as a one‑off document.
    • Fix: Add triggers and monitoring with scheduled reviews.

  • Pitfall: Over‑engineering for low‑risk use cases.
    • Fix: Use a tiered process with lightweight screening.

  • Pitfall: Missing external prohibitions or regulator expectations.
    • Fix: Include a legal/regulatory checklist in every AIIA.

  • Pitfall: Unclear ownership of residual risk.
    • Fix: Name an approver and record explicit acceptance.