Spotlight on: AS ISO/IEC 42001:2023, Artificial intelligence — Management system
Artificial intelligence (AI) now underpins products, services and internal processes. With this growth come new governance questions, particularly around risk, transparency, data, human oversight and ongoing performance. AS ISO/IEC 42001:2023, Artificial intelligence — Management system sets requirements for establishing, implementing, maintaining and continually improving an AI management system (AIMS) so organisations can use, develop and operate AI with clarity and consistency. Adopted as an identical Australian Standard in February 2024, it follows ISO’s harmonised structure to align with existing management systems.
What is AS ISO/IEC 42001:2023?
It is a management system standard for AI. It specifies requirements for policy, leadership, planning, support, operation, performance evaluation and improvement, plus normative controls and guidance tailored to AI (e.g., data quality, AI system impact assessment and human oversight). The text is an identical adoption of ISO/IEC 42001:2023 in Australia.
Who is this standard for?
- CIOs, CTOs and Chief Data/AI Officers – to establish AI policy, scope and governance that align with strategy.
- Product owners and delivery leads – to build AI life‑cycle controls into roadmaps, releases and change processes.
- Data science/ML and engineering teams – to document data, models, evaluation and monitoring with objective criteria.
- Risk, audit and compliance – to apply AI risk assessment, impact assessment, internal audit and management review.
- Information security and privacy officers – to integrate AI controls with ISO/IEC 27001 and ISO/IEC 27701 programs.
- Legal and procurement – to allocate responsibilities across suppliers and customers, with reporting and incident plans.
- HR and training – to define competence and awareness for roles that interact with or oversee AI systems.
What does AS ISO/IEC 42001:2023 cover?
- Scope & terms – applies to any organisation that provides or uses products/services that utilise AI systems; leverages ISO/IEC 22989 terminology.
- Leadership & policy – AI policy, roles and accountabilities across the AI life cycle.
- Planning – AI risk assessment and treatment; AI system impact assessment process; measurable AI objectives.
- Support – resources (data, tooling, compute, human), competence, awareness, communication, documented information.
- Operation – operational planning and control, ongoing risk assessment/treatment, and periodic impact assessments.
- Performance – monitoring/measurement, internal audit and management review.
- Improvement – continual improvement, nonconformity and corrective action.
- Annex A (normative) – reference control objectives and controls (e.g., AI policy, roles, data quality, user information, logs, human oversight, suppliers/customers).
- Annex B (normative) – implementation guidance for all Annex A controls.
- Annex C/D (informative) – example objectives/risk sources and integration with other MSS (e.g., ISO/IEC 27001, ISO/IEC 27701, ISO 9001).
Why does this matter?
Organisations increasingly rely on AI to make or support decisions. 42001 gives structure to how teams plan, build, deploy and monitor AI so the organisation can:
- Address risk and impacts early with repeatable AI risk assessment and AI system impact assessment methods embedded in delivery.
- Operate with transparency by documenting intended use, limitations, data provenance and performance targets, then communicating what users need to know.
- Sustain performance over time using monitoring, logs, retraining triggers and management reviews—important where data drift or continuous learning may change system behaviour.
- Align with existing programs by integrating AI governance with information security, privacy and quality management frameworks via the harmonised ISO structure.
This approach supports practical questions teams often raise: What documents do we need? Who approves models? How do we judge acceptable error? What’s our plan if performance drops? What do we tell users? 42001 provides requirements and control guidance to answer these consistently across projects and suppliers.
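As one illustration of the kind of operational control the standard expects teams to define, a delivery team might encode its “plan if performance drops” as a simple scheduled check. The standard itself is technology-neutral and does not prescribe tooling, metrics or thresholds; the metric names, limits and follow-up actions below are hypothetical, sketched here only to show the idea.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical limits agreed at management review; AS ISO/IEC 42001 does not
# prescribe particular metrics or values.
ACCURACY_FLOOR = 0.90          # minimum acceptable accuracy for this use case
DRIFT_ALERT_THRESHOLD = 0.15   # illustrative drift-score limit

@dataclass
class MonitoringResult:
    """One entry in the operational monitoring log (kept as documented information)."""
    checked_at: str
    accuracy: float
    drift_score: float
    retraining_triggered: bool
    notes: str

def evaluate_model_health(accuracy: float, drift_score: float) -> MonitoringResult:
    """Compare live metrics against the agreed targets and record the outcome."""
    retrain = accuracy < ACCURACY_FLOOR or drift_score > DRIFT_ALERT_THRESHOLD
    notes = "within agreed limits"
    if retrain:
        notes = "raise nonconformity, notify AI system owner, schedule retraining review"
    return MonitoringResult(
        checked_at=datetime.now(timezone.utc).isoformat(),
        accuracy=accuracy,
        drift_score=drift_score,
        retraining_triggered=retrain,
        notes=notes,
    )

if __name__ == "__main__":
    # Example run with made-up metrics from a nightly evaluation job.
    print(evaluate_model_health(accuracy=0.87, drift_score=0.21))
```

Under an AIMS, the value of a check like this lies less in the code than in the surrounding discipline: the thresholds are agreed in advance, each result is logged as documented information, and a defined escalation follows whenever a limit is breached.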
What’s next?
- Adoption and integration – organisations can align 42001 with existing ISO programs (e.g., 27001/27701/9001) and embed controls into delivery tooling and vendor management.
- Local engagement – Australia’s mirror committee IT‑043, Artificial Intelligence, supports the national viewpoint; monitor Standards Australia channels for proposals or updates.
- Related guidance – see ISO/IEC 23894 (AI risk management), 25059 (quality model for AI systems) and NIST AI RMF referenced in the bibliography for complementary practices.
Explore the standard
AS ISO/IEC 42001:2023 is available through the Standards Australia Store and our distribution partners.
Frequently Asked Questions
Is AS ISO/IEC 42001 certifiable?
The standard includes internal audit and management review requirements that support auditability. Organisations may adopt it for internal conformance or seek third‑party assessment where available through appropriate conformity assessment bodies. The text itself specifies requirements; certification arrangements are outside its scope.
How does 42001 relate to ISO/IEC 27001 and ISO/IEC 27701?
42001 shares ISO’s harmonised structure and includes guidance on integrating AI governance with information security and privacy management systems. Annex D highlights joint use so AI‑specific controls sit coherently alongside security and privacy controls.
What core documents should we expect to maintain?
Typical artefacts include: AI policy, AIMS scope, AI risk assessment and treatment plan with a statement of applicability, AI system impact assessments, competence records, operational monitoring and event logs, internal audit reports and management review outputs.
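As a rough sketch of how such artefacts might be tracked, a team could keep a simple register like the one below. The standard does not mandate any particular register format; the field names, file locations and clause references here are invented for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class AimsArtefact:
    """One row in a hypothetical register of AIMS documented information."""
    name: str            # e.g. "AI policy", "AI system impact assessment"
    owner: str           # accountable role, rather than a named individual
    location: str        # where the controlled copy lives
    last_reviewed: str   # ISO 8601 date of the last review
    related_clauses: list[str] = field(default_factory=list)  # indicative only

register = [
    AimsArtefact("AI policy", "Chief Data/AI Officer",
                 "governance/ai-policy.md", "2024-06-30", ["5.2"]),
    AimsArtefact("AI risk assessment and treatment plan", "Risk & Compliance",
                 "risk/ai-risk-register.xlsx", "2024-06-30", ["6.1"]),
    AimsArtefact("AI system impact assessment - credit scoring model",
                 "Product owner", "assessments/credit-scoring-aisia.docx",
                 "2024-05-14", ["6", "8"]),
]

for artefact in register:
    print(f"{artefact.name} (owner: {artefact.owner}, last reviewed {artefact.last_reviewed})")
```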
How does the standard handle data quality and provenance?
Annex B details controls for data quality, provenance, acquisition and preparation, recognising their effect on fairness, performance and explainability. It encourages documenting sources, labelling, transformations and known biases across the AI life cycle.
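A minimal, hypothetical provenance record along those lines might look like the following. The dataset, fields and values are invented for illustration, and real programmes may record the same information in a dedicated data catalogue instead.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Illustrative provenance record for a training dataset."""
    name: str
    source: str                      # where the data was acquired from
    acquisition_date: str            # ISO 8601 date
    labelling_process: str           # how labels were produced and checked
    transformations: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

loan_applications = DatasetRecord(
    name="loan-applications-2023",
    source="internal CRM export, de-identified",
    acquisition_date="2024-02-01",
    labelling_process="outcomes labelled from repayment records; 5% sample manually audited",
    transformations=["dropped records with missing income", "scaled numeric features"],
    known_limitations=["under-represents applicants under 25", "no data before 2019"],
)

# A record like this can be reviewed alongside the AI system impact assessment
# and retained as documented information across the AI life cycle.
print(loan_applications)
```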
Which sectors benefit most?
The AIMS model is sector‑agnostic. Annex D lists examples such as health, transport, finance, employment, defence and energy, but any domain that deploys AI at scale can apply it.