ISO 27001 Compliance for AI and LLM Deployments
ISO 27001:2022 introduced new controls directly relevant to AI and cloud-based information systems. For Australian organisations deploying custom LLMs, understanding how AI fits within your Information Security Management System (ISMS) is not optional: it determines whether your certification remains valid and whether your customers and partners can rely on your security posture.
Why AI Creates New ISO 27001 Obligations
ISO 27001:2022 was released shortly before the explosion of enterprise AI adoption. While the standard does not mention LLMs directly, numerous controls in Annex A apply directly to AI systems. More importantly, using public AI platforms may create non-conformances with existing controls that organisations have held for years. A private LLM deployment, by contrast, can be designed to satisfy these controls from the outset.
Public AI Creates ISMS Non-Conformances
When employees use ChatGPT or Copilot to process information assets covered by your ISMS scope, they may be transferring assets to a third-party system without a supplier security agreement, without data classification controls, and without approval from the information owner. This creates potential non-conformances against Annex A 5.19 (information security in supplier relationships), 5.12 (classification of information), and 5.23 (information security for use of cloud services).
ISO 27001:2022 Annex A 5.23: Cloud Services
Control 5.23 requires organisations to establish processes for acquiring, using, managing, and exiting cloud services consistent with information security requirements. Using a public AI platform for processing sensitive organisational information is a cloud service acquisition. Without a formal risk assessment, supplier agreement, and ongoing monitoring, your use of public AI platforms may be a direct non-conformance that an auditor will find.
Private LLM as a Conformant Architecture
A correctly designed private LLM deployment can satisfy the relevant ISO 27001 controls in ways public AI platforms cannot. Data stays within the organisation's defined boundary, supplier relationships sit within your existing IT infrastructure arrangements, and the system can be included in your asset inventory, risk assessment, and monitoring framework. The architecture makes compliance demonstrable, not just claimed.
ISO 27001 Controls and How Private LLM Satisfies Them
The following Annex A controls from ISO 27001:2022 are most directly relevant to AI and LLM deployments. For each, we explain what private deployment enables and what risks public AI introduces.
A 5.12: Classification of Information
Organisations must classify information by sensitivity and apply appropriate handling controls. AI systems that process information must respect that classification.
- Private LLM: access controls can be configured by information classification level
- Private LLM: the system processes only information appropriate to its clearance level
- Public AI risk: employees frequently paste classified information into general-purpose AI without assessment
- Implementation: tag documents in RAG index with classification metadata and enforce retrieval controls
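The retrieval control in the implementation note above can be sketched as follows. This is a minimal illustration, not a specific product API: the classification levels, the `Document` structure, and the `retrieve` helper are all hypothetical names assumed for the example.

```python
from dataclasses import dataclass

# Illustrative classification levels, ordered lowest to highest sensitivity.
LEVELS = ["PUBLIC", "INTERNAL", "CONFIDENTIAL", "RESTRICTED"]

@dataclass
class Document:
    doc_id: str
    text: str
    classification: str  # classification metadata stored with the RAG index entry

def retrieve(candidates: list[Document], user_clearance: str) -> list[Document]:
    """Return only candidates at or below the user's clearance level (A 5.12)."""
    allowed = LEVELS[: LEVELS.index(user_clearance) + 1]
    return [d for d in candidates if d.classification in allowed]

docs = [
    Document("d1", "Public product overview", "PUBLIC"),
    Document("d2", "Board minutes", "RESTRICTED"),
]
# A user with INTERNAL clearance never sees the RESTRICTED document.
print([d.doc_id for d in retrieve(docs, "INTERNAL")])  # → ['d1']
```

In a real deployment this filter would run inside the RAG pipeline, after vector search but before any document text reaches the model context.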
A 5.19 and 5.20: Supplier Relationships
Controls 5.19 and 5.20 require that information security requirements be included in supplier agreements and that compliance with those requirements be monitored. AI providers are suppliers.
- Private LLM: the AI system runs on your existing infrastructure supplier relationships
- Public AI risk: ChatGPT, Copilot, and similar tools may lack the data processing terms required
- Required: documented supplier agreements that include information security requirements
- Implementation: include AI infrastructure providers in your supplier register with annual reviews
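The supplier-register check described in the implementation note could be automated along these lines. The register fields and supplier names are illustrative assumptions; your register will have its own schema.

```python
from datetime import date, timedelta

# Hypothetical supplier register entries; field names are illustrative only.
supplier_register = [
    {"name": "GPU hosting provider", "security_terms_signed": True,
     "last_review": date(2024, 3, 1)},
    {"name": "Model weights mirror", "security_terms_signed": False,
     "last_review": date(2023, 1, 15)},
]

def overdue_reviews(register, today, interval_days=365):
    """Flag suppliers missing security terms or past their annual review (A 5.19/5.20)."""
    flagged = []
    for s in register:
        no_terms = not s["security_terms_signed"]
        stale = today - s["last_review"] > timedelta(days=interval_days)
        if no_terms or stale:
            flagged.append(s["name"])
    return flagged

print(overdue_reviews(supplier_register, date(2024, 6, 1)))  # → ['Model weights mirror']
```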
A 5.23: Information Security for Cloud Services
This new 2022 control requires organisations to establish processes for cloud service use, including service acquisition, usage management, and exit strategies.
- Private LLM: cloud-hosted sovereign deployment on existing cloud service agreements
- Private LLM: on-premises deployment eliminates the cloud service relationship entirely
- Public AI risk: AI API use typically does not go through formal cloud service acquisition processes
- Required: AI tools included in cloud service register with risk assessment and review schedule
A 8.10: Information Deletion and A 8.12: Data Leakage Prevention
Information must be deleted in accordance with defined policies, and information leakage to external systems must be prevented for sensitive assets.
- Private LLM: deletion on your schedule with full data lifecycle control
- Private LLM: no information leakage to external AI providers by design
- Public AI risk: prompts containing sensitive data may not be deletable from provider systems on your schedule
- Required: data handling procedures documenting LLM processing and retention
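The "deletion on your schedule" point above could be enforced by a retention sweep like the following sketch. The record kinds, retention periods, and field names are assumptions for illustration; A 8.10 only requires that deletion follow your defined policy, whatever its parameters.

```python
from datetime import datetime, timedelta

# Illustrative retention policy per record kind (A 8.10).
RETENTION = {"chat_logs": timedelta(days=90), "rag_cache": timedelta(days=30)}

def expired_records(records, now):
    """Identify records past their retention period, ready for deletion."""
    return [r["id"] for r in records
            if now - r["created"] > RETENTION[r["kind"]]]

records = [
    {"id": "r1", "kind": "chat_logs", "created": datetime(2024, 1, 1)},
    {"id": "r2", "kind": "rag_cache", "created": datetime(2024, 5, 20)},
]
print(expired_records(records, datetime(2024, 6, 1)))  # → ['r1']
```

With a private deployment this sweep runs against storage you control; against a public provider there is no equivalent hook.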
A 5.36: Compliance With Information Security Policies
Organisations must monitor compliance with their own information security policies. If your policy says sensitive data must not leave the organisation's control, AI usage must be audited against this.
- Private LLM: full audit logging of all system interactions for policy compliance monitoring
- Private LLM: technical controls prevent information leaving the defined boundary
- Public AI risk: employee AI usage is often invisible to security monitoring
- Required: AI usage policy aligned to existing information security policies, with monitoring
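The audit-log monitoring point above can be made concrete with a simple pattern scan over logged prompts. The patterns shown are illustrative placeholders (a TFN-shaped number and a classification marking); real DLP rules would use your organisation's own identifiers and markings.

```python
import re

# Illustrative DLP patterns; real policies would define their own (A 5.36).
POLICY_PATTERNS = {
    "tfn": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),   # Australian Tax File Number shape
    "classified_marking": re.compile(r"\bCONFIDENTIAL\b"),
}

def policy_violations(audit_log):
    """Scan LLM audit-log entries for prompts matching restricted patterns."""
    hits = []
    for entry in audit_log:
        for rule, pattern in POLICY_PATTERNS.items():
            if pattern.search(entry["prompt"]):
                hits.append((entry["user"], rule))
    return hits

log = [
    {"user": "alice", "prompt": "Summarise this CONFIDENTIAL memo"},
    {"user": "bob", "prompt": "Draft a public blog post"},
]
print(policy_violations(log))  # → [('alice', 'classified_marking')]
```

This kind of check is only possible because the private deployment logs every interaction; the equivalent employee usage of a public tool is invisible to it.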
A 8.8: Management of Technical Vulnerabilities
Technical systems must be managed for vulnerabilities, including timely patching and security updates. AI model updates and patch management are covered by this control.
- Private LLM: model and infrastructure patching within your existing vulnerability management program
- Private LLM: vulnerability disclosure and patching cycle documented for ISMS evidence
- Required: AI infrastructure included in vulnerability scanning scope
- Required: model update and patching procedures documented in your change management process
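A minimal sketch of the patching requirement: compare deployed component versions against an approved baseline. The component names and versions are hypothetical; the point is that AI infrastructure can be checked by the same version-baseline logic as any other system under A 8.8.

```python
# Hypothetical deployed versions vs. approved patch baseline (A 8.8).
deployed = {"inference-server": "2.4.1", "embedding-service": "1.1.0"}
baseline = {"inference-server": "2.4.1", "embedding-service": "1.2.0"}

def parse(version: str) -> tuple:
    """Turn '1.2.0' into (1, 2, 0) for numeric comparison."""
    return tuple(int(x) for x in version.split("."))

def behind_baseline(deployed, baseline):
    """List components running below the approved minimum version."""
    return [name for name, ver in deployed.items()
            if parse(ver) < parse(baseline[name])]

print(behind_baseline(deployed, baseline))  # → ['embedding-service']
```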
How to Include AI in Your ISO 27001 ISMS
Bringing AI systems into ISMS scope requires adding them to your risk register, updating your asset inventory, and extending existing controls to cover the new technology.
AI Asset Inventory and Classification
Identify all AI tools in use across the organisation, classify each by the information it processes, and determine which fall within ISMS scope. This exercise typically reveals previously unknown public AI usage.
Risk Assessment for AI Systems
Conduct information security risk assessments for each AI system in scope, including risk of data leakage, supplier failure, and inappropriate use. Public AI platforms typically produce higher residual risk scores than private deployment.
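One common way to express the residual-risk comparison above is a likelihood-by-impact score reduced by control effectiveness. The scales and scores below are illustrative assumptions, not figures from any standard; your ISMS will define its own scoring method.

```python
# A minimal likelihood x impact scoring sketch; scales are illustrative.
def residual_risk(likelihood: int, impact: int, control_effect: int) -> int:
    """Score risk on a 1-5 likelihood/impact scale, reduced by control effectiveness."""
    return max(1, likelihood * impact - control_effect)

# Example: the same data-leakage risk scored for two architectures.
public_ai = residual_risk(likelihood=4, impact=5, control_effect=3)   # few controls available
private_llm = residual_risk(likelihood=2, impact=5, control_effect=6) # isolation, logging, RBAC
print(public_ai, private_llm)  # → 17 4
```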
Control Implementation and Gap Analysis
For each AI system in scope, map existing controls and identify gaps. Private LLM deployments designed with ISMS alignment can satisfy most controls. Public AI platforms typically require additional governance measures or risk acceptance.
Documentation, Audit Evidence, and Review Schedule
Update your Statement of Applicability, risk treatment plan, and supplier register to reflect AI systems. Establish a review schedule for AI risk assessments, typically aligned to your annual ISMS review cycle.
Building an ISMS-Aligned AI Deployment
A private LLM deployment designed from the start for ISMS alignment is significantly easier to certify and maintain than one retrofitted after the fact.
Technical Controls for ISMS Compliance
These technical controls, built into the private LLM architecture, directly satisfy relevant Annex A requirements.
- Complete audit logging for all system interactions (A 8.15, A 5.36)
- Role-based access controls by information classification (A 5.15, A 5.18)
- Data encryption at rest and in transit (A 8.24)
- Network isolation preventing unauthorised data egress (A 8.20)
- Automated backup and recovery testing (A 8.13)
Documentation for Certification Evidence
We provide documentation packages that support the AI system's inclusion in your ISO 27001 certification audit.
- System description and architecture diagram for auditor review
- Data flow documentation showing information processing boundaries
- Supplier agreements and due diligence for infrastructure providers
- Risk assessment records specific to the AI system
- Control implementation evidence for relevant Annex A controls
Related AI Solutions
LLM Security and Data Privacy
Technical detail on the security architecture that underpins ISO 27001 compliance for LLM deployments.
Understand LLM security →
APRA CPS 234 AI Compliance
For APRA-regulated financial institutions, the specific compliance requirements that overlap with ISO 27001 AI controls.
Understand APRA AI compliance →
Sovereign AI Australia
How Australian data sovereignty requirements interact with ISO 27001 supplier and cloud service controls.
Learn about sovereign AI →
Design Your AI Deployment for ISO 27001 Compliance From Day One
Talk to us about a private LLM deployment that satisfies your ISMS requirements and supports your certification audit with complete control evidence.