APRA's CPS 230 Operational Risk Management standard became effective on 1 July 2025. For APRA-regulated financial services entities — ADIs, insurers, RSE licensees — it significantly raises the bar on how operational risks are identified, managed, and governed. But its most consequential effect for many organisations is not on internal operations. It is on the technology supply chain.
What CPS 230 actually requires
CPS 230 requires regulated entities to maintain operational resilience — the ability to continue delivering critical operations through disruption. The standard goes further than its predecessors in three important ways:
Boards are personally accountable. CPS 230 makes the board responsible for approving the organisation's operational risk management framework and for ensuring that critical operations can be maintained. This is not a management-level accountability. If a critical operational system fails and the board cannot demonstrate it had adequate oversight of that system's resilience, that is a governance failure with real regulatory consequences.
Material service providers are explicitly in scope. Where a regulated entity uses a third-party service provider for critical operations — including technology platforms, AI services, automation tools, and voice platforms — the regulated entity must manage the operational risk that provider represents. The standard requires documented service provider risk assessments, contractual resilience obligations, and governance of the provider relationship.
AI and automation are not exempt. CPS 230 does not carve out emerging technology. Any system that plays a role in critical operations — whether it is an AI platform handling client communications, a workflow automation tool processing claims, or a voice platform managing after-hours enquiries — is subject to the same resilience and governance requirements as any other operational system.
Effective from 1 July 2025
CPS 230 applies to all APRA-regulated entities including ADIs, general insurers, life companies, and RSE licensees.
Personal, not delegable
Boards approve the operational risk management framework and are accountable for critical operation resilience — including the technology supply chain.
Material service providers
Any vendor providing services for critical operations must be assessed, contracted to resilience standards, and governed on an ongoing basis.
What this means for AI and automation vendors
For any financial services organisation using AI or automation in client-facing or operationally significant workflows, CPS 230 creates a direct obligation to assess and govern those vendors. The questions that matter are no longer just "does this product work?" and "what does it cost?" They are now:
- Can this vendor demonstrate controlled configuration — that changes to the AI's behaviour follow a governed process rather than autonomous adaptation?
- Does the vendor maintain a full audit trail of every interaction, decision, and configuration change?
- Where is client data processed and stored? Does the vendor's data residency model satisfy your data governance obligations?
- What are the vendor's incident detection, notification, and response commitments?
- Can the vendor produce evidence — not assertions — of the security controls they operate?
- What happens to your data and your operations if the vendor is disrupted or ceases to operate?
"A vendor that cannot answer these questions credibly is not just a poor product choice — under CPS 230, engaging them for critical operations is a governance failure that sits with the board."
The specific risk with AI platforms
AI platforms introduce a category of operational risk that traditional technology risk frameworks were not designed to address: autonomous behaviour. A conventional software system does what it is programmed to do. An AI system that learns, adapts, or updates its own behaviour based on operational data is a system whose behaviour six months after deployment may be meaningfully different from its behaviour at go-live — without a change management process, without a version record, and potentially without the regulated entity being aware.
Under CPS 230, this is not an acceptable operating model for a system involved in critical operations. The standard requires that operational risk be managed, not just monitored. An AI system that can modify its own behaviour is an unmanaged operational risk.
This is why Taidotech's design position on Sophie is not just a product feature — it is a response to a genuine regulatory requirement. Sophie does not self-modify in production. Every change to conversation flows, triage logic, alerting rules, and integration configuration follows a controlled process: proposed, tested in non-production, reviewed, and deployed with version control and rollback capability. For financial services clients, this is not optional governance — it is what CPS 230 requires of any material service provider operating AI in critical workflows.
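To make the governed lifecycle concrete, the sketch below models it in Python. This is an illustrative model only, not Sophie's actual implementation: the class and stage names (`ConfigChange`, `FlowConfig`, the proposed → tested → reviewed → deployed sequence) are assumptions drawn from the process described above. The point it demonstrates is that stage-skipping is structurally impossible, every transition is recorded with an actor and timestamp, and rollback is a first-class operation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Ordered lifecycle every configuration change must pass through.
STAGES = ["proposed", "tested", "reviewed", "deployed"]


@dataclass
class ConfigChange:
    """One governed change to AI behaviour (e.g. a triage rule update)."""
    description: str
    version: int
    stage: str = "proposed"
    history: list = field(default_factory=list)  # (from, to, actor, timestamp)

    def advance(self, to_stage: str, actor: str) -> None:
        """Move to the immediately next stage only; skipping is rejected."""
        if STAGES.index(to_stage) != STAGES.index(self.stage) + 1:
            raise ValueError(f"cannot jump from {self.stage!r} to {to_stage!r}")
        self.history.append(
            (self.stage, to_stage, actor, datetime.now(timezone.utc).isoformat())
        )
        self.stage = to_stage


@dataclass
class FlowConfig:
    """Production configuration: a version history that supports rollback."""
    versions: list = field(default_factory=list)  # deployed ConfigChange records

    def deploy(self, change: ConfigChange) -> None:
        if change.stage != "deployed":
            raise ValueError("only a fully reviewed change may be deployed")
        self.versions.append(change)

    def rollback(self) -> ConfigChange:
        """Revert production to the previous deployed version."""
        if len(self.versions) < 2:
            raise ValueError("no earlier version to roll back to")
        self.versions.pop()
        return self.versions[-1]
```

The design choice worth noting is that the audit record is a by-product of the control itself: a change cannot reach production without generating the evidence trail a CPS 230 assessor would ask for.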
What to ask your AI vendor before engaging them for critical operations
The following questions are a starting point for the vendor due diligence that CPS 230 requires. Any vendor that cannot answer them clearly and specifically should not be engaged for operations that could be classified as critical under your CPS 230 framework:
- Controlled configuration: How are changes to the AI's behaviour managed? Is there a change management process? Version control? Rollback capability? Who approves changes?
- Audit trail: What is logged? How is it stored? For how long? Can it be produced on request? Is it tamper-evident?
- Data residency: Where is data processed and stored? What sub-processors are used? Are any in jurisdictions that create data sovereignty concerns?
- Incident response: What is the vendor's incident detection and notification commitment? Within what timeframe will you be notified of a security incident affecting your data?
- Resilience: What is the vendor's uptime commitment? What happens to your data and operations if the vendor is disrupted?
- Certifications: What security certifications does the vendor hold or have on their roadmap? If certifications are on the roadmap but not yet achieved, what is the timeline and what evidence exists of progress?
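One term in the audit-trail question above deserves unpacking: "tamper-evident". A common way vendors achieve this is hash chaining, where each log entry's hash covers the previous entry's hash, so altering any historical record breaks every hash after it. The sketch below is a minimal illustration of that technique, not any particular vendor's logging implementation; the function names and record layout are assumptions for the example.

```python
import hashlib
import json

GENESIS_HASH = "0" * 64  # placeholder hash for the first entry


def append_entry(log: list, entry: dict) -> None:
    """Append an entry whose hash covers the previous entry's hash,
    chaining the records together."""
    prev_hash = log[-1]["hash"] if log else GENESIS_HASH
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev_hash": prev_hash, "hash": digest})


def verify_chain(log: list) -> bool:
    """Recompute every hash in order; any altered or reordered
    entry makes verification fail."""
    prev_hash = GENESIS_HASH
    for record in log:
        payload = json.dumps(record["entry"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True
```

A vendor claiming a tamper-evident audit trail should be able to explain their equivalent of `verify_chain` — how an assessor can independently confirm that the log produced on request is the log that was written at the time.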
The honest position on where most AI vendors stand
Most AI voice and automation vendors were not built with CPS 230 in mind. They were built for speed to market, for SME use cases, and for commercial environments where the consequences of a governance failure are limited to a poor customer experience rather than a regulatory breach. Their architecture reflects that — client data in multi-tenant environments with limited isolation, AI behaviour that adapts without controlled change management, and security documentation that is a marketing page rather than an evidence pack.
The vendors that will survive in the financial services market under CPS 230 are the ones that treat governance as an architectural requirement rather than a sales objection. For procurement teams and boards doing their CPS 230 due diligence, the distinction is not subtle — it is visible in the architecture, in the documentation, and in how the vendor responds to the questions above.