Levo.ai launches production security modules Read more

December 2, 2025

AI Security

AI Security and APRA CPS 234 Compliance

Photo of the author of the blog post
Buchi Reddy B

CEO & Founder at LEVO

Photo of the author of the blog post
Levo AI Security Research Panel

Research Team

AI Security and APRA CPS 234 Compliance

TL;DR

  • AI systems introduce new risk categories for financial institutions.
  • APRA CPS 234 requires control, visibility and assurance across all information assets including AI models and pipelines.
  • AI creates new attack vectors across APIs, training data, model inference, and third party systems.
  • Financial institutions must strengthen security controls for AI to meet CPS 234 expectations.
  • Levo provides continuous visibility and governance for API driven and AI supported systems.

Introduction

Financial institutions now rely on AI for credit scoring, fraud detection, customer analysis, underwriting and operational automation.

AI systems process sensitive information, make high impact decisions and operate across complex architectures.

As AI adoption grows, so do the risks. Threats such as data poisoning, model manipulation, bias exploitation, shadow APIs and insecure inference endpoints are becoming mainstream concerns.

APRA CPS 234 requires every information asset to be identified, protected and monitored.AI systems are now part of this obligation and institutions must understand how AI security fits into the broader compliance picture.

What AI Security Means in the Context of CPS 234

AI security is the practice of protecting AI models, data pipelines, inference endpoints and surrounding systems from threats that compromise integrity, confidentiality or availability.

CPS 234 does not explicitly mention AI, but it defines information assets broadly.

This means AI models, training data, embeddings, inference APIs, vector databases and third party AI services are all information assets that require controls.

In the context of CPS 234, AI security must include

  • Assurance of data quality
  • Protection against manipulation
  • Monitoring for anomalies
  • Secure API exposure
  • Governance of third party AI services
  • Secure lifecycle management

AI systems are now part of the critical infrastructure of financial institutions which raises the stakes for compliance.

New AI Risk Categories for CPS 234

AI introduces risks that traditional security frameworks did not anticipate. Institutions must be aware of the following categories

  • Model Exploitation - Attackers probe AI models through inference APIs to extract sensitive information or reverse engineer patterns.
  • Data Poisoning - Malicious data is introduced into training pipelines to corrupt model behaviour.
  • Prompt or Input Manipulation - AI systems can be tricked into producing harmful outputs or bypassing controls.
  • Adversarial Inputs - Specially crafted inputs cause incorrect model predictions.
  • Shadow AI and Uncontrolled Tools - Employees may use unauthorised AI tools, causing sensitive data exposure.
  • Third Party AI Dependencies - Institutions rely on models and endpoints hosted by vendors which introduces new supply chain risks.
  • API Driven AI Risks - AI systems operate through APIs which require strong authentication, validation and monitoring.

Every one of these risks intersects with CPS 234 obligations.

How AI Security Maps to CPS 234 Requirements

  1. Governance and Accountability - Boards must understand AI risks and incorporate them into information security strategies.
  2. Information Security Capability - Institutions need AI specific expertise to govern model development and deployment.
  3. Information Asset Identification and Classification- AI models and AI data pipelines must be added to the information asset register.
  4. Implementation of Controls - Controls must include
    1. Model access control
    2. API security
    3. Data validation
    4. Inference monitoring
    5. Version control
    6. Adversarial testing
  5. Security Testing - Testing must include model robustness tests, red team exercises and continuous API monitoring.
  6. Incident Management - AI failures or model compromise must be included in incident response plans.
  7. Third Party Management - Vendors providing AI services must meet APRA expectations even if hosting is offshore.

Impact of CPS 234 on AI Programs

The regulation forces financial institutions to

  • Document model risk
  • Secure training and inference APIs
  • Classify AI assets
  • Secure third party AI systems
  • Produce evidence of AI controls
  • Prove that data used for training is protected
  • Ensure AI outputs do not create harmful decisions
  • Implement continuous monitoring of model behaviour

CPS 234 transforms AI from a technology project into a compliance and security responsibility.

Challenges in Securing AI Under CPS 234

Financial institutions face several challenges

  • Lack of visibility into how data flows through AI pipelines
  • Unclear ownership of AI governance
  • Rapid iteration of models without proper controls
  • API sprawl across AI microservices
  • High dependency on third party AI tools
  • Poor documentation of AI decision logic
  • Limited ability to detect model drift or anomalous outputs

These gaps result in regulatory risk if not addressed.

Why APIs Are the Most Important Part of AI Security

AI systems rely on APIs for

  • Inference calls
  • Embedding generation
  • Vector database access
  • Upstream and downstream integrations
  • Mobile and web application interactions
  • Cloud hosted model endpoints

If APIs are insecure, the AI system is insecure. This makes API governance a foundational requirement for AI compliance.

How Levo Supports AI Security for CPS 234

Levo gives you purpose built, runtime AI security and compliance across your entire AI infrastructure. Through modules like Runtime AI Visibility, AI Monitoring & Governance, AI Threat Detection, AI Attack Protection and AI Red Teaming, Levo ensures that every component from AI agents, MCP servers, LLMs, vector stores to APIs is discovered, monitored and protected.

  • Comprehensive asset discovery: Levo surfaces all AI components in runtime including unsanctioned agents or third party AI services which institutions might otherwise miss. Levo
  • Context-aware data classification and flow tracking: Sensitive data traveling through inference pipelines, API calls, embeddings, or retrieval augmented generation (RAG) flows is traced and logged enabling classification and mapping to CPS 234 information assets. Levo
  • Real-time threat detection and protection: Levo detects risky behavior such as over-permissioned agents, prompt or tool misuse, data exfiltration attempts, runaway tasks or anomalous model behavior before they turn into security incidents. Levo
  • Adversarial testing and continuous assurance: Through AI Red Teaming, Levo helps simulate prompt injection, poisoning, collusion or other attacks giving assurance that controls and governance work not just in theory but in real world runtime situations.
  • Audit-ready evidence for compliance and reporting: All AI activity data flows, identity mappings, agent actions, policy enforcement events, are logged in a way that meets regulatory standards for CPS 234 reporting and evidence requirements.

Interested to See Levo in Action

If you want to secure AI models and APIs and meet CPS 234 obligations with confidence, you can book a demo with the Levo team. We will show you how Levo monitors AI systems, secures data flows and builds continuous compliance into your technology stack.

Conclusion

AI is reshaping financial services but it also introduces new risks that existing tools cannot manage. CPS 234 requires strong protection of every information asset including AI models and pipelines. Levo provides the visibility, automation and assurance needed to keep AI secure and compliant.

FAQs

Does APRA CPS 234 apply to AI systems

Yes. CPS 234 applies to every information asset including AI models, training pipelines, inference APIs and data stores. Even if CPS 234 does not explicitly use the words artificial intelligence, all AI related components fall under its requirements.

Are AI models considered information assets under CPS 234

Yes. Any model that processes or stores information is an information asset and must be identified, classified, protected and monitored.

How does AI increase compliance risk under CPS 234

AI introduces more complex data flows, higher reliance on sensitive data, new attack vectors and a larger surface area for misconfigurations. These risks require advanced controls and visibility.

Do third party AI vendors fall under CPS 234 oversight

Yes. If a regulated entity uses a vendor hosted AI model or AI service, the institution is still accountable for ensuring that the vendor meets CPS 234 expectations.

What AI specific controls are expected under CPS 234

Controls may include model access control, API protection, data validation, adversarial testing, drift monitoring, version control and evidence generation for audits.

Does CPS 234 require AI security testing

Yes. CPS 234 requires regular testing of controls. For AI this includes model robustness testing, inference monitoring and adversarial evaluations.

How does Levo help with AI security for CPS 234

Levo provides discovery of AI related APIs, monitors sensitive data flows, identifies anomalies in model usage, enforces governance and produces continuous compliance evidence.

ON THIS PAGE

We didn’t join the API Security Bandwagon. We pioneered it!