TL;DR
- Treat AISPM as a continuous program aligned to the NIST AI RMF functions: Govern, Map, Measure, and Manage.
- Your checklist must cover agents, MCP servers, RAG pipelines, model endpoints, identities, secrets, and data exposure paths, not just cloud configs.
- Use the OWASP LLM Top 10 as a practical risk taxonomy for controls and testing.
- Build evidence like an audit system: inventory, ownership, approvals, logs, and remediation SLAs. ISO/IEC 42001 is a strong governance anchor.
The 2026 AISPM checklist
Inventory and ownership
- Maintain a continuously updated inventory of AI applications, LLM apps, agents, MCP servers, RAG stores, model endpoints, and training pipelines.
- Assign an owner, environment, business purpose, data classification, and criticality to every AI asset.
- Record all third-party AI services and connectors used by each AI system.
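To make the inventory auditable rather than a spreadsheet that drifts, treat each asset as a typed record. A minimal sketch in Python, with hypothetical field names and values; map the fields onto whatever asset or CMDB system you already run:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAsset:
    """One row in the AI asset inventory; field names are illustrative."""
    asset_id: str
    asset_type: str            # e.g. "agent", "mcp_server", "rag_store", "model_endpoint"
    owner: str                 # accountable team or individual
    environment: str           # "prod", "staging", "dev"
    business_purpose: str
    data_classification: str   # e.g. "public", "internal", "confidential"
    criticality: str           # e.g. "low", "medium", "high"
    third_party_services: list[str] = field(default_factory=list)
    last_reviewed: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example entry for a customer-support agent (hypothetical values).
support_agent = AIAsset(
    asset_id="agent-cs-001",
    asset_type="agent",
    owner="support-platform-team",
    environment="prod",
    business_purpose="Draft replies to customer tickets",
    data_classification="confidential",
    criticality="high",
    third_party_services=["openai-api", "zendesk-connector"],
)
```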
Identity and access control
- Map identities end-to-end: upstream user, service identity, and executor identity for agent actions.
- Enforce least privilege for agent tools and MCP servers, including scoped tokens and expiration.
- Centralize secrets in a secrets manager, block hardcoded keys, and enforce rotation intervals.
- Require strong authentication for all model endpoints and tool gateways.
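Scoped, expiring credentials are what keep an over-permissive agent from becoming an over-permissive incident. A minimal in-memory sketch of the pattern, not a real token service; in production this lives in your identity provider or secrets manager:

```python
import secrets
import time

# In-memory token registry for illustration only.
_TOKENS: dict[str, dict] = {}

def issue_scoped_token(identity: str, scopes: list[str], ttl_seconds: int = 900) -> str:
    """Mint a short-lived token bound to an identity and an explicit scope list."""
    token = secrets.token_urlsafe(32)
    _TOKENS[token] = {
        "identity": identity,
        "scopes": set(scopes),
        "expires_at": time.time() + ttl_seconds,
    }
    return token

def authorize(token: str, required_scope: str) -> bool:
    """Reject missing, expired, or out-of-scope tokens (least privilege)."""
    record = _TOKENS.get(token)
    if record is None or time.time() > record["expires_at"]:
        return False
    return required_scope in record["scopes"]

# An agent tool credential limited to reading tickets, expiring in 15 minutes.
tok = issue_scoped_token("agent-cs-001", scopes=["tickets:read"])
assert authorize(tok, "tickets:read")
assert not authorize(tok, "tickets:write")  # out of scope -> denied
```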
Data governance for AI
- Track training, fine-tuning, and RAG corpora sources, including provenance and approval.
- Classify data allowed in prompts, tool context, retrieval, and outputs. Align to your org’s sensitive data classes.
- Implement output logging rules: what is logged, how long it is retained, and how it is redacted.
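Redaction has to happen before prompts and outputs reach the log pipeline. A small sketch with illustrative patterns; extend the rules to match your org's sensitive data classes:

```python
import re

# Illustrative redaction patterns; extend to your own sensitive data classes.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"), "<API_KEY>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
]

def redact(text: str) -> str:
    """Apply redaction rules before model inputs or outputs are written to logs."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

print(redact("Reply sent to jane.doe@example.com using key sk-abcdefghijklmnopqrstuv"))
# -> "Reply sent to <EMAIL> using key <API_KEY>"
```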
RAG and embeddings controls
- Validate ingestion sources, strip active content where applicable, and maintain a corpus change log.
- Apply access control to the vector store and retrieval layer, not just the AI app.
- Monitor retrieval volume anomalies and out-of-domain retrieval patterns.
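Enforcing authorization at the retrieval layer means a prompt-injected agent cannot widen its own access by asking for more context. A sketch of classification-aware filtering, assuming each stored chunk carries a classification label (names and levels are illustrative):

```python
# Classification order from least to most sensitive (illustrative).
LEVELS = {"public": 0, "internal": 1, "confidential": 2}

def filter_retrieved(chunks: list[dict], caller_clearance: str) -> list[dict]:
    """Drop retrieved chunks the calling identity is not cleared to see."""
    max_level = LEVELS[caller_clearance]
    # Unknown or missing labels default to "most sensitive" and are dropped.
    return [c for c in chunks if LEVELS.get(c.get("classification"), 99) <= max_level]

retrieved = [
    {"text": "Public product FAQ entry", "classification": "public"},
    {"text": "Internal pricing playbook", "classification": "confidential"},
]
print(filter_retrieved(retrieved, caller_clearance="internal"))
# Only the public chunk is returned.
```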
Agent and tool governance
- Maintain a tool catalog: every tool an agent can call, what it does, required permissions, and blast radius.
- Require explicit approval for any tool that can mutate state, such as ticket creation, payments, or database writes.
- Add guardrails for high-risk actions: step-up auth, human approval, or policy blocks.
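The catalog and the guardrails can live in the same policy check. A sketch with hypothetical tool names; the point is deny-by-default for uncataloged tools and explicit approval for anything that mutates state:

```python
# Illustrative tool catalog: whether the tool mutates state and which
# guardrail applies before an agent may call it.
TOOL_CATALOG = {
    "search_kb":     {"mutates_state": False, "guardrail": "none"},
    "create_ticket": {"mutates_state": True,  "guardrail": "human_approval"},
    "issue_refund":  {"mutates_state": True,  "guardrail": "step_up_auth"},
}

def check_tool_call(tool_name: str, approvals: set[str]) -> bool:
    """Block unknown tools and enforce guardrails on state-mutating ones."""
    entry = TOOL_CATALOG.get(tool_name)
    if entry is None:
        return False  # not in the catalog -> deny by default
    if not entry["mutates_state"]:
        return True
    return entry["guardrail"] in approvals

assert check_tool_call("search_kb", approvals=set())
assert not check_tool_call("issue_refund", approvals=set())
assert check_tool_call("issue_refund", approvals={"step_up_auth"})
```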
MCP server governance
- Inventory MCP servers as first-class assets and require authentication, authorization, and least-privilege tokens.
- Review MCP server configurations and code paths as you would any privileged integration service.
- Log and audit tool calls: who requested, what tool, what arguments, what outcome.
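One structured record per tool call is what makes the audit question answerable later. A minimal sketch of such a record; in practice these events flow to your SIEM:

```python
import json
from datetime import datetime, timezone

def audit_tool_call(requester: str, tool: str, arguments: dict, outcome: str) -> str:
    """Emit one structured audit record per MCP tool call: who requested it,
    which tool, with what arguments, and the outcome."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "requester": requester,
        "tool": tool,
        "arguments": arguments,   # consider redacting sensitive argument values
        "outcome": outcome,       # e.g. "allowed", "denied", "error"
    }
    return json.dumps(record)

print(audit_tool_call(
    requester="agent-cs-001",
    tool="create_ticket",
    arguments={"priority": "high"},
    outcome="allowed",
))
```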
Model endpoint and inference controls
- Enforce endpoint authentication, rate limiting, and request validation.
- Restrict model usage by role and environment.
- Monitor for unusual invocation patterns, repeated retries, and abnormal error bursts.
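A sliding-window check is often enough to surface invocation and error bursts before they show up in a bill or an incident. A sketch with illustrative thresholds; tune them per endpoint and environment:

```python
import time
from collections import deque

class InvocationMonitor:
    """Flag callers whose request or error rate spikes within a sliding window."""

    def __init__(self, window_seconds: int = 60, max_calls: int = 100, max_errors: int = 10):
        self.window = window_seconds
        self.max_calls = max_calls
        self.max_errors = max_errors
        self.calls = deque()  # (timestamp, is_error) pairs

    def record(self, is_error: bool = False) -> list[str]:
        """Record one invocation and return any alerts it triggers."""
        now = time.time()
        self.calls.append((now, is_error))
        # Evict entries that have fallen out of the window.
        while self.calls and self.calls[0][0] < now - self.window:
            self.calls.popleft()
        alerts = []
        if len(self.calls) > self.max_calls:
            alerts.append("invocation_burst")
        if sum(1 for _, err in self.calls if err) > self.max_errors:
            alerts.append("error_burst")
        return alerts
```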
Pipeline and supply chain controls
- Maintain model artifact lineage: datasets, code version, parameters, and evaluation results.
- Scan dependencies and images, and enforce secure build practices aligned to NIST SSDF principles.
- Require integrity controls for artifacts (signing, controlled registries, access logs).
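Lineage plus an integrity fingerprint gives you something to verify at deploy time, not just a record to read after the fact. A sketch using a plain SHA-256 digest; a production setup would typically add artifact signing (for example with a tool such as Sigstore's cosign) and a controlled registry:

```python
import hashlib
import json

def fingerprint(path: str) -> str:
    """SHA-256 digest of a model artifact, recorded at build time and
    re-checked before deployment."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def lineage_record(artifact_path: str, dataset_ids: list[str], code_commit: str,
                   params: dict, eval_results: dict) -> str:
    """One lineage entry per released model artifact (fields are illustrative)."""
    return json.dumps({
        "artifact_sha256": fingerprint(artifact_path),
        "datasets": dataset_ids,
        "code_commit": code_commit,
        "parameters": params,
        "evaluation": eval_results,
    })
```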
LLM risk taxonomy mapping
- Map controls to the OWASP LLM Top 10 categories, especially prompt injection and sensitive data disclosure.
- Use MITRE ATLAS for threat modeling and red-team alignment.
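A simple control-to-category map keeps the taxonomy from staying a slide. A sketch with abbreviated category names and illustrative controls; align the names and IDs with the version of the OWASP LLM Top 10 your program adopts:

```python
# Illustrative mapping of risk categories to implemented controls.
CONTROL_MAP = {
    "Prompt Injection": ["input/context filtering", "tool-call guardrails",
                         "retrieval source validation"],
    "Sensitive Information Disclosure": ["output redaction",
                                         "data classification for prompts and retrieval"],
    "Excessive Agency": ["tool catalog with least privilege",
                         "human approval for state-mutating tools"],
    "Supply Chain": ["artifact signing", "dependency and image scanning"],
}

def coverage_gaps(implemented: set[str]) -> dict[str, list[str]]:
    """Return the controls still missing per risk category."""
    return {
        risk: [c for c in controls if c not in implemented]
        for risk, controls in CONTROL_MAP.items()
        if any(c not in implemented for c in controls)
    }

print(coverage_gaps({"output redaction", "artifact signing"}))
```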
Monitoring, detection, and response
- Enable telemetry at the AI app, agent, MCP server, retrieval, and model endpoint layers.
- Create incident playbooks for prompt injection, tool abuse, data leakage, and credential compromise.
- Align operational deployment guidance with CISA’s secure deployment best practices where applicable.
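Cross-layer correlation only works if every layer emits events in a shared schema, so a suspected prompt injection in retrieval can be tied to the tool call it triggered. A minimal sketch of such an event; field names are illustrative:

```python
import json
from datetime import datetime, timezone

LAYERS = {"app", "agent", "mcp_server", "retrieval", "model_endpoint"}

def emit_event(layer: str, event_type: str, detail: dict) -> str:
    """Emit a telemetry event in a shared schema across all AI layers."""
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "layer": layer,
        "event_type": event_type,   # e.g. "prompt_injection_suspected", "tool_denied"
        "detail": detail,
    })

print(emit_event("retrieval", "out_of_domain_query", {"asset_id": "rag-kb-007"}))
```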
Remediation and governance rhythm
- Define SLAs by severity and route findings to owners automatically.
- Track closure rates, recurring issues, and policy exceptions.
- Use NIST AI RMF functions to structure program reporting and continuous improvement.
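SLAs only bite if findings pick up an owner and a due date automatically. A sketch with illustrative severity tiers; set the numbers to match your own risk appetite:

```python
from datetime import datetime, timedelta, timezone

# Illustrative remediation SLAs in days, by severity.
SLA_DAYS = {"critical": 2, "high": 7, "medium": 30, "low": 90}

def route_finding(severity: str, owner: str) -> dict:
    """Attach a due date from the SLA table and route the finding to its owner."""
    due = datetime.now(timezone.utc) + timedelta(days=SLA_DAYS[severity])
    return {"owner": owner, "severity": severity, "due": due.date().isoformat()}

print(route_finding("high", owner="support-platform-team"))
```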