As we wrap up Levo’s launch week and mark another anniversary, it feels like the right time to reflect.
For every founder I know, anniversaries are a chance to check whether the company you set out to build is still solving the problem that first kept you up at night.
I still remember my early days as an engineering director, when my developers would hit a wall: a critical feature ready to ship, yet it sat idle for days while security teams ran scans meant for applications that didn’t resemble ours at all.
We were told to be patient and stay compliant, because vulnerabilities could cost us a breach. But when the scans finally finished, they almost always buried us under thousands of alerts, causing even more delay.
So the momentum, competitive edge, and revenue we lost were traded for a security posture that was no better than where we started.
That frustration wasn’t about speed alone; it was about legacy security making growth and security feel like a trade-off.
I founded Levo to change that. From day one, our goal wasn’t just to fill a technical gap. It was to re-architect how security fits into software development so it’s continuous, contextual, comprehensive, and proactive, and becomes a growth enabler instead of a blocker.
Our first stride toward that vision was API security, and over the past few quarters, we’ve expanded it to secure the next frontier of innovation: AI applications.
This year’s Launch Week celebrated that evolution. We introduced four new AI security modules built on our proven runtime DNA, and we doubled down on our core API capabilities so every innovation you ship (whether driven by APIs, MCP servers, or AI models) is secure by design.
It’s been incredible to see the original vision expand with this momentum, and we’re grateful to everyone who’s been part of the journey. If you missed any of the launches, here is a quick recap:
Day 1: AI Firewall
Our new AI Firewall protects custom, in‑house AI applications end‑to‑end. Deployed inline at the ingress point, it watches every prompt, retrieval, and tool call so you can defend against novel attacks like jailbreaks, prompt injection, model extraction, and sensitive‑data exfiltration. It provides 360° runtime visibility, policy‑driven blocking, and audit‑grade logging, allowing teams to roll out copilots, agents, and AI‑powered experiences without losing control or exposing proprietary data. In short, it turns AI from a risky experiment into a trusted service ready for production.
Day 2: AI Gateway
AI adoption doesn’t just come from building your own models; employees are already using third‑party LLM tools and AI‑enabled SaaS. The AI Gateway gives security and compliance teams a single pane of glass to manage that usage: it discovers which tools are in use, binds identities to activity, inspects prompts for sensitive data, enforces allowlists and quotas, centralizes credentials, and tracks spend. With lightweight policies written in YAML or Python, enterprises can allow generative AI where it adds value and block it where it creates risk, replacing blanket bans with governed freedom.
Day 3: MCP Discovery
As organizations experiment with autonomous agents, new infrastructure appears: Model Control Plane (MCP) servers that host tools and orchestrate agent actions. MCP Discovery continuously inventories these servers across laptops, clouds, and remote environments, differentiating approved from unapproved instances. It scores their risk based on exposure and behaviour, maps the data and privileges they access, and allows security teams to selectively block high‑risk servers. This gives enterprises the basic visibility they need to scale agents responsibly.
Day 4: MCP Security Testing
To take agents into production, you need more than an inventory; you need proof they’re safe. MCP Security Testing automatically validates MCP servers the way they will be used in ambiguous, chain‑of‑thought conversations that trigger tool calls and downstream actions. It checks for token mismanagement, privilege escalation, command injection, prompt injection, and other attack classes, producing reproducible traces and prioritized remediation guidance. This reduces approval cycles and makes agent rollouts predictable rather than perilous.
Day 5: Integration and Agentless Innovation
1. API Security integrations:
We’re making API security more collaborative by meeting DevSecOps teams where they already work. With Postman, teams can import existing collections into Levo to jumpstart inventories, and Levo continually updates those collections from live traffic, keeping docs accurate. With Checkmarx, Levo can build API inventories directly from code repositories and run exploit‑aware tests in CI/CD, unifying API security with the rest of the AppSec workflow.
2. AI Security integrations:
Many teams standardize AI traffic using frameworks such as LiteLLM and Portkey. Our new integrations plug Levo’s runtime visibility and guardrails into those frameworks so customers get sensitive‑data detection, prompt‑injection defence, and usage governance without rewriting their AI stack. That means faster adoption, fewer surprises, and a single control plane across AI and APIs.
3. Next-Gen Agentless API Discovery:
Finally, we’re doubling down on our core mission with two agentless discovery paths that remove deployment friction entirely. First, Fully Agentless API Discovery builds a complete API inventory without deploying agents or integrating with existing security agents, so teams have an immediate baseline in sensitive, high-traffic environments where visibility is often delayed.
Second, Web App Scanner offers a fast, outside-in starting point by logging into your web application, exploring real user flows, and mapping the API endpoints your app actually uses, without deploying anything in your environment.

.jpg)




