Zscaler focuses on the enterprise “AI footprint” with a new security suite

The race to deploy generative AI and, above all, agentic AI in organizations faces an unglamorous challenge: security is falling behind. With each new copilot, each model API, each agent automating tasks, and each SaaS integration, the traditional perimeter blurs further… and visibility becomes harder to maintain.

This is the starting point of the announcement Zscaler made public on January 27, 2026: a set of innovations grouped under its AI Security Suite, aimed at letting IT and security teams discover, classify, and govern how AI is actually used within the organization, without hindering adoption.

The new vulnerability: AI everywhere… and no one sees the whole picture

In the announcement, the company points to a pattern every administrator and CISO will recognize: most companies lack a complete inventory of which AI tools are in use (shadow GenAI, development environments, AI embedded in SaaS, models, agents, and infrastructure). Without that inventory, exposure, data access, and risk cannot be measured accurately.
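
How does a team even start building that map? Short of a vendor platform, one common bootstrap is mining outbound proxy logs. Below is a minimal sketch of that idea, assuming a hypothetical CSV export with `user` and `dest_host` columns and a hand-maintained list of known AI service domains; it illustrates the discovery concept, not Zscaler's tooling.

```python
# Minimal sketch: flag potential shadow-AI usage from outbound proxy logs.
# Assumes a CSV export with "user" and "dest_host" columns and a
# hand-maintained domain list -- both hypothetical, not a Zscaler API.
import csv
from collections import defaultdict

KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "generativelanguage.googleapis.com",
    "api.anthropic.com",
}

APPROVED = {"api.openai.com"}  # services already sanctioned by IT

def shadow_ai_report(log_path: str) -> dict[str, set[str]]:
    """Map each unapproved AI domain to the set of users reaching it."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            if host in KNOWN_AI_DOMAINS and host not in APPROVED:
                hits[host].add(row["user"])
    return hits

if __name__ == "__main__":
    for domain, users in shadow_ai_report("proxy_log.csv").items():
        print(f"{domain}: {len(users)} distinct users")
```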

Compounding this mapping gap is a second problem: AI traffic and flows don't behave like traditional web traffic. According to Zscaler, "non-human" patterns and new protocols make it difficult to apply policies and controls with conventional tools.

In parallel, Zscaler ties the announcement to a striking data point from its ThreatLabz 2026 AI Security Report: the firm claims that, in its analysis, "most" enterprise AI systems could be compromised within 16 minutes, with critical flaws present in 100% of the systems studied.

Three fronts: inventory, secure access, and lifecycle protection

The suite is structured around three use cases that, in practice, cover the "before, during, and after" of AI adoption:

  • AI Asset Management: inventory of applications, models, agents, infrastructure, and usage, aimed at detecting shadow AI and prioritizing risks based on the data each service touches.
  • Secure Access to AI: Zero Trust controls to enable approved services, with inline inspection and prompt classification to reduce data leaks and misuse without sacrificing productivity (see the prompt-filtering sketch after this list).
  • Secure AI Infrastructure and Apps: protection of AI development from “build to runtime,” including automated red teaming, prompt hardening, and runtime safeguards.
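
What does "prompt classification" look like at its simplest? Here is a minimal sketch of a pre-submission filter that redacts obvious PII patterns before a prompt leaves the organization. The regexes are deliberately crude, illustrative assumptions; real inline DLP relies on far richer classification.

```python
# Minimal sketch of a prompt pre-filter: redact obvious PII patterns before
# a prompt is sent to an external model. The regexes are crude examples;
# production DLP uses far richer classifiers.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Return the sanitized prompt and the list of redacted categories."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        prompt, n = pattern.subn(f"[{label} REDACTED]", prompt)
        if n:
            found.append(label)
    return prompt, found

clean, flags = redact(
    "Summarize the complaint from ana.perez@example.com "
    "about IBAN ES9121000418450200051332."
)
print(flags)   # ['EMAIL', 'IBAN']
print(clean)
```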

In more practical terms: Zscaler aims to turn the chaos of tools, APIs, extensions, and agents into a sort of live AI CMDB, with relationships, dependencies, and data context.
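
To make the "live AI CMDB" idea concrete, here is a minimal sketch of what an asset record and a naive risk prioritization could look like. The field names and scoring heuristic are illustrative assumptions, not Zscaler's schema.

```python
# Illustrative sketch of an AI asset record for a "live AI CMDB".
# Field names and the risk heuristic are assumptions, not Zscaler's schema.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str                       # e.g. "GitHub Copilot", "internal RAG agent"
    kind: str                       # "app" | "model" | "agent" | "mcp_server"
    owner: str                      # accountable team or user
    depends_on: list[str] = field(default_factory=list)    # other asset names
    data_classes: list[str] = field(default_factory=list)  # e.g. "PII", "source_code"
    sanctioned: bool = False

    def risk_score(self) -> int:
        """Toy prioritization: sensitive data plus shadow usage scores highest."""
        score = len(self.data_classes)
        if not self.sanctioned:
            score += 2
        if self.kind in ("agent", "mcp_server"):  # can act, not just answer
            score += 1
        return score

inventory = [
    AIAsset("GitHub Copilot", "app", "dev-platform",
            data_classes=["source_code"], sanctioned=True),
    AIAsset("ticket-bot", "agent", "unknown",
            depends_on=["internal MCP server"], data_classes=["PII", "tickets"]),
]
for asset in sorted(inventory, key=AIAsset.risk_score, reverse=True):
    print(asset.name, asset.risk_score())
```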

MCP, agents, and security arriving late

One of the most relevant details for systems teams is that Zscaler explicitly includes elements such as MCP (Model Context Protocol) servers, agents, and models in its inventory, alongside the underlying infrastructure. This points to an emerging scenario: companies beginning to deploy "layers" that connect agents and tools to internal and external resources programmatically.

In this context, the company also mentions expanding capabilities with an MCP gateway for secure automation and the use of AI Deception to divert and neutralize "model-based" attacks.
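
Zscaler hasn't published the gateway's internals, but the concept is easy to illustrate. MCP tool invocations travel as JSON-RPC 2.0 "tools/call" requests, so a gateway sitting between agents and MCP servers can enforce an allowlist on which tools may be invoked. The sketch below assumes a hypothetical allowlist and error response; it shows the idea, not the product.

```python
# Conceptual sketch of the kind of policy check an MCP-aware gateway could
# apply. MCP tool invocations are JSON-RPC 2.0 "tools/call" requests; the
# allowlist and deny response here are illustrative assumptions, not
# Zscaler's implementation.
import json

ALLOWED_TOOLS = {"search_docs", "read_ticket"}  # hypothetical approved tools

def filter_request(raw: str) -> str:
    """Pass through approved requests; replace others with a JSON-RPC error."""
    msg = json.loads(raw)
    if msg.get("method") == "tools/call":
        tool = msg.get("params", {}).get("name")
        if tool not in ALLOWED_TOOLS:
            return json.dumps({
                "jsonrpc": "2.0",
                "id": msg.get("id"),
                "error": {"code": -32000,
                          "message": f"tool '{tool}' blocked by policy"},
            })
    return raw  # forward unchanged to the upstream MCP server

blocked = filter_request(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "delete_repo", "arguments": {}},
}))
print(blocked)
```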

And here’s an important nuance: the debate is no longer just “block or allow ChatGPT,” but how to govern the conversations, actions, and permissions of systems that act with some autonomy (or at least automate steps that were previously manual).

Usage metrics: more AI, more blocking, and more data at stake

Beyond marketing claims, the ThreatLabz report serves as a trend thermometer. On its campaign page for the 2026 report, Zscaler reports observing 1 trillion AI/ML transactions in 2025, 91% year-over-year growth. The company also states that its platform identified roughly 3,400 applications generating AI/ML-related traffic.

Two figures stand out for anyone managing proxies, CASB, or DLP: Zscaler indicates that 39% of AI/ML transactions were blocked (for data protection or internal policy reasons), and it estimates a total of 18,000 TB of data transferred via AI/ML apps.

Regarding applications, the same report highlights frequently blocked transactions on well-known enterprise apps such as Grammarly, GitHub Copilot, and ChatGPT.

Governance: NIST, EU AI Act, and “reporting for leadership”

Zscaler frames this shift as a response to the need for governance: aligning security programs with frameworks such as the NIST AI Risk Management Framework and the EU AI Act, along with providing executive-level reports on GenAI usage.

This ties into a distinctly European reality: AI is no longer just a “data team project” but a matter that touches compliance, risk, audit, legal, and privacy. For systems and security teams, the practical requirement is clear: if you cannot explain which models are used, who accesses them, and what data goes in and out… then the problem isn’t AI itself; it’s the lack of control.


Frequently Asked Questions

What is “shadow AI” and why does it concern IT?
It’s the use of AI tools without corporate approval or control. It typically carries risks of data leakage, non-compliance, and a lack of traceability over what information is shared with external models or services.

How does agentic AI change the security approach?
Because it doesn’t just “respond”: it can perform actions (consult systems, trigger tasks, modify tickets, access repositories). This increases the potential impact of permission abuse, malicious prompts, and insecure integrations.

What does an “AI footprint” inventory offer over traditional category-based blocking?
Generic blocking can hinder productivity or leave gaps. An inventory with dependencies lets you see what is actually used and which data each service touches, and apply fine-grained policies (by user, app, data type, context, and so on).

Which controls are most critical to prevent leaks when using GenAI in a business?
Usage visibility, data classification, inline inspection, clear policies (which tools are permitted, for which use cases, and with what data types), plus specific safeguards for prompts and outputs when workflows require them.

via: zscaler
