Gartner Alert: Agentic AI Will Trigger Security Incidents

The adoption of generative AI applications within companies is entering a new phase. It’s no longer just about assistants that write texts or summarize meetings, but about agents that consult internal data, call external tools, and make decisions within real business workflows. This is where much more serious risks start to emerge. Gartner predicts that by 2028, 25% of all enterprise-generative AI applications will experience at least five minor security incidents per year, compared to the 9% reported in 2025. The firm also adds that by 2029, 15% of these applications will have at least one serious incident annually, up from just 3% in 2025.

This warning is not coincidental. Gartner directly links this increased risk to the growth of agentic AI and the use of technologies such as Model Context Protocol (MCP), an open standard designed to connect models with tools, data, and external systems. According to the consultancy, the problem is that MCP was designed prioritizing interoperability, ease of use, and flexibility, rather than default security. This means many errors don’t appear in extreme scenarios but emerge during everyday use.

In a short period, MCP has become a central component of the new agent ecosystem. Anthropic introduced it in 2024 as an open standard for building bidirectional connections between data sources and AI-based tools, and OpenAI already supports MCP in parts of its developer stack, openly treating it as a protocol becoming a de facto standard. As this model spreads, attack surfaces grow: more MCP servers, third-party connectors, agents with permissions, and greater chances that misconfiguration could lead to data leaks, privilege escalations, or dangerous automation.

The risk is not in theory but in how agents connect

Gartner’s warning makes sense if you consider how these applications operate. A modern agentic app can read internal documentation, query a CRM, execute actions within a ticketing system, summarize content from an external website, and send results to another service. Each step may seem reasonable alone. The issue arises when these steps combine into a single workflow involving sensitive data access, ingestion of unreliable content, and external communication capabilities. Gartner describes this combination as a “no-go zone” or “prohibited area” due to its high exfiltration risk.

This is not an abstract concern. MCP’s official security documentation acknowledges that the protocol introduces specific risks and urges implementers to design robust consent and authorization workflows, apply appropriate access controls, clearly document security implications, and consider privacy by design. In short, neither the protocol nor the standard will protect the company by themselves if they connect agents to sensitive resources without a strong governance layer.

Adding to this is a second layer of risk: third-party components. Gartner warns about hidden vulnerabilities in MCP servers, libraries, and widely reused connectors. Here lies a classic security lesson that AI is bringing back into prominence: when a technology expands rapidly, the ecosystem of integrations usually outpaces mature review practices. This pattern is typical of any new platform, but now the cost of a mistake can be much higher because the agent not only views data but can also act upon it.

Prompt injection, supply chain, and poorly designed permissions

If there’s a threat that encapsulates this moment well, it’s prompt injection. OWASP ranks it among the main risks in LLM-based applications and defines it as manipulating the model’s responses through input designed to alter its behavior, including bypassing security measures. In an agentic system, such manipulation can go beyond a wrong answer; it can become an executed command, leaked data, or an action performed with unintended authorization.

For this reason, Gartner emphasizes that companies should not just inherit permissions meant for human users and apply them directly to agents as if nothing. The recommendation is to create authentication and authorization schemes specific to agents, with very limited privileges and formal reviews for each use case. Also, known mitigations against threats such as content injection, exposure of sensitive data, supply chain attacks, and privilege escalations—especially when the model “tries to help” but ends up doing something it shouldn’t—should be reinforced.

An additional relevant point from Gartner’s analysis is that complexity will increasingly shift to governance. As agents are deployed across more business domains, managing data access, regulatory compliance, and operational responsibilities will become more challenging. Therefore, the firm recommends that domain experts define usage rules and boundaries, with each MCP server clearly owned within the organization. Security reviews at the end are not enough; control must be integrated into the very design of the functional domain.

The goal is not to stop AI but to use it appropriately

Gartner’s forecast doesn’t suggest companies should abandon agentic AI; rather, it urges reducing unwarranted enthusiasm. In 2025 and 2026, many organizations have begun deploying agents driven by a mix of excitement, competitive pressure, and limited security discipline. The message now is quite clear: the more useful and connected a GenAI app is, the greater the need for formal reviews, well-defined boundaries, and a privilege-minimized architecture.

The core issue is that MCP and agents are bringing AI to the company’s operational heartbeat. No longer confined to experimental layers or laboratory demos, they now influence systems, data, and actual processes. This dramatically scales the problem. Minor incidents will likely increase first as a natural symptom of an immature ecosystem. The real challenge will be preventing this string of everyday mistakes from escalating into serious, costly, or regulatory failures.

Frequently Asked Questions

What exactly does Gartner predict regarding generative AI security in companies?
Gartner forecasts that by 2028, 25% of all enterprise GenAI applications will experience at least five minor security incidents annually, up from 9% in 2025. It also estimates that by 2029, 15% will have at least one serious incident per year, compared to the current 3%.

Why does MCP increase risk in agentic applications?
Because it enables interoperability among models, tools, and data without imposing default security measures. If an agent can read sensitive data, process unreliable content, and communicate externally in the same flow, the risk of data leakage and misuse rises significantly.

What is a minor incident in an enterprise AI application?
Gartner does not provide a fixed taxonomy here, but mentions data exposures, vulnerabilities in third-party components, permission errors, and failures related to patterns like content injection or excessive assistance from the model.

What does Gartner recommend to mitigate these risks?
Formalize security reviews for MCP use cases, prioritize low-risk scenarios, exclude dangerous combinations, develop authentication and authorization tailored to agents, and let domain experts define safeguards and ownership over MCP servers.

via: gartner

Scroll to Top