The accelerated adoption of Artificial Intelligence in business environments is opening a new era of cloud risk. It’s not just about more workloads, more compute, or more data: it’s about more entry points, more dependencies, and more automation… for attackers too. This is the picture painted by the State of Cloud Security Report 2025 from Palo Alto Networks, which highlights a “massive” expansion of the attack surface driven by AI and a growing gap between deployment speed and the actual capacity to secure what goes into production.
The most striking figure in the report is also the most uncomfortable: 99% of surveyed organizations have experienced at least one attack on AI applications and services in the past year. The conclusion is clear: AI systems have shifted from experimental “side projects” to a top target, especially as they become integrated into critical processes and connect with APIs, identities, and automation flows within the cloud.
When AI goes into production, the cloud becomes the battlefield
The report links the issue to a clear business reality: as cloud infrastructure grows to host AI workloads, that infrastructure itself becomes a target. And not just because of the value of the data, but because of the operational complexity: models, pipelines, repositories, permissions, endpoints, and services multiply to sustain generative AI and, increasingly, “agentic” AI that performs actions.
Within this context, the report emphasizes a phenomenon directly impacting security teams: the rise of “vibe coding”, AI-assisted development that speeds up code production but also increases the volume of releases and the risk of insecure software slipping through. According to the data shared, 99% of respondents use this approach, but the pace of code generation often outstrips review and remediation capacity.
This creates a dangerous equation. Of teams that release code weekly (52%), only 18% are confident they can fix vulnerabilities at the same pace. In security, that difference is not a trivial detail: it’s a debt accumulator. What isn’t fixed promptly stays in production, integrates with other services, and ends up expanding the attack surface “layer by layer”.
APIs, identities, and lateral movement: the new risk triangle
The report points to a tactical shift: attackers are pivoting toward core cloud layers, with a particular focus on API infrastructure, identity, and lateral movement.
- APIs at the center of the target. The report notes a 41% increase in attacks on APIs. The logic is simple: agentic AI and many modern architectures rely on APIs to operate, integrate services, query data, or perform actions. More APIs typically mean more endpoints, permissions, and opportunities for abuse if there’s no inventory, robust authentication, and fine access control.
- Identity remains the weakest link. 53% of respondents recognize that overly permissive IAM (identity and access management) practices are among their top challenges. In practice, this translates to excessive permissions, reused credentials, overly broad roles, or poorly governed machine identities. When AI is involved, the problem scales: agents, services, and pipelines require permissions… and any excess becomes a quick path to credential theft or exfiltration.
- Lateral movement persists. 28% cite unrestricted network access between cloud workloads as a growing threat. This classic pattern sees a “small” intrusion escalate into a major incident when attackers can move freely within the environment. In the cloud, where inter-service connectivity often offers operational advantages, the risk is that it becomes a highway for adversaries.
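The common thread in this risk triangle is deny-by-default, least-privilege access for machine identities. As a minimal illustrative sketch (not taken from the report; the agent names and scope labels below are hypothetical), a permission check for AI agents calling internal APIs could look like:

```python
# Illustrative sketch: deny-by-default scope checks for non-human identities
# (AI agents, services, pipelines) calling internal APIs.
# The identities and scopes here are made-up examples, not a real product API.

AGENT_SCOPES = {
    # Each machine identity gets only the narrow scopes it actually needs.
    "report-summarizer-agent": {"documents:read"},
    "ticket-triage-agent": {"tickets:read", "tickets:update"},
}

def is_allowed(identity: str, required_scope: str) -> bool:
    """Deny by default: unknown identities and ungranted scopes are rejected."""
    return required_scope in AGENT_SCOPES.get(identity, set())

# An overly broad "*" role is exactly the excess the report warns about;
# this model forces every permission to be granted explicitly.
assert is_allowed("report-summarizer-agent", "documents:read")
assert not is_allowed("report-summarizer-agent", "documents:delete")
assert not is_allowed("unknown-agent", "tickets:read")
```

The same explicit-grant principle applies whether the enforcement point is an API gateway, an IAM policy, or network segmentation rules between workloads: anything not listed is refused.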
Too many tools, too little speed: unifying cloud and SOC under pressure
Another recurring point in the report is the real cost of fragmentation. It argues that multi-vendor complexity and tool dispersion create blind spots and slow response times.
An example difficult to ignore: organizations report managing an average of 17 cloud security tools from five vendors, fragmenting context and complicating correlation. When the context breaks down, response times stretch: the report states that 30% of teams take more than a day to resolve an incident.
In this scenario, an operational consensus emerges: 97% are prioritizing consolidating their security footprint, and 89% believe cloud security and application security should be fully integrated with the SOC to be effective. This is not just a slogan but a response to reality: if an adversary operates at machine speed, defenders can’t rely on manual workflows and tool silos.
The accompanying message emphasizes that dashboards showing risks are no longer enough if those risks cannot be mitigated quickly. As a Palo Alto Networks executive put it, teams need more than visibility: they must operate faster than the attacker.
The 2026 challenge: end-to-end security and fundamental discipline
Beyond the company’s commercial focus, the report delivers a key message for any organization: AI not only adds capabilities; it also increases the attack surface. This surface expands on three fronts simultaneously: code (faster and more frequent), cloud (more services and interconnections), and operations (more alerts and greater automation dependence).
The critical point is that many organizations are adopting AI before nailing down the basics: API inventory, permission controls, segmentation, governance of non-human identities, and a remediation cycle that keeps pace with deployment. If AI pushes businesses to move faster, security can’t be a brake: it needs to be rearchitected to keep up at the same speed.
Frequently Asked Questions
What does “attack surface expansion” by AI in the cloud mean?
It means deploying AI systems multiplies APIs, identities, services, and connections between workloads, creating more potential entry or abuse points if not properly controlled.
Why are attacks on APIs increasing in agentic AI environments?
Because AI agents and services depend on APIs to query data and perform actions. An incomplete inventory, weak authentication, or excessive permissions turn APIs into a top target.
What IAM issues are most dangerous when deploying AI in the cloud?
Permissive practices such as overly broad roles, mismanaged credentials, lack of least privilege, and poor governance over machine identities (service accounts, tokens, keys) in AI pipelines.
How does “vibe coding” with AI impact software security?
It accelerates code generation and deployment but can introduce flaws or insecure configurations faster than teams can review and fix them, increasing security debt in production.

