Palo Alto Networks and Google Cloud strengthen their partnership to safeguard the new wave of AI-powered applications

The race to deploy agentic artificial intelligence (AI)—capable of executing tasks autonomously and chaining actions across real systems—is opening up a new frontier: security. In this context, Palo Alto Networks and Google Cloud have announced a significant expansion of their strategic partnership with a clear goal: enabling companies to develop and operate AI workloads in the cloud with built-in security controls “from code to cloud,” rather than as an afterthought.

This announcement is supported by a fact that many organizations already see as a point of no return. Palo Alto Networks’ annual “State of Cloud Security Report 2025” states that 99% of surveyed organizations have experienced at least one attack against their AI systems in the past year. In other words: AI adoption is accelerating, but so are the attack surface and the frequency of incidents.

From AI Promise to Operational Reality: Security, Friction, and Boardroom Pressure

The expansion of the alliance addresses a question that, according to the leaders involved, is already being raised in boardrooms: how to capture the value of AI without turning it into a systemic risk. Palo Alto Networks President BJ Jenkins frames the challenge as friction between development and security, arguing that protection must be a native part of application development. Google Cloud President and CRO Matt Renner describes the initiative as a way to ensure that joint customers can safeguard critical AI infrastructure and develop agents with security “from the start.”

The background is both technical and cultural. The same Palo Alto Networks report warns of the rise of AI-assisted development (“vibe coding”) and highlights the difficulty security teams face in reviewing, prioritizing, and fixing vulnerabilities at the pace new software is deployed. The practical consequence is familiar to any CISO: faster development without equivalent controls often leads to increased exposure.

Prisma AIRS as the Backbone: “End-to-End Security” for AI Workloads

The core of the agreement revolves around Prisma AIRS, Palo Alto Networks’ AI security platform, which will be integrated to protect workloads and data on native Google Cloud AI services, including Vertex AI and Agent Engine.

The proposal is described as “code-to-cloud” coverage, aiming to encompass multiple layers in practice:

  • AI Posture Management: visibility and security posture assessment in AI environments (configurations, exposure, controls).
  • AI Runtime Security: real-time defense during AI workload execution.
  • AI Agent Security: specific controls for autonomous systems (agents) that interact with tools, APIs, and business workflows.
  • AI Red Teaming: proactive testing to identify weaknesses before exploitation.
  • AI Model Security: vulnerability scanning and risk assessment related to models.

This focus is critical: in agent-based environments, failures are not just about “a compromised server,” but can involve automated actions accessing sensitive data, executing operations on internal systems, or amplifying errors at great speed. Therefore, the message being reinforced is that security must accompany the agent from design through training and configuration to deployment.
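To make the idea of agent-specific controls concrete, here is a minimal, hypothetical guardrail: before an agent invokes a tool, a policy layer checks the action against an allowlist and screens the payload for obviously sensitive data. This is an illustrative sketch only; the function, tool names, and patterns are assumptions, not the Prisma AIRS API.

```python
# Hypothetical guardrail for agent tool calls: allowlist + sensitive-data check.
# Illustrative only -- these names are NOT the Prisma AIRS API.
import re

ALLOWED_TOOLS = {"search_docs", "create_ticket"}  # tools this agent may call
# Crude screen for a 16-digit card number or an inline credential.
SENSITIVE = re.compile(r"\b\d{16}\b|\bpassword\s*=", re.IGNORECASE)

def authorize_tool_call(tool: str, payload: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent action."""
    if tool not in ALLOWED_TOOLS:
        return False, f"tool '{tool}' not in allowlist"
    if SENSITIVE.search(payload):
        return False, "payload appears to contain sensitive data"
    return True, "ok"

# A runtime layer would evaluate this before every tool invocation:
print(authorize_tool_call("delete_records", "id=42"))            # blocked: not allowlisted
print(authorize_tool_call("create_ticket", "password=hunter2"))  # blocked: sensitive data
print(authorize_tool_call("search_docs", "cloud firewall docs")) # allowed
```

The point of the sketch is the placement of the check: it runs at execution time, between the agent’s decision and the real system, which is where runtime and agent-security controls have to sit.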

Software Firewalls and SASE: Securing the Perimeter Where There Is No Longer a Perimeter

The expansion also incorporates traditional—but recontextualized—cloud security components:

  1. Software Next-Generation Firewalls (Software NGFW)
    Palo Alto Networks’ VM-Series firewalls are positioned to protect cloud and virtualized environments with deep traffic inspection and threat prevention, with tighter integrations into Google Cloud to maintain consistent policies and accelerate adoption without compromising control.
  2. AI-Powered SASE Platform
    With Prisma SASE and Prisma Access running over Google’s network, the promise is twofold: improving user experience (performance and access) and enforcing consistent policies as employees, locations, and devices connect to cloud-based AI applications and services. Use of Google Cloud Interconnect for multi-cloud WAN infrastructure with security consistency is also mentioned.

Overall, the approach seeks to address a recurring operational challenge: each new layer (AI, multi-cloud, agents, APIs) often introduces new consoles, rules, and blind spots. The announcement emphasizes a “simplified and unified experience,” with “validated” solutions to reduce integration friction.

A Partnership with Commercial Traction and Internal Commitments

Beyond the technical scope, the announcement signals maturity in the relationship: both companies highlight more than 75 joint integrations and $2 billion in sales via Google Cloud Marketplace. Additionally, Palo Alto Networks states it will extend its commitment to deploying security platforms on Google Cloud infrastructure, including migrating key internal workloads as part of a “multi-billion dollar” deal. The company also notes it already utilizes Vertex AI and Gemini models to power its copilots.

From an industry perspective, this move aims to consolidate a pattern: if AI deployment becomes fundamental to critical processes, security cannot be an afterthought or reliant on patches. Given that nearly all organizations report AI-related attacks, the market will likely demand more structural guarantees: native controls, integrated telemetry, and response capabilities suitable for a “machine-speed” world.


FAQs

What is Prisma AIRS and how does it serve AI projects?

It’s a security platform designed to protect the AI solution lifecycle, including posture controls, runtime defense, agent-specific security, red teaming tests, and model risk analysis.

What does “code-to-cloud security” mean when deploying AI agents?

It involves integrating security from development (code, tools, pipelines) through to production operation (workloads, data, APIs, agent behavior), avoiding reliance solely on perimeter controls or post-deployment reviews.

Why do AI agents require different security measures than traditional applications?

Because an agent can act autonomously, chain decisions, invoke tools, and operate with credentials and data. A failure could lead to unintended actions on a large scale, not just a localized breach.

What should companies review before deploying AI workloads in the cloud?

At minimum: security posture (configurations and exposure), identity and permission controls, runtime protections, API security, action logging and traceability, and proactive testing (red teaming) before production.
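The review above can be sketched as an explicit checklist that a deployment pipeline evaluates before promoting an AI workload to production. The item names here are illustrative assumptions, not a vendor-defined checklist.

```python
# Hypothetical pre-deployment checklist for an AI workload.
# Item names are illustrative assumptions, not from any vendor's product.
CHECKLIST = [
    "posture_reviewed",     # configurations and exposure assessed
    "least_privilege_iam",  # identity and permission controls in place
    "runtime_protection",   # runtime defenses enabled
    "api_security",         # exposed APIs authenticated and rate-limited
    "action_logging",       # agent actions logged and traceable
    "red_team_passed",      # proactive testing completed
]

def ready_for_production(status: dict[str, bool]) -> list[str]:
    """Return the checklist items still missing; an empty list means go."""
    return [item for item in CHECKLIST if not status.get(item, False)]

status = {item: True for item in CHECKLIST}
status["red_team_passed"] = False
print(ready_for_production(status))  # ['red_team_passed']
```

Encoding the review this way turns “security from the start” into a gate the pipeline can enforce rather than a document someone reads after deployment.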

via: Palo Alto Networks
