Palo Alto Networks and Google Cloud have announced an expansion of their strategic partnership with a clear goal: enabling companies to accelerate their cloud and artificial intelligence projects, including agentic AI capable of acting more autonomously, without turning each deployment into a new source of risk. The move comes at a time when security is shifting from being "a final step" to a prerequisite: build controls in first, then scale.
The announcement combines two elements that, by 2026, are already inseparable in many organizations: AI infrastructure (where models are trained and run) and the security layer that surrounds it (from code and supply chain to production execution). On paper, the partnership rests on integrating Prisma AIRS, Palo Alto Networks' AI security platform, with Google Cloud's AI-focused services such as Vertex AI and Agent Engine, covering everything from development to real-world operation.
What this means for clients: “end-to-end” security and reduced operational friction
The central promise is to narrow the typical gap between "what the product team builds" and "what the security team can oversee." Specifically, the companies are proposing a code-to-cloud approach: protecting AI data and workloads in Google Cloud while also securing key development tools such as the Agent Development Kit (ADK).
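To ground what "securing the ADK" touches, here is a minimal agent written in the style of the ADK's published quickstart; the tool, its stubbed contents, and the model string are illustrative stand-ins, not part of the announcement:

```python
# Minimal agent sketch following the Google Agent Development Kit (ADK)
# quickstart pattern. The tool and model id below are illustrative.
from google.adk.agents import Agent

def check_order_status(order_id: str) -> dict:
    """Illustrative tool: look up an order's status (stubbed here)."""
    return {"order_id": order_id, "status": "shipped"}

root_agent = Agent(
    name="support_agent",
    model="gemini-2.0-flash",  # any Gemini model served via Vertex AI
    description="Answers order-status questions.",
    instruction="Use check_order_status before answering; never guess.",
    tools=[check_order_status],
)
```

Every function in that `tools` list is an action the agent can take on its own, which is precisely the surface the security integrations described here aim to watch.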
Practically, this package translates into capabilities targeting the most common failure points in AI projects:
- AI posture management for visibility and configuration control.
- Runtime security to detect and stop misuse in production (see the sketch after this list).
- Agent security (because an agent is not just “another app”; it also makes decisions and executes actions).
- Red teaming for proactive testing.
- Model security for vulnerability scanning and assessment.
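Prisma AIRS's actual detection logic is proprietary, so the runtime-security item above is best illustrated generically: intercept every prompt and response, and block whatever trips a policy. A minimal, hypothetical sketch, with patterns and names invented for illustration:

```python
import re

# Hypothetical runtime gate: screens prompts before they reach the model
# and responses before they reach the user. Real platforms use far richer
# detection than static patterns; this only shows where the check sits.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),  # naive injection tell
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped data leaving the system
]

def passes_policy(text: str) -> bool:
    """Return True if the text clears every pattern, False to block it."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)

def guarded_call(model_fn, prompt: str) -> str:
    """Wrap a model call so both directions of traffic are inspected."""
    if not passes_policy(prompt):
        return "[blocked: prompt violated runtime policy]"
    response = model_fn(prompt)
    if not passes_policy(response):
        return "[blocked: response violated runtime policy]"
    return response
```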
The message aligns with a figure Palo Alto Networks highlights: according to its State of Cloud Report (published in December 2025), 99% of respondents experienced at least one attack against their AI infrastructure in the past year. The race to deploy AI, in other words, has significantly expanded the attack surface.
Beyond AI: firewalls, SASE, and Google’s “fabric”
The partnership extends beyond model protection. It also includes a classic — yet crucial — aspect of “how traffic is connected and inspected” in hybrid and multicloud environments:
- VM-Series (software firewall) with deeper integrations to maintain consistent security policies as organizations migrate or expand within Google Cloud.
- A strengthened SASE approach, with Prisma SASE and Prisma Access running over Google's network, aiming to improve user experience and keep security controls in place when access is distributed across offices, remote workers, and cloud apps.
Ultimately, it's a bet on platformization: fewer siloed components, more pre-validated integration, and a unified console, cutting the operational overhead that balloons when each layer is bought and integrated separately.
Economic highlights: a multi-year deal approaching $10 billion
The announcement’s market significance also lies in its economic scale. Several outlets, citing Reuters, have reported that the multi-year agreement could total around $10 billion over its duration, though no official public figures have been disclosed by the companies in the statement.
Additionally, the partnership has a strategic component: Palo Alto Networks will migrate key internal workloads to Google Cloud as part of the deal, and the companies also mention using Vertex AI and Gemini models to power copilots. This "buying what you also run on" posture is often read as a sign of deep alignment: not just joint selling, but operating on the same technological foundation.
Why now: agentic AI demands verifiable trust
Agentic AI changes the nature of the risks involved. If a model only generates text, the damage tends to come from the response itself; if an agent can act (creating tickets, changing configurations, making purchases, moving data), the problem resembles handing credentials to a "new user," except that user is software.
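One way to make that "software user" concrete is least-privilege scoping: give the agent an explicit allowlist of actions and require human approval for anything with side effects. A hypothetical sketch, where every name is invented and nothing reflects a specific vendor's API:

```python
# Hypothetical least-privilege chokepoint for agent actions, treating the
# agent like the "new user" described above. All names are invented.
from typing import Callable

ALLOWED_ACTIONS: dict[str, Callable[..., str]] = {}
NEEDS_APPROVAL: set[str] = set()

def register(name: str, needs_approval: bool = False):
    """Expose a function as an action the agent may invoke."""
    def wrap(fn):
        ALLOWED_ACTIONS[name] = fn
        if needs_approval:
            NEEDS_APPROVAL.add(name)
        return fn
    return wrap

@register("create_ticket")
def create_ticket(summary: str) -> str:
    return f"ticket created: {summary}"

@register("make_purchase", needs_approval=True)
def make_purchase(item: str) -> str:
    return f"purchased: {item}"

def execute(action: str, approved: bool = False, **kwargs) -> str:
    """Single gate through which every agent action must pass."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"agent may not call '{action}'")
    if action in NEEDS_APPROVAL and not approved:
        raise PermissionError(f"'{action}' requires human approval")
    return ALLOWED_ACTIONS[action](**kwargs)
```

In this sketch, `execute("create_ticket", summary="VPN down")` succeeds on its own, while `execute("make_purchase", item="GPU")` raises until a human passes `approved=True`: the agent keeps its autonomy for low-risk actions and loses it exactly where the damage would be real.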
That’s why both companies emphasize “reducing friction” between development and security, and building native controls throughout the AI lifecycle. The implicit goal: move beyond boardroom questions like “Can we deploy AI without compromising security?” towards “Which use cases do we prioritize, knowing we can operate safely?”
Via: Palo Alto Networks