Three out of four AI agent projects pose a serious security risk

A recent report from Palo Alto Networks, a leading cybersecurity company, warns that 75% of agentic artificial intelligence projects currently in development in Europe pose a significant security risk to organizations. The issue isn’t with the technology itself, but rather with the lack of proper governance, control, and oversight.

The study, based on more than 3,000 interviews with C-suite executives from European companies, shows that many AI initiatives are being implemented without a clear strategy, without defined objectives, and without established security criteria. This reflects a low level of involvement from board members. According to the company, unless urgent measures are taken by business leadership, current innovation could lead to a cybersecurity crisis in the future.

To prevent this and ensure that agentic AI creates real and sustainable value, Palo Alto Networks proposes three fundamental pillars to guide its responsible and secure adoption.

Pillar 1: Define and Limit Outcomes

The analysis identifies the main cause of failure as “outcome deviation”: AI projects launched without measurable business objectives or a clear risk focus. According to Gartner, over 40% of agentic AI projects will be canceled by the end of 2027, while MIT estimates that 95% of enterprise generative AI programs are already failing. The picture is even more concerning from a cybersecurity perspective: according to Stanford, only 6% of organizations apply an advanced security framework for AI.

“The key difference now is that agentic AI doesn’t just provide answers; it takes actions aimed at specific results,” explains Haider Pasha, EMEA CISO at Palo Alto Networks. “Organizations that can identify suitable use cases and align them with business goals will be the ones truly able to extract value from this technology.”

The company recommends designing AI agent projects around defined, board-approved objectives, aligned with corporate security and identity standards.

Pillar 2: Security Guardrails Built In from the Start

The analysis emphasizes that trust in AI must be balanced with control, applying principles of Zero Trust and an identity-focused security approach. Common failures stem from agents with excessive privileges, weak identity controls, or undefined access limits.

Palo Alto Networks highlights that agent autonomy should be earned, not granted at the outset. Systems should be treated as digital employees, subject to increasing trust levels and continuous auditing. Therefore, the company recommends:

  • Applying the Zero Trust model throughout the architecture.
  • Separating privileges and controlling sub-agents.
  • Logging all actions and always keeping a human in the loop.
  • Adopting a comprehensive AI Security Framework, avoiding isolated solutions.

Currently, the human-to-agent ratio is around 80:1, a balance expected to shift rapidly toward agents in the coming years, pushing organizations to establish robust, scalable control frameworks.
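To make the controls listed above more concrete, the sketch below shows, in Python, what least-privilege tool access, continuous audit logging, and a human-in-the-loop approval gate might look like around a single agent action. It is an illustrative sketch only: the agent names, privilege sets, and the execute_action helper are hypothetical assumptions for this example and are not part of any Palo Alto Networks product or framework.

```python
# Illustrative sketch only: hypothetical names, not a vendor API.
# Shows least-privilege tool access, audit logging, and a human
# approval gate before sensitive agent actions.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# Hypothetical allow-list: each agent identity gets only the tools it needs.
AGENT_PRIVILEGES = {
    "invoice-triage-agent": {"read_invoice", "flag_invoice"},
    "hr-summary-agent": {"read_policy_doc"},
}

# Hypothetical set of actions that always require a human in the loop.
HUMAN_APPROVAL_REQUIRED = {"flag_invoice"}


def execute_action(agent_id: str, action: str, payload: dict) -> str:
    """Run an agent-requested action under Zero Trust style checks."""
    allowed = AGENT_PRIVILEGES.get(agent_id, set())

    # 1. Least privilege: deny anything outside the agent's allow-list.
    if action not in allowed:
        audit_log.warning("%s DENIED %s %s", agent_id, action, payload)
        return "denied: action not in agent privilege set"

    # 2. Human in the loop for sensitive actions.
    if action in HUMAN_APPROVAL_REQUIRED:
        approved = input(f"Approve {action} for {agent_id}? [y/N] ").lower() == "y"
        if not approved:
            audit_log.info("%s REJECTED_BY_HUMAN %s", agent_id, action)
            return "rejected by human reviewer"

    # 3. Log every executed action with a timestamp for later auditing.
    audit_log.info("%s EXECUTED %s at %s", agent_id, action,
                   datetime.now(timezone.utc).isoformat())
    return f"{action} executed"


if __name__ == "__main__":
    print(execute_action("invoice-triage-agent", "flag_invoice", {"id": 123}))
    print(execute_action("hr-summary-agent", "flag_invoice", {"id": 123}))
```

In a real deployment the privilege map and approval policy would live in a central policy engine rather than in code, but the pattern is the same: the agent earns autonomy gradually, and every action it takes is checked, logged, and reviewable.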

Pillar 3: Cross-Functional Governance and Shared Responsibility

Finally, Palo Alto Networks stresses that AI agents should be approached as a cross-functional business initiative, not an isolated IT project. To ensure success, the company recommends establishing an “AI Governance Council” with representation from the security, risk, legal, operations, and business functions, and regular reviews at board level. It also urges boards to:

  • Oversee the integration and outcomes of AI projects more closely.
  • Actively educate themselves on AI implications.
  • Build long-term relationships with trusted technology partners.
  • Empower responsible teams to adopt AI in a controlled and secure manner.

This shared responsibility model enhances resilience, guarantees regulatory compliance, and ensures projects are aligned with strategic objectives.
