Cybersecurity enters 2026 with a contradiction that is increasingly hard to hide: security teams place more trust than ever in automation and artificial intelligence, yet at the same time admit that organizational readiness is not keeping pace with threats.
This is the main snapshot provided by the 2026 State of Cybersecurity Report: Bridging the Divide from Ivanti, based on responses from more than 1,200 cybersecurity professionals worldwide. The report describes a gap that the company calls the “Cybersecurity Readiness Deficit”: a preparedness shortfall widening year after year, driven by the rise in attacks, the complexity of the SaaS environment, regulatory pressures, and above all, the rapid increase in AI use by attackers.
Agentic AI: “We want it to act alone”… with nuances
The headline generating the most conversation is clear: 87% of security teams say that adopting agentic AI — systems capable of making decisions and executing actions autonomously in real time — is a top priority. Even more striking: 77% report having "some level of comfort" with allowing these systems to act without prior human review.
The key nuance is that this confidence isn’t absolute. The report emphasizes that the market is entering a phase of “cautious acceptance”: many organizations want agents that investigate, correlate signals, and recommend actions, but not all are ready to delegate critical responses without controls. In fact, the document highlights that actual AI adoption in key functions remains uneven:
- 53% use it to enforce cloud security policies.
- 44% for incident response workflows.
- 43% for threat intelligence correlation.
- 42% for vulnerability response and remediation.
Meanwhile, nearly all respondents (92%) agree that automation reduces response times. The problem isn’t faith in automation itself — it’s the gap between the desire for “autopilot” and the operational maturity needed to sustain it.
Deepfakes: “The threat is here now,” but preparedness remains low
If there’s a field where the gap between threat and preparedness feels most tangible, the report points to deepfakes and synthetic content.
The data is compelling: 77% of organizations say they have been targeted by deepfake-based attacks. Over half (51%) report personalized phishing campaigns enhanced with synthetic content. However, when asked about preparedness, only 27% consider themselves “very prepared” against this type of threat, leaving a gap of over twenty points between perceived risk and actual capacity.
The report also brings the issue into the uncomfortable realm of executive understanding: only 30% of professionals believe their CEOs could “definitely” identify a deepfake. This means the attack vector is not just technological but also cultural and educational.
Ransomware, credentials, and APIs: the growing preparedness gap
Ivanti further analyzes the difference between what’s considered a “high/critical threat” and the percentage that feels “very prepared” to defend against it. In ransomware, for example, the gap is particularly sharp: 63% see it as a high or critical threat, but only 30% consider themselves very prepared.
This pattern repeats across areas that have ranked high among concerns for years: compromised credentials, software vulnerabilities, supply chain risk, and API-related vulnerabilities. These aren't new threats; what's new is the speed: the report notes that attackers are shortening the time between patch release and exploitation, a race accelerated by AI.
The IT-Security gap: when risk isn’t prioritized equally
Another key finding helps explain why closing the preparedness gap remains so challenging: it isn't just a matter of technology; it's a matter of internal coordination.
Nearly half (48%) of security professionals think that IT teams do not respond urgently to cybersecurity concerns. And 40% believe that IT doesn’t understand their organization’s risk tolerance. This directly impacts disciplines like exposure management, where security depends on IT making changes, prioritizing patches, adjusting configurations, and accepting maintenance windows.
The result? Known risks accumulate, patches are delayed, measurements are poor, and executive committees receive indicators that reflect activity rather than risk.
Measuring isn’t managing: fragmented metrics with little business context
The report emphasizes a point long echoed by many CISOs: there are too many KPIs that “look good” but tell little about actual security posture.
Only 60% use business impact analysis to prioritize risks. And although 51% use an "exposure score" or risk-based index, it's still common to rely on process metrics such as:
- Mean time to remediate (47%)
- Percentage of exposures remediated (41%)
According to Ivanti, these metrics can improve even while the actual risk worsens. In other words, organizations may be “going fast”… in the wrong direction.
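To make that concrete, here is a minimal, purely hypothetical sketch in Python (the figures are invented, not taken from the report): the mean time to remediate drops sharply between two quarters because many low-severity findings are closed quickly, while the number of open critical, internet-facing exposures actually grows.

```python
# Hypothetical illustration: MTTR can improve while the risk that matters most gets worse.
from statistics import mean

# (severity, days_to_remediate) for findings closed in each quarter
q1_closed = [("critical", 12), ("high", 9), ("medium", 6), ("low", 4)]
q2_closed = [("low", 1)] * 30 + [("medium", 3)] * 10   # lots of quick, low-risk fixes

q1_open_critical = 2   # critical, internet-facing exposures still unresolved
q2_open_critical = 7   # the dangerous backlog has grown

def mttr(closed):
    return mean(days for _, days in closed)

print(f"Q1 MTTR: {mttr(q1_closed):.1f} days | open critical exposures: {q1_open_critical}")
print(f"Q2 MTTR: {mttr(q2_closed):.1f} days | open critical exposures: {q2_open_critical}")
# Q2 looks faster on paper (lower MTTR) even though exposure to serious attacks increased.
```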
The human factor: stress, fatigue, and talent shortages as systemic vulnerabilities
The report also depicts a quiet crisis: burnout among teams.
43% of professionals report high stress levels, and 79% say it affects their physical or mental health. Specific effects include difficulty concentrating, sleep problems, and increased anxiety. In this context, the talent shortage, or the mismatch between available skills and actual needs, becomes a direct barrier to effective automation and agent deployment. Without skilled people to oversee, evaluate, and control these systems, autonomy becomes an added risk.
So, what changes with agentic AI?
The takeaway from the report is that agentic AI is becoming "a priority" before it becomes a "mature capability". And that's where the danger lies: deploying agents as a shortcut to compensate for a lack of time and staffing, without quality data, guardrails, and meaningful metrics, only adds complexity, not security.
That’s why the document stresses concepts that, while less glamorous than “autonomy,” are essential for sustainable scaling: governance, traceability, risk control, and real training against synthetic deception.
Frequently Asked Questions
What is agentic AI in cybersecurity, and how does it differ from traditional automation?
While traditional automation executes predefined rules, agentic AI can decide and act more autonomously, adjusting its behavior based on the context (for example, investigating signals, correlating events, and taking actions).
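As a rough sketch only (the alert fields, thresholds, and action names below are invented, and a real agent would typically rely on an LLM-driven planning loop rather than a hard-coded score), the difference in shape can be illustrated like this: the rule-based playbook always reacts the same way to the same trigger, while the agentic path weighs correlated context before choosing an action.

```python
# Hypothetical contrast between fixed-rule automation and context-adaptive decisions.

def rule_based_playbook(alert: dict) -> str:
    # Traditional automation: the same trigger always produces the same action.
    if alert["type"] == "malware" and alert["severity"] >= 8:
        return "isolate_host"
    return "open_ticket"

def agentic_response(alert: dict, context: dict) -> str:
    # Agentic behavior (schematic): the decision adapts to correlated context.
    score = alert["severity"]
    if context.get("asset_is_critical"):
        score += 3
    if context.get("related_alerts", 0) > 2:          # correlated signals
        score += 2
    if context.get("known_false_positive_pattern"):
        score -= 5
    if score >= 10:
        return "isolate_host_and_notify_analyst"
    if score >= 6:
        return "collect_forensics_and_recommend_action"
    return "log_and_monitor"

alert = {"type": "malware", "severity": 7}
print(rule_based_playbook(alert))   # -> open_ticket (rule threshold not met)
print(agentic_response(alert, {"asset_is_critical": True, "related_alerts": 3}))
# -> isolate_host_and_notify_analyst (context raises the priority)
```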
Why have deepfakes become a “security” problem beyond reputation concerns?
Because they enable operational attacks: impersonation frauds, social engineering targeting executives, vishing, and highly personalized phishing. The impact is no longer just reputation—it’s money, access, and credentials.
What metrics are most helpful for practically prioritizing risks?
Metrics that connect exposure to business impact: risk-based scoring, impact analysis, asset criticality, exploitation probability, and operational dependence. Relying solely on remediation times or remediation volume can mask real risk.
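As an illustration only (the weights and field names below are assumptions, not something defined by Ivanti or by any standard), a risk-based score of this kind essentially combines technical severity with exploitation likelihood and business context, so the same CVSS value can end up with very different priorities.

```python
# Hypothetical risk-based exposure score; weights and fields are illustrative.
def exposure_score(cvss: float, exploit_probability: float,
                   asset_criticality: int, internet_facing: bool) -> float:
    """Blend technical severity with business context into a rough 0-100 score."""
    base = cvss / 10.0                  # technical severity, normalized
    likelihood = exploit_probability    # e.g. an EPSS-style probability, 0-1
    impact = asset_criticality / 5.0    # business criticality rated 1-5
    surface = 1.2 if internet_facing else 0.8
    return round(100 * base * likelihood * impact * surface, 1)

# Same CVSS score, very different priority once business context is applied:
print(exposure_score(cvss=7.5, exploit_probability=0.70, asset_criticality=5, internet_facing=True))   # ~63.0
print(exposure_score(cvss=7.5, exploit_probability=0.02, asset_criticality=2, internet_facing=False))  # ~0.5
```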
How can organizations start integrating AI agents without over-automating from day one?
A common approach is “human-in-the-loop”: agents investigate and propose actions, but a responsible person approves critical decisions; additionally, traceability (what the system saw, decided, and why) and scope limitations by domain or incident type are enforced.
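A minimal sketch of that pattern, with hypothetical action names and thresholds, might look like this: routine enrichment stays inside the agent's agreed scope, critical actions require a human approver, and every proposal is written to an audit trail.

```python
# Hypothetical human-in-the-loop gate; names, scopes, and thresholds are invented.
import datetime

AUTONOMOUS_SCOPE = {"enrich_alert", "collect_forensics"}          # agent may act alone
CRITICAL_ACTIONS = {"isolate_host", "disable_account", "block_ip_range"}

audit_log = []  # traceability: what the agent saw, proposed, and what was decided

def request_action(action: str, evidence: dict, approver=None) -> bool:
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "evidence": evidence,
    }
    if action in AUTONOMOUS_SCOPE:
        entry["decision"] = "auto-approved (within agreed scope)"
        audit_log.append(entry)
        return True
    if action in CRITICAL_ACTIONS:
        approved = bool(approver and approver(action, evidence))  # a human decides
        entry["decision"] = f"human {'approved' if approved else 'rejected'}"
        audit_log.append(entry)
        return approved
    entry["decision"] = "rejected (out of scope)"
    audit_log.append(entry)
    return False

# Example: the agent proposes host isolation and a named analyst signs off.
analyst = lambda action, evidence: evidence.get("confidence", 0) > 0.9
request_action("isolate_host", {"host": "srv-042", "confidence": 0.95}, approver=analyst)
```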
via: ivanti

