Gigamon warns of a sharp rise in AI-driven threats, putting cloud environments at risk and multiplying security breaches
The proliferation of AI-driven cyberattacks is forcing companies to rethink their infrastructure strategies, particularly in hybrid cloud environments. So finds the latest global study by Gigamon, which reports a 17% year-over-year increase in security breaches and an unprecedented volume of cyber threats.
According to the report, 55% of organizations experienced a breach in the past year, with ransomware and attacks on large language models (LLMs) as the leading causes. 58% of security professionals have detected a rise in AI-driven ransomware, and 47% have faced direct attacks on their generative AI deployments.
“We are seeing how AI is transforming not only business but also the attackers. Threats are faster, more adaptive, and harder to detect with traditional tools,” says Chaim Mazal, Chief Security Officer at Gigamon.
Public cloud under pressure: data repatriation accelerates
In a notable strategic shift, 70% of IT and security leaders now view the public cloud as their greatest security risk, above any other environment. That perception has led the same share of organizations to consider repatriating data to private or hybrid environments.
Additionally, 54% of respondents are reluctant to deploy AI workloads in the public cloud for fear of exposing intellectual property. A lack of visibility, and of quality data for secure AI deployment, is cited as one of the main obstacles.
Limited visibility and outdated tools
Nearly two in three organizations (64%) say achieving full real-time visibility into all data in motion is a priority this year. Meanwhile, 55% lack confidence that their existing security tools can effectively detect breaches, citing poor visibility as the primary impediment.
The problem is compounded by network traffic volumes that have doubled under intensive AI workloads, according to 33% of the leaders surveyed.
Deep observability: the new standard for cloud security
The concept of “deep observability” is taking hold as a key strategy for tackling this new reality. The approach combines traditional telemetry (logs and metrics) with contextual information extracted directly from the network. The goal: detect threats inside encrypted traffic, monitor lateral movement between workloads, and anticipate complex attacks before they escalate.
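To make the idea concrete, here is a minimal, hypothetical sketch of how network-derived context (flow metadata) can be checked against host-level knowledge to flag suspicious east-west connections. This is not Gigamon's implementation; every data structure, field name, and address below is an illustrative assumption.

```python
# Hypothetical illustration of "deep observability": enriching network flow
# records with contextual knowledge about workloads to flag lateral movement.
# All structures, names, and values are illustrative assumptions, not a
# depiction of any vendor's product.

from dataclasses import dataclass

@dataclass
class FlowRecord:
    src: str        # source workload IP
    dst: str        # destination workload IP
    dst_port: int   # destination port
    tls: bool       # True if the payload is encrypted

# Assumed contextual data: the ports each workload is expected to serve,
# e.g. from an asset inventory or service-mesh configuration.
EXPECTED_SERVICES = {
    "10.0.1.5": {443},    # web tier
    "10.0.2.9": {5432},   # database tier
}

def is_suspicious(flow: FlowRecord) -> bool:
    """Flag east-west flows targeting ports a workload is not known to serve.

    The check uses only flow metadata (who talks to whom, on which port),
    so it still works when the payload itself is TLS-encrypted.
    """
    expected = EXPECTED_SERVICES.get(flow.dst, set())
    return flow.dst_port not in expected

flows = [
    FlowRecord("10.0.1.5", "10.0.2.9", 5432, tls=True),  # normal app -> db
    FlowRecord("10.0.1.5", "10.0.2.9", 22,   tls=True),  # unexpected SSH hop
]

for f in flows:
    if is_suspicious(f):
        enc = "encrypted" if f.tls else "cleartext"
        print(f"ALERT: unexpected east-west flow {f.src} -> {f.dst}:{f.dst_port} ({enc})")
```

In a real deployment the network context would come from packet or flow brokers and be correlated with logs and metrics; the point of the sketch is simply that metadata-level visibility survives encryption, which is what lets this class of tooling spot lateral movement inside TLS traffic.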
“89% of leaders already consider it an essential pillar, and 83% say this strategy is actively discussed in their boardrooms,” highlights Mazal.
This trend is also backed by industry experts. Mark Walmsley, CISO of Freshfields, summarizes it this way:
“The key to staying ahead is visibility. Only if we can clearly see what is happening in AI systems and data flows can we effectively manage risk. Deep observability gives us the tools to detect vulnerabilities before it’s too late.”
A global threat: study statistics
The Gigamon study surveyed more than 1,000 security and IT leaders across key markets including the United States, the United Kingdom, Germany, France, Australia, and Singapore. Among its most significant findings:
- 91% admit to making security trade-offs to manage their hybrid infrastructure.
- 46% lack clean and reliable data for secure AI implementation.
- 47% do not have full visibility of east-west traffic in their environments.
- 88% deem observability critical for protecting AI implementations.
Conclusion
The AI era demands a complete redesign of cloud security models. What was once considered sufficient (siloed tools, static rules, and surface-level monitoring) has become obsolete in the face of dynamic, context-aware cyberattacks. To survive and thrive in this new environment, organizations must invest in secure hybrid infrastructures, full visibility, and adaptive protection models.
The hybrid cloud is no longer just a technological option: it’s a matter of business resilience.
via: helpnetsecurity