A report from Orca Security warns that the accelerated adoption of artificial intelligence in multicloud environments is expanding the range of attack vectors
As artificial intelligence integrates into business processes, new threats emerge. According to Orca Security's 2025 State of Cloud Security Report, 84% of organizations are already using AI packages in cloud environments, and, more concerning still, 62% run at least one vulnerable AI package.
The study, conducted by the Orca Research Pod, analyzed billions of cloud assets across platforms such as AWS, Google Cloud, Azure, Oracle, and Alibaba. The conclusion? Innovation is becoming the Achilles’ heel of security.
“Adopting multicloud architectures offers great flexibility, but it also complicates maintaining uniform visibility. When you add to this the rapid deployment of insecure AI models, the result is an extremely complex environment for security teams,” stated Gil Geron, CEO and co-founder of Orca Security.
📊 Key Findings from the 2025 Report
| Indicator | Percentage or Notable Data |
|---|---|
| Organizations using AI in the cloud | 84% |
| Organizations with at least one vulnerable AI package | 62% |
| Organizations with plain-text secrets in their source code | 85% |
| Public cloud assets with lateral movement potential | 76% |
| Organizations with assets having over 100 attack paths | 36% |
| Use of privileged accounts in Kubernetes | 93% |
🔥 Risks Beyond Runtime
One of the most alarming statistics is that 85% of the organizations analyzed store sensitive secrets like passwords, API keys, or tokens directly in their source code repositories. If these repositories are exposed, access to critical systems is within reach of any attacker with basic knowledge.
Such practices represent a structural risk, as they do not depend on the configuration of the cloud environment, but rather on unsafe habits within the development cycle itself.
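As a minimal illustration of what auditing for exposed secrets might look like, the sketch below scans a source tree for a few common credential patterns. The regular expressions, file selection, and paths are illustrative assumptions, not Orca's tooling; real secret scanners use far larger rule sets.

```python
import re
from pathlib import Path

# Illustrative patterns only -- production scanners ship hundreds of rules.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
    "Hardcoded password": re.compile(r"(?i)password\s*[=:]\s*['\"][^'\"]{8,}['\"]"),
}

def scan_repository(root: str) -> list[tuple[str, int, str]]:
    """Walk a source tree and report lines that match a secret pattern."""
    findings = []
    for path in Path(root).rglob("*.py"):  # limited to Python files for brevity
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings

if __name__ == "__main__":
    for file, line, label in scan_repository("."):
        print(f"{file}:{line}: possible {label}")
```

Run in a CI pipeline, a check like this fails the build before a hardcoded credential ever reaches a shared repository.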
🌐 The Domino Effect of Lateral Movement
One of the strategies most commonly used by attackers is lateral movement, which lets them reach additional resources once inside a system. Orca warns that 76% of organizations have at least one public asset that facilitates this technique, turning any initial breach into a far broader compromise.
Moreover, the report reveals that more than one-third of organizations have cloud assets with over 100 possible attack paths, highlighting a lack of segmentation and privilege control.
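To make the idea of attack paths concrete, here is a toy sketch that models cloud assets and their trust relationships as a graph and counts the distinct routes from an internet-facing asset to a sensitive data store. The graph, asset names, and the use of the networkx library are assumptions for illustration only.

```python
import networkx as nx

# Hypothetical asset graph: nodes are cloud resources, edges are reachable trust
# relationships (network access, assumable roles, shared credentials, ...).
g = nx.DiGraph()
g.add_edges_from([
    ("public-web-vm", "app-service"),
    ("public-web-vm", "ci-runner"),
    ("app-service", "iam-role-app"),
    ("ci-runner", "iam-role-app"),
    ("iam-role-app", "customer-db"),
    ("ci-runner", "customer-db"),
])

# Every simple path from the internet-facing asset to the database is a
# potential lateral-movement route an attacker could chain together.
paths = list(nx.all_simple_paths(g, "public-web-vm", "customer-db"))
print(f"{len(paths)} attack paths found:")
for p in paths:
    print(" -> ".join(p))
```

Even this tiny graph yields three distinct paths; segmentation works by removing edges so that fewer such routes exist.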
🧠 Artificial Intelligence: Lever of Progress and Risk Vector
The adoption of AI has skyrocketed, and it has done so at a pace that outstrips security practices. In many cases, teams install open-source packages that still contain unpatched CVEs (Common Vulnerabilities and Exposures), some of which allow remote code execution (RCE).
This means that an apparently harmless language model could become a Trojan horse within a company’s infrastructure if not managed properly.
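As a minimal sketch of how a team might check its AI dependencies against public vulnerability data, the example below queries the OSV.dev API for known advisories. The package names and versions are placeholders; in practice the list would come from a lockfile or SBOM.

```python
import requests

def known_vulnerabilities(package: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Query the public OSV.dev database for advisories affecting one package version."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": package, "ecosystem": ecosystem}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return [vuln["id"] for vuln in resp.json().get("vulns", [])]

# Placeholder dependency list -- replace with the project's real pinned versions.
for name, version in [("transformers", "4.30.0"), ("torch", "2.0.0")]:
    ids = known_vulnerabilities(name, version)
    status = ", ".join(ids) if ids else "no known advisories"
    print(f"{name}=={version}: {status}")
```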
🔍 Beyond Code: Non-Human Accounts
An emerging phenomenon is the proliferation of non-human identities—services, bots, automated scripts—in cloud environments. These identities often have broad permissions and few restrictions, making them ideal targets for attackers.
This uncontrolled growth of digital identities complicates visibility and control and poses new ethical and operational challenges for security teams.
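As one illustration of auditing non-human identities, the sketch below assumes an AWS environment and the boto3 SDK, and flags roles that carry the broad AdministratorAccess managed policy. The heuristic is only an example; a real review would also inspect inline policies and trust relationships.

```python
import boto3

iam = boto3.client("iam")
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

def over_privileged_roles() -> list[str]:
    """List IAM roles (often non-human identities) that carry full admin rights."""
    flagged = []
    paginator = iam.get_paginator("list_roles")
    for page in paginator.paginate():
        for role in page["Roles"]:
            attached = iam.list_attached_role_policies(RoleName=role["RoleName"])
            arns = {p["PolicyArn"] for p in attached["AttachedPolicies"]}
            if ADMIN_POLICY_ARN in arns:
                flagged.append(role["RoleName"])
    return flagged

if __name__ == "__main__":
    for name in over_privileged_roles():
        print(f"Over-privileged role: {name}")
```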
🛡️ Expert Opinion
“The productivity of cloud development has grown immensely, but so have the risks. From exposed secrets to insecure AI packages, the attack surface continues to expand. Orca’s report is a wake-up call to reinforce security from code to the production environment,” notes Melinda Marks, Director of Cybersecurity at Enterprise Strategy Group.
🔧 Key Recommendations for Security Teams
| Recommended Measure | Expected Impact |
|---|---|
| Continuous scanning for CVEs in AI packages | Reduces the risk of remote code execution or silent compromise |
| Auditing secrets in repositories | Prevents exposure of critical credentials |
| Control of non-human identities | Improves traceability and reduces attack paths |
| Segmentation and Zero Trust policies | Limits lateral movement in case of breach |
| Unified visibility across multicloud environments | Facilitates detection and prioritization of risks |
| Secure AI training for developers | Promotes responsible coding practices from the start |
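Tying back to the finding that 93% of organizations use privileged accounts in Kubernetes, the sketch below lists pods that run privileged containers, a common first step before tightening segmentation. It assumes the official kubernetes Python client and a configured kubeconfig; it is an example inventory script, not a complete Zero Trust control.

```python
from kubernetes import client, config

def privileged_pods() -> list[tuple[str, str, str]]:
    """Return (namespace, pod, container) triples where a container runs privileged."""
    config.load_kube_config()  # or config.load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()
    findings = []
    for pod in v1.list_pod_for_all_namespaces().items:
        for container in pod.spec.containers:
            sc = container.security_context
            if sc is not None and sc.privileged:
                findings.append((pod.metadata.namespace, pod.metadata.name, container.name))
    return findings

if __name__ == "__main__":
    for namespace, pod, container in privileged_pods():
        print(f"{namespace}/{pod}: container '{container}' runs privileged")
```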
🧠 For Curious Minds: Why are AI and Cloud an Explosive Combination?
The cloud allows for rapid scaling, and AI needs large amounts of data and computational resources. But this combination also means that every poorly configured model, script, or container can become a new entry point.
The management of foundational models, MLOps tools, and AI libraries is still maturing in terms of cybersecurity. In this context, vigilance and automation are key.
🧩 Conclusion
Innovation does not stop, but security cannot afford to either. Orca Security’s report is a snapshot of the challenge faced by thousands of organizations: to protect their growth without compromising their integrity.
In the words of Gil Geron:
“The time is now. If we want the cloud and artificial intelligence to be engines of transformation, they must also be trustworthy environments.”
The cloud is no longer just a place to host data; it’s the battlefield of 21st-century cybersecurity.
via: Orca Security