The expansion of artificial intelligence across the business world is creating a new type of risk: AI agents acting as digital insider threats. To address this challenge, Exabeam announced at the Google Cloud Security Innovation Forum that it is integrating Google Agentspace and telemetry from Google Cloud Model Armor into its New-Scale Security Operations Platform.
This advancement enables security teams to monitor, detect, and respond to threats originating from autonomous agents, providing visibility into their behavior, intent, and potential deviations that legacy tools cannot deliver.
A new class of insider threat: AI agents
An Exabeam study titled “From Human to Hybrid: How AI and the Analytics Gap are Fueling Insider Risk” reveals that 93% of organizations have already experienced or expect an increase in AI-driven insider threats. Additionally, 64% consider insiders more concerning than external attackers.
As AI agents gain access to sensitive data and make independent decisions, they introduce the same risk categories as traditional insiders: malicious, negligent, or compromised. In the case of AI, these translate into agents that are misaligned, flawed, or outright subverted.
How the integration works
The key to this innovation lies in the ability of Exabeam Nova, the platform’s intelligence layer, to analyze the behavior patterns and intent of AI agents in real time.
- Google Agentspace provides telemetry on the activity of autonomous agents.
- Google Cloud Model Armor adds security insights into how models handle data and decisions.
- The Exabeam New-Scale Security Operations Platform consolidates this telemetry and generates explainable, prioritized alerts.
The result is that security analysts can move beyond isolated suspicious events and determine whether an agent’s action is legitimate or a sign of abuse, as the sketch below illustrates.
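To make that pipeline concrete, here is a minimal sketch of the behavioral-baselining idea, in Python. Everything in it is hypothetical: the `AgentEvent` type, its fields, and the scoring logic are illustrations, not the actual Agentspace or Model Armor telemetry schema, and not Exabeam’s proprietary analytics.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical normalized event. Field names are illustrative only;
# they are not the real Agentspace or Model Armor telemetry schema.
@dataclass
class AgentEvent:
    agent_id: str
    source: str       # e.g. "agentspace" or "model_armor"
    action: str       # e.g. "read_document", "call_tool"
    resource: str     # the data or tool the agent touched
    timestamp: datetime


def deviation_score(event: AgentEvent, baseline: dict) -> float:
    """Score how far an event strays from an agent's observed baseline.

    `baseline` maps agent_id -> set of "action:resource" strings seen
    during normal operation. A real UEBA engine would use statistical
    models; this toy version flags anything the agent has never done.
    """
    seen = baseline.get(event.agent_id, set())
    return 0.0 if f"{event.action}:{event.resource}" in seen else 1.0


# Usage: an invoice-processing agent suddenly reads HR salary data.
baseline = {"invoice-agent-01": {"read_document:finance/invoices"}}
event = AgentEvent(
    agent_id="invoice-agent-01",
    source="agentspace",
    action="read_document",
    resource="hr/salaries",
    timestamp=datetime.now(timezone.utc),
)

if deviation_score(event, baseline) > 0.5:
    print(f"[alert] {event.agent_id}: unusual action "
          f"{event.action} on {event.resource} (via {event.source})")
```

A production UEBA engine would build statistical baselines per agent and weight deviations by the sensitivity of the resource involved; the point of the sketch is that agent telemetry from multiple sources must first be normalized into a common event model before deviations can be scored and explained.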
Statements from key players
- Steve Wilson, Chief AI and Product Officer at Exabeam: “This isn’t about adding another tool, but about deepening visibility into human and AI agent behaviors from a trusted platform.”
- Chris O’Malley, CEO of Exabeam: “We are at a pivotal moment for cybersecurity. Extending our behavioral analytics to AI agents puts us back at the forefront of insider threat detection.”
- Vineet Bhan, Director of Security and Identity Partnerships at Google Cloud: “Integration with Exabeam offers clients advanced tools to protect data and maintain control while adopting AI with confidence.”
The challenge for SOCs in the hybrid era
Security Operations Centers (SOCs) face a dual challenge: defending against threats from human users and now also from autonomous digital agents. With this update, Exabeam aims to set the benchmark for behavioral analytics applied to AI, so that companies can adopt these technologies without losing control or trust.
This integration complements Exabeam’s focus on automation and intelligent detection, reinforcing its position as a key provider in SIEM and XDR with capabilities tailored to the rise of AI.
Conclusion
The partnership between Exabeam and Google Cloud reflects a profound shift in corporate cybersecurity: it is no longer enough to monitor human users; AI agents must be monitored as well. With this move, the company aims to give SOCs the clarity and context needed to protect operations in a world where humans and intelligent agents work side by side.
Frequently Asked Questions (FAQ)
What are digital insider threats from AI agents?
Risks arising from autonomous agents accessing data or making decisions outside traditional controls, potentially being misconfigured, misaligned, or compromised.
Why is telemetry from Google Agentspace and Model Armor important?
Because it provides direct insight into how agents act and how AI models handle decisions and data, which is key to detecting deviations.
What does Exabeam Nova contribute in this context?
Exabeam Nova analyzes the intent and behavior of agents in real time, offering explainable, prioritized insights that reduce false positives and accelerate response.
Does this replace existing SIEM and XDR solutions?
No. It integrates with them, expanding their capabilities by including visibility into AI agents’ behavior alongside that of human users.
via: exabeam