Artificial intelligence (AI) is no longer just a tool for boosting business productivity; it has also become a cutting-edge weapon for cybercriminals. So warns the CrowdStrike Threat Hunting Report 2025, presented at Black Hat USA 2025 in Las Vegas, which describes a profound shift in the cybersecurity landscape: attackers are leveraging generative AI to scale their operations and have set their sights on an emerging and critical target, autonomous AI agents.
Generative AI as a driver of faster, more sophisticated attacks
The report, based on tracking more than 265 threat groups worldwide, shows how generative AI (GenAI) is transforming cybercrime. Groups such as North Korea’s FAMOUS CHOLLIMA have automated entire attack cycles, from generating fake résumés and conducting deepfake job interviews to performing technical work under fabricated identities. The result is operations that are more persistent and scalable than traditional insider threats.
Other nation-states are following suit. EMBER BEAR, linked to Russia, has enhanced digital propaganda with AI, while Iran’s CHARMING KITTEN has developed more convincing phishing campaigns using advanced language models.
AI agents in the crosshairs
CrowdStrike highlights that tools and platforms for creating autonomous agents—capable of making decisions and executing tasks without constant supervision—are being compromised to steal credentials, install malware, or even deploy ransomware. Because these agents are deeply integrated into critical business processes, they have become high-value targets.
Adam Meyers, who heads counter adversary operations at CrowdStrike, sums it up:
“Each AI agent is like a superhuman identity within the company: autonomous, fast, and deeply connected. Attackers treat them the same way as any key infrastructure, like SaaS platforms or cloud environments.”
AI-created malware is already here
What once seemed like a laboratory experiment is now a reality on the ground. Criminals and hacktivists with limited technical knowledge are using AI to generate scripts, troubleshoot, and produce functional malicious code. Examples such as Funklocker and SparkCat demonstrate that AI-generated malware has moved from a hypothetical threat to an operational risk.
Faster and more aggressive attacks
The group SCATTERED SPIDER has reemerged with a quicker, more destructive approach. Through fraudulent calls and technical support impersonation, they reset credentials, bypass multi-factor authentication, and move laterally within SaaS and cloud environments. In one documented incident, they went from initial intrusion to encrypting systems in less than 24 hours.
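An attack chain like this leaves a recognizable trace in identity logs: a credential or MFA reset followed almost immediately by a sign-in from an unfamiliar device. As a minimal illustration of the kind of continuous monitoring defenders can apply (not a technique from the report), the Python sketch below scans a simplified event list for that pattern; the event fields, sample data, and the 30-minute window are all assumptions.

```python
from datetime import datetime, timedelta

# Simplified identity-log events; field names and data are invented for illustration.
events = [
    {"user": "j.doe", "type": "mfa_reset", "time": "2025-08-06T10:02:00"},
    {"user": "j.doe", "type": "login", "time": "2025-08-06T10:11:00", "new_device": True},
    {"user": "a.kim", "type": "login", "time": "2025-08-06T09:40:00", "new_device": False},
]

WINDOW = timedelta(minutes=30)  # assumed alerting window

def find_suspicious_resets(events):
    """Flag MFA resets followed by a new-device login within WINDOW."""
    resets = [e for e in events if e["type"] == "mfa_reset"]
    logins = [e for e in events if e["type"] == "login" and e.get("new_device")]
    alerts = []
    for r in resets:
        for l in logins:
            gap = datetime.fromisoformat(l["time"]) - datetime.fromisoformat(r["time"])
            if l["user"] == r["user"] and timedelta(0) <= gap <= WINDOW:
                alerts.append((r["user"], r["time"], l["time"]))
    return alerts

for user, reset_t, login_t in find_suspicious_resets(events):
    print(f"ALERT: {user}: MFA reset at {reset_t}, new-device login at {login_t}")
```

Real deployments would pull these events from an identity provider's audit log and correlate far more signals, but the reset-then-new-device pattern is a useful first tripwire against help-desk impersonation.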
China drives the increase in cloud attacks
Cloud environment intrusions grew 136% last year, with Chinese groups responsible for 40% of this increase. Teams such as GENESIS PANDA and MURKY PANDA have exploited misconfigurations and trusted access to operate undetected, highlighting significant security management failures in many organizations.
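Many of these intrusions begin with mundane configuration mistakes, such as storage left publicly reachable. As one hedged example of the kind of routine check that can surface them (assuming an AWS environment with the boto3 SDK and configured credentials; this is not a technique cited in the report), the sketch below flags S3 buckets that lack a complete public-access block:

```python
import boto3
from botocore.exceptions import ClientError

# Minimal sketch: flag S3 buckets without a full public-access block.
# Assumes AWS credentials are configured; this checks one misconfiguration
# class only and is not a substitute for a cloud security posture tool.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"WARNING: {name} has an incomplete public-access block: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"WARNING: {name} has no public-access block configured")
        else:
            raise
```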
An urgent cybersecurity challenge
The report delivers a clear message: protecting AI infrastructure is as urgent as safeguarding servers, networks, or critical applications. Autonomous agents and AI models are not only business tools but also assets that, if compromised, can become weapons against the organization itself.
CrowdStrike emphasizes that companies must adopt security strategies that encompass data and credential protection, continuous monitoring, and auditing of AI models and agents to detect anomalous behavior before damage occurs. In a scenario where attacks can escalate from infiltration to irreversible damage within hours, rapid response and recovery capabilities are more crucial than ever.
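What might auditing an agent for anomalous behavior look like in practice? A minimal sketch, assuming activity logs that record how many actions an agent performs per hour, is to compare current activity against the agent's own baseline and alert on large deviations. All numbers and the threshold below are invented for illustration; production systems would track far richer signals (tools invoked, data accessed, credentials used).

```python
from statistics import mean, stdev

# Toy behavioral audit for an AI agent: compare the agent's current hourly
# action count against its own historical baseline and flag large deviations.
# All figures are invented for illustration.
baseline_counts = [12, 9, 15, 11, 13, 10, 14, 12]  # actions/hour over past shifts (assumed)
current_count = 220                                 # actions in the last hour

mu, sigma = mean(baseline_counts), stdev(baseline_counts)
z = (current_count - mu) / sigma if sigma else float("inf")

THRESHOLD = 3.0  # assumed alerting threshold, in standard deviations
if z > THRESHOLD:
    print(f"ANOMALY: agent ran {current_count} actions in the last hour "
          f"(baseline {mu:.1f} +/- {sigma:.1f}, z = {z:.1f}); suspend and review")
```

The same baseline-and-deviation idea extends to any measurable dimension of agent behavior, which is what makes continuous monitoring feasible even for fast-moving autonomous identities.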
Frequently Asked Questions (FAQ)
1. What is generative AI and why does it concern cybersecurity experts?
It’s a type of AI capable of creating original content (text, images, code), which attackers use to generate more convincing lures, automate tasks, and scale attacks.
2. Why are autonomous AI agents so attractive to cybercriminals?
Because they autonomously control critical processes and are integrated across multiple areas of a company, providing a direct route to sensitive data and systems.
3. Is AI-created malware really a current threat?
Yes. There are documented cases of AI-generated malicious code being used in real-world attacks.
4. What can organizations do to protect their AI infrastructure?
Implement strong authentication, segment environments, continuously monitor activity, and audit both models and agents to detect anomalous behaviors early.
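To make the segmentation and auditing point concrete, here is a hypothetical least-privilege check for agent credentials: each agent's granted scopes are compared against the policy for its segment, and anything extra is flagged for review. Every agent name and scope below is invented for illustration.

```python
# Hypothetical least-privilege audit for AI-agent credentials: compare each
# agent's granted scopes against the policy for its segment. All names and
# scopes are invented for illustration.
ALLOWED_SCOPES = {
    "billing-agent": {"invoices:read", "invoices:write"},
    "support-agent": {"tickets:read", "tickets:write", "kb:read"},
}

granted = {
    "billing-agent": {"invoices:read", "invoices:write", "users:admin"},  # over-privileged
    "support-agent": {"tickets:read", "kb:read"},
}

for agent, scopes in granted.items():
    excess = scopes - ALLOWED_SCOPES.get(agent, set())
    if excess:
        print(f"REVIEW: {agent} holds scopes outside its segment policy: {sorted(excess)}")
```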
Source: Security News