Artificial intelligence applied to cybersecurity has just entered a new phase. It is no longer only about assistants that write rules, summarize incidents, or help interpret alerts. Major AI labs now aim to offer models specifically tuned for defensive work, with less friction on sensitive tasks and capabilities that border on areas traditionally reserved for specialized security providers. That is the reading behind OpenAI's launch of GPT-5.4-Cyber and Anthropic's restricted deployment of Claude Mythos Preview within Project Glasswing.
On April 14, OpenAI announced the expansion of its Trusted Access for Cyber (TAC) program to thousands of verified individual defenders and hundreds of teams, along with a variant of GPT-5.4 trained to be more permissive for legitimate cybersecurity uses. Anthropic, meanwhile, introduced on April 7 a closed, invitation-only model described as especially capable in offensive and defensive security tasks, so capable, in its telling, that it cannot be opened to the general public. Both companies are entering the same market, but with distinctly different deployment strategies.
OpenAI wants verified access at scale; Anthropic prefers a much narrower perimeter
For OpenAI, the message is clear: it does not want to reserve advanced cyber defense for a very small group. GPT-5.4-Cyber will initially be deployed to security providers, organizations, and verified research groups, with TAC serving as the trust infrastructure that determines who can access the more permissive tiers of the model. OpenAI argues that cybersecurity risk depends not only on the model but also on the user, trust signals, and the operating environment, an argument that justifies a broader access strategy backed by identity verification and additional controls.
Anthropic has taken almost the opposite approach. Claude Mythos Preview remains in a closed research preview, organized through Project Glasswing, a collaborative initiative involving companies like Apple, Google, Microsoft, NVIDIA, Palo Alto Networks, and CrowdStrike to secure critical software. In practice, Anthropic presents Mythos as a capability too sensitive to open broadly, at least for now. The framing is closer to a high-sensitivity access program than to a product intended to scale rapidly across security teams of all sizes.
This difference is fundamental. OpenAI appears to be aiming for a broad market of verified defenders. Anthropic, by contrast, is managing a frontier capability whose deployment must remain carefully controlled. Neither position eliminates the sector's central tension: the more useful a model is for finding and fixing vulnerabilities, the more valuable it could also be to those seeking to exploit them.
What this means for CrowdStrike, Palo Alto, Microsoft, and the rest of the industry
The main business question isn’t whether OpenAI and Anthropic can assist security teams—that seems clear already. The real question is how much they threaten cybersecurity companies that rely on selling platforms, assisted analysts, detection, response, and managed services.
In the short term, the answer doesn't seem to be outright replacement. CrowdStrike, Palo Alto Networks, and Microsoft have all been pushing their own layers of AI applied to security operations for some time. CrowdStrike promotes Charlotte AI as an analyst assistant capable of automating triage, reducing false positives, and coordinating agents and humans within Falcon. Palo Alto Networks is advancing Prisma AIRS 3.0 as a platform providing visibility, governance, and security across the entire lifecycle of AI and autonomous agents. Microsoft offers Security Copilot as an AI solution for incident response, hunting, intelligence, and posture management, supported by over 100 billion daily signals within its ecosystem.
This indicates that major vendors aren't watching this move from the outside; they are already involved. In fact, CrowdStrike's and Palo Alto's roles as launch partners in Glasswing suggest that Anthropic isn't aiming to eliminate these players but to collaborate with them in the most sensitive segment of the market. Similarly, OpenAI's TAC is designed to grant access to providers and defensive teams, among others, not to replace them outright.
However, this doesn't mean there is no risk to their businesses. There are probably three key areas of concern.
The first is the commoditization of low- and medium-value work. If models like GPT-5.4-Cyber or Mythos substantially reduce the cost of tasks such as initial binary reviews, preliminary vulnerability analysis, reverse engineering assistance, or hypothesis generation, some of the value captured today by specialized tools or services could be compressed. This pressure is especially relevant for vendors offering limited automation, undifferentiated AppSec products, or services where a significant portion of work remains manual and repetitive. This is a reasonable inference based on the capabilities OpenAI and Anthropic market for their models and the types of tasks vendors already automate with their copilots.
The second concern is partial disintermediation. If a major AI lab can offer cyber-permissive models with verified access and sufficiently usable APIs, some organizations might question whether they need to buy certain analysis layers from security providers or prefer to integrate models directly into their own SOC, AppSec pipeline, or tooling. This wouldn’t be a full replacement, as data, telemetry, rules, correlation, and operational integration would still be missing, but it could erode margins in some segments of security software that rely more on analysis than environment control. This idea is supported by the fact that OpenAI is expanding TAC, and Anthropic is already offering Mythos to a selected set of actors.
The third is competitive pressure on product messaging. Over the past two years, many security firms have pitched their primary AI differentiation as the combination of models with proprietary data, exclusive signals, and integrated workflows. That is partly true, but the entrance of OpenAI and Anthropic makes it harder to argue that the "intelligence" layer can only come from traditional vendors. If the top cybersecurity models begin to exist outside security companies, which then have to integrate or adapt to them, their narrative shifts from "we lead security AI" to "we package it better than anyone." Subtle as it is, this shift has significant commercial implications.
The business landscape shifts, but the core persists
Nonetheless, risks for vendors shouldn’t be overstated. Neither OpenAI nor Anthropic currently holds the platform position that CrowdStrike, Microsoft, Palo Alto, or Google have in a SOC day-to-day. The major security providers still control something much harder to replicate than models: telemetry, product integration, response orchestration, compliance, managed services, and operational client relationships. OpenAI may offer a better model for specific tasks. Anthropic might provide a more powerful and restricted capability. But neither alone replaces an EDR, SIEM, XDR, MDR, or incident response operation. This conclusion is supported by the vendors’ own public offerings, which remain focused on platform, data, and integrated automation, not just on the models.
Therefore, the most likely scenario isn't that major AI labs will destroy the cybersecurity business but that they will drive a redistribution of value. The more generalist, repetitive, or copilot-driven parts of the work will be more exposed. The segments combining AI with telemetry, enforcement, networking, identity, and response will continue to matter significantly. In this sense, OpenAI and Anthropic aren't killing security vendors; they're pushing them to demonstrate that their true value isn't deploying a chatbot over the SOC but integrating AI into the layer where real decisions are made.
Frequently Asked Questions
What’s the difference between GPT-5.4-Cyber and Claude Mythos Preview?
GPT-5.4-Cyber is part of a broader verified access strategy via TAC, which OpenAI aims to extend to thousands of verified defenders. Claude Mythos Preview remains a closed, invitation-only research preview within Project Glasswing.
Do these models threaten companies like CrowdStrike or Palo Alto Networks?
In the short term, they’re more likely to threaten certain aspects of their business—basic automation, repetitive analysis, or undifferentiated tools. But major vendors still control telemetry, integration, orchestration, and operational response.
Why do CrowdStrike and Palo Alto also appear in Project Glasswing?
Because Anthropic launched Glasswing with partners including CrowdStrike and Palo Alto Networks, indicating that these large security vendors want to participate in these advanced capabilities rather than watch from the sidelines.
Can a company use these models directly and skip parts of its security stack?
In some specific workflows or analyses, integrating more capable models could reduce dependence on certain tools, but today, they don’t replace full security platforms, MDR, EDR, SIEM, or incident response operations.