Apple’s security has just entered a new phase. Not because its chips are “broken,” nor because of an active attack campaign against Mac users, but because a research firm claims to have demonstrated that artificial intelligence can drastically accelerate the development of advanced exploits against one of the company’s most ambitious defenses.
The case revolves around Memory Integrity Enforcement, known as MIE, Apple’s new protection against memory corruption attacks on its A19 and M5 chips. Calif, a firm specializing in offensive research and red teaming, claims to have developed a working kernel exploit against macOS 26.4.1 running on actual Apple M5 hardware, with MIE enabled, in just five days. The team reportedly used Claude Mythos Preview, an Anthropic cybersecurity-oriented AI model offered within Project Glasswing, to assist with the work.
The story has circulated with eye-catching figures, such as an alleged $2 billion investment by Apple in MIE or a $35,000 API bill to build the exploit. Caution is advised, however: Apple has not disclosed an official investment figure for MIE. The company has explained that this defense is the result of years of combined hardware and software work and represents one of the largest security bets introduced on its consumer platforms.
What is MIE and why does Apple consider it so important?
Memory corruption has long been one of the most dangerous families of vulnerabilities in modern operating systems. Many sophisticated attacks on iOS and macOS have exploited memory errors to escape isolation, escalate privileges, or execute code in critical system areas.
Apple has been strengthening its platforms with defenses like Pointer Authentication Codes, kernel protections, Safari hardening, process isolation, and improvements in memory allocators. MIE takes a further step by combining hardware and software changes to detect misuse of memory and make it harder for a bug to become a practical exploit chain.
Apple itself introduced MIE as a defense designed to block many techniques used in real-world exploits. The company explained that internal tests had attempted to reproduce known public attack chains and failed to make them work against the new mitigation. This is a strong claim but does not promise invulnerability.
This nuance is key to understanding Calif’s work. According to the researchers, MIE functioned as intended. Their exploit was not a trivial bypass but an alternate route that evaded the defense model through a local privilege escalation chain. Calif mentions two chained vulnerabilities and a path leading from an unprivileged user to root access, though full technical details will be released only after Apple patches the flaws.
For a technical audience, this is the most interesting point: MIE does not seem to have failed in a simple way. Rather, it forced researchers to seek a different approach. That’s precisely what effective mitigations aim for: they don’t eliminate all errors but increase exploitation costs, reduce available techniques, and compel attackers to pursue more complex routes.
Mythos Preview and the changing economics of exploits
The element that makes this case more than just a story about Apple is the role of Mythos Preview. Anthropic launched Project Glasswing in April 2026 as an initiative to put advanced AI models into the hands of defenders, critical software maintainers, and security organizations. Their core argument: if models can find vulnerabilities, defensive teams should have early access before attackers do.
Anthropic describes Claude Mythos Preview as a frontier model particularly strong in programming, reasoning about complex software, and cybersecurity tasks, and claims the model has already identified thousands of zero-day vulnerabilities in critical software during its evaluations and controlled deployments. The UK-based AI Security Institute also published an assessment in April highlighting significant advances in CTF-style testing and multi-step attack simulations.
Calif’s case fits into this scenario. AI does not replace expert researchers but changes their productivity. A team with deep knowledge of macOS, Apple Silicon mitigations, and kernel exploitation can use an advanced model to generate hypotheses, review attack paths, reason about flaws, automate parts of analysis, and speed up testing. The difference isn’t that “AI hacks on its own,” but that it shortens the time from idea to a working proof.
This shift has enormous consequences. Until now, developing a kernel exploit against a modern platform with active hardware mitigations was a job for highly specialized teams, often taking weeks or months of analysis. If a model can cut that cycle to days, the balance between discovery, exploitation, and patching changes significantly.
For defenders, the advantage is clear: early fault detection, better prioritization, and more intense testing of mitigations. For attackers, the promise is equally enticing. That’s why Anthropic emphasizes limited access and a defensive approach with Project Glasswing. The problem is that capabilities evolve rapidly. Other models will emerge, some more open, others potentially usable without the same controls.
Implications for Apple and the industry
Apple’s unique position in security stems from controlling silicon, the operating system, and much of the update distribution process. This integration enables deployment of mitigations like MIE in ways few manufacturers can match. It also allows for coordinated patching when serious vulnerabilities are found.
However, this same integration makes each new security advance a high-value target for researchers and exploit buyers. A reliable bypass against modern protections in iOS or macOS can fetch millions on private or gray markets, especially if it can be part of a full chain targeting high-value devices. Calif estimates that an exploit of this caliber could be valued between $5 million and $10 million, though this should be seen as an offensive market estimate rather than a confirmed price for this particular case.
The industry should take away a less sensational but more practical idea: hardware mitigations remain essential, but their evaluation cannot continue at traditional speeds. If AI models accelerate offensive research, companies must also speed up security testing, fuzzing, code review, private report responses, and patch deployment.
Similarly, pressure on bug bounty programs will grow. If small teams can find high-impact vulnerabilities at lower cost, vendors need to encourage responsible disclosure before these findings reach opaque markets. Rewards don’t have to match gray-market prices, but they should make the legal, coordinated route the more reasonable choice.
For Mac users, the practical message is simple: there are no public signs that this exploit is circulating or being used against end users. Calif has reported the finding to Apple and is withholding technical details. The recommended actions remain: install security updates as they become available, limit the installation of unknown software, and maintain prudent permission policies.
The Mythos-M5 case is not the end of Apple’s security story. It signals where vulnerability research is heading. The best defenses will still matter, but they will be tested against adversaries aided by increasingly capable models. The next stage in cybersecurity isn’t just humans versus code, but human teams supported by AI, searching for routes that previously would have required far more time, money, and patience.
Frequently Asked Questions
What has Calif demonstrated about Apple M5?
Calif claims to have developed a local kernel exploit in macOS 26.4.1 on Apple M5 hardware with Memory Integrity Enforcement enabled. Full details will be published once Apple patches the vulnerabilities.
Has MIE been rendered useless?
No. According to Calif, MIE worked as designed. Their research points to an alternative exploitation pathway, not a complete breakdown of the technology.
What role did Claude Mythos Preview play?
Mythos Preview reportedly helped the team accelerate vulnerability discovery and exploit development. Nonetheless, the research still required advanced human expertise.
Should Mac users be worried?
There are no public signs of active exploitation. Nevertheless, it’s advisable to install Apple’s security updates promptly and avoid installing untrusted software.
via: OpenSecurity

