Pure Storage has taken a step forward at its Pure//Accelerate event with a series of announcements centered around a common theme: shifting from “storing” to “managing” data so that artificial intelligence stops being a pilot and becomes an operational practice. The company (NYSE: PSTG) announced platform enhancements that extend coverage to public cloud, strengthen the control plane with automation and AI copilots, and tackle the costs and latency of AI workloads with new key-value cache integrations for multi-GPU inference and a next-generation data reduction engine.
Rob Lee, CTO of Pure Storage, summed up the message: “In the AI era, accessing data is everything. Managing it — not just storing — is the new foundation for AI readiness, spanning cloud to core and edge. Success depends on having data secure everywhere, accessible from anywhere, with a unified and consistent real-time experience at scale and for any workload.”
The proposal is supported by the concept of Enterprise Data Cloud: a unified architecture and operating model that promises end-to-end control, automation, and cyber-resilience across any environment — on-premises, hosted, or cloud — covering the full spectrum of block, file, and object data.
Shift to public cloud: Pure Storage Cloud and a native Azure service for migrating VMware workloads without refactoring
For IT leaders managing infrastructures, Pure introduces Pure Storage Cloud as the umbrella that unifies the data landscape so data is exactly where it is needed, without tying customers to specific compute, and prepares the ground for AI tools to leverage it.
The main innovation here is Pure Storage Cloud Azure Native, now available within the Azure portal. Through the Azure Native Integrations program, the company has built a “best-in-class” service for Azure VMware Solution:
- Reduces overhead and enables migration without refactoring, decoupling storage from compute.
- Offered as a native Azure service, managed end-to-end, with enterprise-grade resilience and efficiency.
This move is endorsed by Microsoft. “Organizations wanting to adopt AI and next-generation cloud services with Microsoft need to move data to Azure. For many, migrating storage-intensive VMware workloads has been a challenge,” explained Aung Oo, VP of Azure Storage. “That’s why we’re excited to partner with Pure Storage to offer a native Azure service for Azure VMware Solution. Together, we make it easier for customers to migrate large datasets to Azure and quickly take advantage of AI and analytics innovations.”
An intelligent control plane: fleet automation, scalable Kubernetes, and AI copilots integrated via MCP
Pure enhances the “brain” of its platform with a control plane that automates how data is provisioned, protected, and governed across the entire estate.
- NEW | Portworx® + Pure Fusion (GA H1 FY27). This integration extends fleet management capabilities to containerized applications and KubeVirt-based VMs, maintaining a single control plane for the data and storage of all workloads, legacy and cloud-native, across on-premises and cloud environments.
- Pure1 AI Copilot | Expansion. Launched as GA at Pure//Accelerate 2025, the copilot combines dashboards with natural-language conversation to simplify system management.
- NEW | Portworx Pure1 AI Copilot (GA now). The first AI-powered platform assistant for Portworx customers: it lets teams query Kubernetes/Portworx clusters as they would a FlashArray, monitoring at scale through instant interaction with an AI agent in the Pure1 Copilot interface.
- NEW | Integration with Model Context Protocol (MCP) (GA Q4 FY26). Pure1 AI Copilot will function as both an MCP server and client, integrating with internal systems (hardware performance, subscriptions, security) and external tools (analytics engines, application monitors). The result is a contextual intelligence layer that detects error patterns, summarizes findings, and suggests remediations through a conversational interface.
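MCP is an open, JSON-RPC-based protocol for connecting AI assistants to tools and data sources. As a rough illustration of the message shape involved (not Pure1's actual integration; the tool name and arguments below are hypothetical), an MCP client issues `tools/call` requests like this:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request, the method MCP uses
    to invoke a tool exposed by a server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool: a copilot exposing array health as an MCP tool.
msg = make_tool_call(1, "get_array_health", {"array_id": "flasharray-01"})
req = json.loads(msg)
```

In a real deployment the request would travel over an MCP transport (stdio or HTTP) and the server would answer with a matching JSON-RPC result; the sketch only shows the request envelope.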
The operational takeaway is clear: less reliance on experts for each subsystem, increased ability to provision, diagnose, or optimize from an assistant that understands the environment’s context.
Make AI “work for you”: key-value cache for multi-GPU inference and deep reduction without sacrificing performance
AI operation consumes resources — compute, storage, and power. Pure Storage tackles this with two technical solutions focused on efficiency and latency:
- NEW | Key Value Accelerator + NVIDIA Dynamo (GA Q4 FY26). A high-performance key-value cache designed to accelerate inference in multi-GPU environments. The upcoming integration with NVIDIA Dynamo aims to improve scalability and increase inference speed while reducing computational overhead and carbon footprint. NVIDIA backs the approach: "Efficient inference requires fast data access so AI agents can respond in fractions of a second," explained Dion Harris, NVIDIA senior director of HPC, Cloud & AI Infrastructure. "Integrating Pure's Key Value Accelerator with NVIDIA Dynamo provides a plug-and-play path toward faster and more scalable inference, eliminating complexity and reducing latency, to maximize NVIDIA AI infrastructure utilization."
- NEW | Purity Deep Reduce (GA H1 FY27). A next-generation data reduction engine that uses pattern recognition and similarity reduction to achieve high reduction ratios without a significant performance penalty. The result: less capacity consumed, lower costs, and reduced energy use.
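The idea behind a key-value cache for inference can be sketched in miniature: if the results computed for a repeated prompt prefix are already stored, the expensive recomputation is skipped entirely. The toy cache below is a conceptual sketch (not Pure's Key Value Accelerator) that makes the hit/miss mechanics concrete:

```python
import hashlib

class KVCache:
    """Toy key-value cache illustrating why inference servers cache
    attention K/V results: repeated prompt prefixes skip recomputation."""
    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, token_ids) -> str:
        # Hash the token sequence so it can serve as a dictionary key.
        return hashlib.sha256(repr(tuple(token_ids)).encode()).hexdigest()

    def get_or_compute(self, token_ids, compute):
        k = self._key(token_ids)
        if k in self.store:
            self.hits += 1          # served from cache: no GPU work
            return self.store[k]
        self.misses += 1
        value = compute(token_ids)  # stand-in for the expensive K/V projection
        self.store[k] = value
        return value

cache = KVCache()
fake_kv = lambda toks: [t * 2 for t in toks]  # placeholder for GPU work
cache.get_or_compute([1, 2, 3], fake_kv)      # first call: a miss, computed once
cache.get_or_compute([1, 2, 3], fake_kv)      # second call: a hit
```

A production system like the one announced would hold real K/V tensors on shared fast storage so many GPUs can reuse them, but the cost model is the same: every hit is compute avoided.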
Simultaneously, the FlashArray portfolio expands: FlashArray//XL 190 (GA Q4 FY26) and the FlashArray//X R5 and FlashArray//C R5 series (GA now). Pure's architecture is designed so that enterprise applications and modern AI/ML pipelines coexist on the same scalable infrastructure, supporting extremely low latency alongside high bandwidth and concurrency.
A unified data plane: ongoing data mobility, governance, and cybersecurity “out of the box”
Behind the announcements lies an operational vision: seamless data mobility without surprises, experience consistency, and cross-layer automation in distributed infrastructures. With the Enterprise Data Cloud, Pure promises to manage block, file, and object data by policy; enable each system and site to contribute capacity and performance through a shared virtual layer; and allow full control from a single console.
The company also emphasizes cyber-resilience (backups, immutability, logical air-gap, rapid recovery) as essential for preventing AI system failures due to security or operational incidents. In their view, “controlling the data” — not just storing it — is what enables the transition from testing to production in AI projects.
Timeline and cautions: what's already here and what's coming
Not all features are available simultaneously. Pure outlines the availability windows:
- Already available: Pure Storage Cloud Azure Native, Portworx Pure1 AI Copilot, FlashArray//X R5, and FlashArray//C R5.
- GA Q4 FY26: Pure1 AI Copilot with MCP, Key Value Accelerator + NVIDIA Dynamo, FlashArray//XL 190.
- GA H1 FY27: Portworx + Pure Fusion, Purity Deep Reduce.
The company includes a disclaimer of forward-looking statements: some functionalities may not be generally available today; timelines may vary; and purchasing decisions should be based on services and features currently available. In other words, this is a live roadmap.
What does this mean for a company seeking “controlled AI”?
- Reduced friction in moving and governing data: with Pure Storage Cloud and the native Azure service, storage-heavy VMware workloads move to Azure without refactoring applications, decoupling data from compute.
- Unified operational plane: the Portworx–Pure Fusion integration and the Pure1 copilots (now also with MCP) reduce the need for specialists per subsystem and enable declarative automation and chat-based operations.
- Faster, more efficient inference: the key-value approach with NVIDIA Dynamo addresses the latency and scale of multi-GPU inference.
- Cost control: with Deep Reduce and the FlashArray consolidation, capacity and energy consumption are optimized, aligning TCO with growing use cases.
Key takeaways for data and platform teams
- Map which data moves to Azure and how to decouple storage from compute in AVS via Pure Storage Cloud.
- Pilot Portworx with Pure Fusion in a controlled domain (microservices + KubeVirt) to validate orchestration and fleet management from a single plane.
- Activate Pure1 AI Copilot and test MCP with internal systems (performance, inventory, security) and external tools (analytics, APM) for diagnostics and guided actions.
- Evaluate the impact of the Key Value Accelerator on inference patterns: model sets, batches, context size, data bandwidth, and caching schemes.
- Measure the effect of Deep Reduce with real data (patterns and similarity) to project savings without degrading SLA.
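As a first rough proxy for the last point, a reduction ratio can be estimated on a sample of real data using fixed-size chunk deduplication plus per-chunk compression. This is not Purity Deep Reduce's algorithm, just a simple baseline to project savings against:

```python
import hashlib
import zlib

def reduction_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Estimate a data-reduction ratio from fixed-size chunk dedup plus
    per-chunk compression. A rough baseline, not Purity's engine."""
    seen = set()
    stored = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            continue                         # duplicate chunk: stored once
        seen.add(digest)
        stored += len(zlib.compress(chunk))  # unique chunk: store compressed
    return len(data) / stored if stored else 1.0

sample = b"log line: status ok\n" * 10_000   # highly repetitive sample data
ratio = reduction_ratio(sample)
```

Running this over representative datasets (logs, VM images, database dumps) gives a floor for expected reduction; similarity-based engines should do at least as well on the same data.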
Conclusion: managing data (and AI) as one thing
This announcement aims to bridge the gap between storage and AI operation. If data is the critical asset for insights, automation, and competitive advantage, the foundations must enable access “everywhere and from anywhere”, with governance and resilience by design. The native Azure support for VMware, the control plane linking Kubernetes to copilots, the key-value cache for inferences, and the next-gen data reduction engine create a platform where AI workloads coexist with enterprise applications within a single flow, with predictable costs.
There remains a journey ahead, with several pieces arriving between Q4 FY26 and H1 FY27, but the direction aligns with market demand: manage data, not just store it, and operate AI with the same principles as the rest of the business: security, governance, automation, and financial clarity.
Frequently Asked Questions
What is Pure Storage Cloud Azure Native, and how does it help migrate VMware workloads to Azure without refactoring?
It is a fully managed, Azure-native service designed for Azure VMware Solution. It decouples storage from compute, enables migration of storage-intensive workloads without refactoring, and reduces overhead, offering enterprise-grade resilience and efficiency directly through the Azure portal.
How does Portworx integrate with Pure Fusion, and what benefits does it bring to Kubernetes and KubeVirt?
The integration (expected GA H1 FY27) allows managing all data and storage for workloads, including containerized applications and KubeVirt VMs, from a single control plane with fleet management, automation, and consistent policies across local and hybrid environments.
What is the purpose of the Key Value Accelerator with NVIDIA Dynamo in AI inference?
It is a high-performance key-value cache that, when integrated with NVIDIA Dynamo (expected GA Q4 FY26), helps accelerate and scale multi-GPU inference, reducing latency and computational overhead, while maximizing NVIDIA’s infrastructure utilization.
What is Purity Deep Reduce, and how does it impact storage costs?
It is a next-generation data reduction engine (planned GA H1 FY27) that applies pattern recognition and similarity reduction to achieve high reduction ratios without significant performance loss, lowering capacity use, costs, and energy consumption.
via: investor.purestorage