NetApp introduces AFX and AI Data Engine: its “AI-ready” data platform with node-based scaling and native NVIDIA and Azure integration

NetApp announced a set of products that strengthen its enterprise data platform for artificial intelligence, with a focus on moving from pilots to production-ready agentic applications. The news comes on two fronts: NetApp AFX, an all-flash system with a disaggregated architecture built on ONTAP, and NetApp AI Data Engine (AIDE), a unified service that connects, governs, and accelerates data for AI from a single control plane. Both are aimed at speeding up retrieval-augmented generation (RAG) and inference in hybrid and multi-cloud environments, and both can be purchased outright or consumed by subscription through NetApp Keystone STaaS.

AFX: disaggregated all-flash storage for “AI factories”

NetApp AFX separates performance and capacity so that each can scale independently. The system, built on ONTAP, offers secure multi-tenancy and cyber-resilience, and is designed for linear scalability up to 128 nodes, with terabytes-per-second bandwidth and exabyte-scale capacity. AFX is certified as storage for NVIDIA DGX SuperPOD and retains ONTAP’s classic data management features, used by thousands of customers in mission-critical environments.

Complementing this, NetApp introduces DX50 data compute nodes, which power a global metadata engine that catalogs corporate data assets in real time and leverages accelerated computing. This metadata layer is designed to provide immediate visibility, reduce scattered copies, and serve as a foundation for semantic search and AI pipelines that demand high access rates.

AI Data Engine: bringing AI to the data, not the data to AI

NetApp AIDE is a data service for AI that covers everything from ingestion and preparation to serving generative applications. It offers a comprehensive, real-time view of the customer’s ONTAP environment, change detection, and duplicate-free synchronization, and it enforces security and privacy guardrails throughout the data lifecycle. It integrates with the NVIDIA AI Data Platform, including the NVIDIA AI Enterprise stack and NIM microservices, for vectorization and retrieval, adding advanced compression, semantic discovery, and policy-driven workflows.
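
NetApp has not published the programmatic interface for these pipeline steps. As an illustration only, NIM microservices typically expose OpenAI-compatible endpoints, so a vectorization step feeding retrieval could look like the following minimal sketch; the endpoint URL, API key, and model name below are placeholders, not values from the announcement.

```python
# Illustrative sketch only: assumes a NIM embedding microservice reachable at
# NIM_EMBEDDING_URL that exposes the OpenAI-compatible /v1/embeddings API.
# The URL, key, and model name are placeholders, not values from the announcement.
from openai import OpenAI

NIM_EMBEDDING_URL = "http://nim-embedding.example.internal/v1"  # hypothetical endpoint
MODEL = "nvidia/nv-embedqa-e5-v5"  # example embedding model commonly served via NIM

client = OpenAI(base_url=NIM_EMBEDDING_URL, api_key="not-needed-for-local-nim")

def embed_chunks(chunks: list[str]) -> list[list[float]]:
    """Vectorize text chunks pulled from ONTAP/AFX volumes for RAG retrieval."""
    response = client.embeddings.create(model=MODEL, input=chunks)
    return [item.embedding for item in response.data]

if __name__ == "__main__":
    vectors = embed_chunks(["Quarterly report excerpt...", "Support ticket text..."])
    print(len(vectors), "embeddings of dimension", len(vectors[0]))
```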

AIDE will run natively on the AFX cluster, on the DX50 data compute nodes, bringing computation closer to the data to reduce latency and movement costs. NetApp also previews future support for NVIDIA RTX PRO Servers with RTX PRO 6000 Blackwell Server Edition GPUs, broadening acceleration options for use cases that require more versatile compute layers.

ANF connects to Azure services without moving the data

Alongside AFX and AIDE, the company unveiled two strategic enhancements to Azure NetApp Files (ANF):

  • Object REST API (public preview): allows direct access to ANF NFS/SMB data as objects, eliminating the need to copy it to a separate object store. Datasets can thus connect to Microsoft Fabric, Azure OpenAI, Azure Databricks, Azure Synapse, Azure AI Search, Azure Machine Learning, and other services, reducing duplication and speeding up deployment (see the sketch after this list).
  • Unified global namespace in Azure with FlexCache: provides the ability to view and edit data stored in other ONTAP systems, on-premises or in other clouds, fetching only the blocks that are requested. In addition, SnapMirror enables continuous replication, disaster recovery, and load balancing between environments, suited to demanding hybrid scenarios (e.g., electronic design automation, EDA) and strict residency and latency policies.
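
NetApp has not yet published the details of the preview API. Assuming it exposes an S3-compatible interface, as ONTAP’s existing object access does, reading a file on an ANF volume as an object could look like the minimal sketch below; the endpoint, credentials, bucket, and key are placeholders, not values from the announcement.

```python
# Illustrative sketch only: assumes the ANF Object REST API preview exposes an
# S3-compatible endpoint. Endpoint, credentials, bucket, and key are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://anf-object.example.westeurope.azure.internal",  # hypothetical
    aws_access_key_id="ANF_ACCESS_KEY",
    aws_secret_access_key="ANF_SECRET_KEY",
)

# Read a file that lives on an ANF NFS/SMB volume as an object, without first
# copying it into a separate object store.
obj = s3.get_object(Bucket="finance-share", Key="reports/2025/q3.parquet")
data = obj["Body"].read()
print(f"Fetched {len(data)} bytes directly from the ANF volume")
```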

Why it matters: from pilots to production, agentic AI

The joint offering tackles three common hurdles in enterprise AI adoption:

  1. Elastic data foundation: By disaggregating performance from capacity, organizations can expand IOPS and bandwidth for intensive stages (vectorization, semantic search, LLM prefill) without over-provisioning capacity, or grow capacity without paying for performance they do not need.
  2. Governance and efficiency: AIDE turns previously manual tasks (cataloging, change detection, synchronization, data protection) into managed services with consistent policies across on-prem and cloud, reducing project lead times and manual errors.
  3. Native bridges to Azure: ANF exposes file datasets to data and AI services without migrations, while FlexCache and SnapMirror simplify workload mobility, disaster recovery, and hybrid workflows between on-premises data centers and the public cloud.

Overall, the aim is that “data is always ready for AI”: findable, vectorized, governed, and accessible from any cloud with consistent performance and protection.

Usage models and availability

Both new products will be available as a traditional purchase or via NetApp Keystone STaaS, a subscription model for on-demand consumption. The company will present sessions and demos at NetApp INSIGHT 2025 (Las Vegas, October 14–16), showing complete RAG and inference workflows on its platform with NVIDIA acceleration.

What technical teams should evaluate

  • Architecture and scaling: fine-tuning the balance among node count, effective bandwidth, and capacity will affect total cost, especially in pipelines with vectorization peaks or concurrent serving.
  • Governance and guardrails: translating policies between on-prem and cloud is crucial for regulated sectors; AIDE’s policy support helps audit and mitigate risks.
  • Data movement costs: ANF’s Object REST API reduces copies, but each Azure service brings its own access pattern, performance profile, and cost profile; it is advisable to model end-to-end performance and cost before scaling.
  • Future integrations: the announced support for RTX PRO Servers expands acceleration options; hardware decisions should align with the actual workload mix (light training, fine-tuning, long inference, multimodal agents).

In summary, NetApp aims to establish itself as the native data platform for AI: AFX as its disaggregated all-flash base and AIDE as an integrated AI data engine, with direct connections to Azure services that minimize data movement and duplication. The goal is for companies to index, search, and serve unstructured information at scale, with governance and resilience, accelerating the transition from pilots to agentic applications that deliver tangible business impact.

via: netapp
