Pure Storage, a leader in advanced data storage technology, has announced that it has joined the Ultra Ethernet Consortium (UEC), an initiative hosted by the Linux Foundation. The consortium is dedicated to developing an open, accessible Ethernet-based architecture to accelerate critical artificial intelligence (AI) and high-performance computing (HPC) applications that require intensive data processing.
As a storage platform supporting the most advanced AI initiatives, Pure Storage is committed to contributing to the definition and integration of UEC's technological standards. The company will work on developing a platform that not only meets the consortium's standards but also optimizes the performance of enterprise AI and HPC workloads over Ethernet, enabling faster innovation and reduced time to market.
Importance for the industry
As companies work toward their AI goals, they face significant challenges with existing networking solutions, which are difficult to manage and scale and therefore limit the flexibility and efficiency that growing AI workloads demand.
To address these challenges, UEC aims to advance Ethernet technology, offering a scalable and efficient solution to support innovation in AI.
The growing adoption of Ethernet in data centers, driven by its lower total cost of ownership, broad interoperability, and proven reliability, has made it the foundation of many of the world's largest AI clusters. Advancements in Ultra Ethernet standards will allow companies to optimize their existing AI and HPC investments while deploying high-performance applications, driving innovation, boosting productivity, and improving operational efficiency.
Key highlights
By contributing to the standardization and growth of high-performance Ethernet for large-scale AI and HPC initiatives, Pure Storage will accelerate platform development to support UEC’s long-term standards, enabling:
- Faster innovation: As AI and HPC workloads accelerate, Ultra Ethernet-powered AI infrastructure will shorten task completion times and allow more frequent iteration, speeding up model creation, training, and inference. This translates into faster innovation, faster time to market, and faster time to insight.
- Cost-effective and efficient operations: Higher compute, network, and link speeds, combined with lower latency and greater network utilization, give companies cost-effective, easily manageable, high-performance Ethernet-based networking and storage for AI and HPC.
- More choices for customers: Customers will have more networking options across InfiniBand (IB) and Ethernet, allowing them to extend their existing Ethernet network investments to the AI and HPC applications they are adopting.