Hewlett Packard Enterprise: Accelerating AI training with NVIDIA-powered turnkey solutions.

Hewlett Packard Enterprise (HPE) has launched a supercomputing solution for generative artificial intelligence, specifically designed for large enterprises, research institutions, and government organizations. The new system is meant to accelerate the training and tuning of AI models using private data sets. It combines a software suite that lets customers train and fine-tune models and develop AI applications with liquid-cooled supercomputers, accelerated compute, networking, storage, and services, helping organizations unlock the value of AI more quickly.

Justin Hotard, Executive Vice President and General Manager of HPC, AI & Labs at Hewlett Packard Enterprise, highlighted the need for specific solutions to effectively support generative AI, emphasizing the collaboration with NVIDIA to deliver a native AI solution that significantly accelerates the training of AI models and their outcomes.

Software Tools to Drive AI Applications

Software tools are key components of this generative AI supercomputing solution. Integrated with HPE Cray supercomputing technology, based on the same powerful architecture used in the world’s fastest supercomputer and powered by the industry-leading NVIDIA Grace Hopper GH200 superchips, the system offers the scale and performance required for large AI workloads such as training large language models (LLMs) and deep learning recommendation models (DLRMs). Using HPE’s Machine Learning Development Environment on this system, the 70-billion-parameter Llama 2 model was fine-tuned in less than 3 minutes, directly translating to a faster time to value for customers. HPE’s advanced supercomputing capabilities, supported by NVIDIA technology, improve system performance by 2-3X.
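To see why fine-tuning a model of this size demands a multi-node supercomputer in the first place, a rough back-of-the-envelope memory estimate can be sketched in Python. The byte counts below are standard figures for full fine-tuning with the Adam optimizer in mixed precision, not HPE benchmark data:

```python
def finetune_memory_gb(params_billion: float,
                       weight_bytes: int = 2,    # bf16 weights
                       grad_bytes: int = 2,      # bf16 gradients
                       optim_bytes: int = 12) -> float:
    """Rough training-state memory for full fine-tuning with Adam:
    weights + gradients + fp32 master copy and two moment buffers,
    before activations are even counted."""
    per_param_bytes = weight_bytes + grad_bytes + optim_bytes
    return params_billion * 1e9 * per_param_bytes / 1e9  # gigabytes

# Llama 2 70B: roughly 1.1 TB of training state, far beyond the
# memory of any single accelerator -- hence the multi-node design.
print(finetune_memory_gb(70))  # → 1120.0
```

The exact footprint depends on the parallelism strategy and optimizer sharding used, but the order of magnitude is what motivates scaling a single job across many GH200 nodes.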

Ian Buck, Vice President of Hyperscale and HPC at NVIDIA, commented on how generative AI is transforming all industrial and scientific efforts, and how the collaboration with HPE will provide customers with the performance needed to make breakthroughs in their generative AI initiatives.

A Comprehensive AI Solution

The generative AI supercomputing solution is an integrated offering, specifically built for AI, that includes end-to-end technologies and services:

– AI/ML Acceleration Software: A set of three software tools that will help customers train and fine-tune AI models and build their own AI applications.
– Designed to Scale: Based on the HPE Cray EX2500, an exascale-class system, and with the industry-leading NVIDIA GH200 Grace Hopper superchips, the solution can scale up to thousands of graphics processing units (GPUs) with the ability to dedicate full node capacity to a single AI job, for a faster time to value.
– Real-Time AI Network: HPE Slingshot Interconnect offers a high-performance, Ethernet-based network designed to support exascale-class workloads. This tunable interconnect supercharges system performance by enabling an extremely fast high-speed network.
– Turnkey Simplicity: The solution is complemented by HPE Complete Care Services, providing global experts for setup, installation, and full lifecycle support to streamline AI adoption.
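The "dedicate full node capacity to a single AI job" point reduces to simple node arithmetic. A minimal sketch, assuming the quadruple-GH200 node configuration mentioned for this solution (the function name and figures are illustrative):

```python
import math

GPUS_PER_NODE = 4  # quadruple NVIDIA GH200 node configuration

def nodes_for_job(total_gpus: int, gpus_per_node: int = GPUS_PER_NODE) -> int:
    """Whole nodes to dedicate when a single AI job is granted
    full-node capacity (partial nodes round up)."""
    if total_gpus <= 0:
        raise ValueError("total_gpus must be positive")
    return math.ceil(total_gpus / gpus_per_node)

print(nodes_for_job(2048))  # → 512
```

Because capacity is allocated in whole nodes, a job asking for 2,049 GPUs would occupy 513 nodes, with the remainder of the last node left idle for that job.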

Towards a More Sustainable Future of Supercomputing and AI

By 2028, AI workload growth is estimated to require approximately 20 gigawatts of power within data centers. Customers will need solutions that offer a new level of energy efficiency to minimize their carbon footprint. Energy efficiency is central to HPE’s computing initiatives: its liquid-cooled solutions can achieve up to a 20% improvement in performance per kilowatt compared to air-cooled solutions, while also consuming 15% less energy.
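One way to read the two figures together is as performance delivered per unit of energy consumed. The sketch below combines them naively, treating the 20% performance-per-kilowatt gain and the 15% energy reduction as independent factors; the source states them as separate measurements, so this is an illustration, not an HPE metric:

```python
def perf_per_energy_gain(perf_per_kw_gain: float = 0.20,
                         energy_reduction: float = 0.15) -> float:
    """Relative work per unit of energy vs. an air-cooled baseline,
    assuming the per-kilowatt gain and the overall energy reduction
    compound independently (a simplification)."""
    return (1 + perf_per_kw_gain) / (1 - energy_reduction)

print(round(perf_per_energy_gain(), 2))  # → 1.41
```

Under that simplifying assumption, a liquid-cooled system would do roughly 40% more work per unit of energy than its air-cooled counterpart.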

Currently, HPE builds many of the world’s most energy-efficient supercomputers, using direct liquid cooling (DLC), a feature included in the generative AI supercomputing solution. This technology not only cools systems efficiently but also reduces energy consumption for compute-intensive applications.

HPE is uniquely positioned to help organizations deploy the most powerful computing technology to advance their AI goals while reducing their energy usage. This combination of industry-leading performance, energy efficiency, and comprehensive support puts HPE at the forefront of the AI revolution, providing enterprises, research institutions, and government organizations with the tools needed to drive innovation and make significant advancements in their generative AI initiatives.

With this new turnkey solution, powered by the quadruple NVIDIA Grace Hopper GH200 Superchip configuration, HPE not only accelerates AI training but also sets a new standard for AI adoption in research centers and large enterprises, ensuring a significant improvement in time to value and accelerating training by 2-3 times. HPE’s promise to deliver more sustainable supercomputing technology is a testament to its commitment to responsible innovation and the advancement of computing for the future of AI. Source: HPE

