Intel today launched its latest artificial intelligence (AI) infrastructure products, the Xeon 6 processor with Performance-cores (P-cores) and the Gaudi 3 AI accelerator, aimed at businesses looking to improve efficiency and performance in their data centers. With these launches, Intel reinforces its commitment to an open ecosystem designed to deliver more performance per watt and lower total cost of ownership (TCO) for AI and high-performance workloads.
The Xeon 6 and Gaudi 3 are positioned as key pillars of the industry's digital transformation, addressing the growing need for accessible and scalable infrastructure. Justin Hotard, Executive Vice President and General Manager of Intel's Data Center and Artificial Intelligence Group, said: "The demand for AI is driving a massive transformation in the data center, and the industry needs alternatives in hardware, software, and development tools. With the launch of Xeon 6 and Gaudi 3, Intel enables its customers to manage all their workloads with higher performance, efficiency, and security."
The Xeon 6 processor with P-cores delivers twice the performance of its predecessor and is designed to handle compute-intensive workloads such as AI, high-performance computing (HPC), and databases. Xeon 6 offers more cores, larger caches, and higher memory bandwidth thanks to support for Multiplexed Rank DIMM (MRDIMM) technology. These advances allow companies to optimize their AI systems and reduce energy consumption, which is particularly important in power-constrained data center environments.
The Gaudi 3 AI accelerator, for its part, is designed specifically for large-scale generative AI, offering up to 20% more throughput and roughly twice the price/performance of the NVIDIA H100 for inference of models such as Llama 2. With 64 tensor processor cores and eight matrix multiplication engines, Gaudi 3 is optimized to accelerate deep neural network computation. It also integrates 128 GB of HBM2e memory and 24 ports of 200-gigabit Ethernet for scalable networking.
Gaudi 3 integrates seamlessly with AI frameworks such as PyTorch and supports advanced models from Hugging Face, making it easier for large organizations to develop and deploy AI. Intel is also collaborating with IBM to offer Gaudi 3 as a service on IBM Cloud, allowing companies to reduce costs and optimize AI performance.
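To make the PyTorch and Hugging Face integration concrete, the following is a minimal sketch of how running a Hugging Face text-generation model on a Gaudi device might look, assuming the Intel Gaudi software stack (which exposes the "hpu" device to PyTorch via habana_frameworks.torch) and the transformers library are installed; the model name and generation settings are illustrative only, not Intel's benchmark configuration.

```python
# Sketch: running a Hugging Face causal LM on a Gaudi ("hpu") device via PyTorch.
# Assumes the Intel Gaudi / Habana PyTorch bridge and transformers are installed.
import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device with PyTorch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative choice; any causal LM works
device = torch.device("hpu")

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
).to(device)

inputs = tokenizer("Intel Gaudi 3 is designed for", return_tensors="pt").to(device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
    htcore.mark_step()  # flush lazy-mode graph execution on the HPU

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The key point of the sketch is that existing PyTorch code targets Gaudi simply by moving tensors and models to the "hpu" device, rather than requiring a separate programming model.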
With these solutions, Intel aims to provide scalable and flexible AI infrastructure. The combination of Xeon 6 and Gaudi 3 offers competitive price/performance options, enabling companies to optimize their workloads while minimizing total cost of ownership. Intel has also partnered with original equipment manufacturers (OEMs) such as Dell Technologies and Supermicro on co-engineered systems that simplify deployment.