From scalable infrastructure solutions to edge devices and personal computers with AI acceleration, the Taiwanese company is implementing a comprehensive strategy that covers the entire lifecycle of artificial intelligence.
At COMPUTEX 2025, held from May 20 to 23 in Taipei, GIGABYTE Technology has announced an ambitious plan to lead the artificial intelligence ecosystem with a proposal that goes well beyond traditional hardware. Under the slogan “Omnipresence of Computing: AI Forward”, the company will showcase a comprehensive portfolio, ranging from large-scale model training infrastructure to edge deployment solutions and everyday applications in the home.
GIGAPOD: the core of scalable infrastructure
The heart of GIGABYTE’s proposal will be its revamped GIGAPOD, a scalable GPU cluster designed for training large AI models. The platform supports the latest accelerators on the market, including AMD Instinct™ MI325X and NVIDIA HGX™ H200, and now incorporates GPM (GIGABYTE POD Manager), a proprietary resource management and orchestration system that optimizes performance and operational efficiency at scale.
One of the main innovations is the GIGAPOD with direct liquid cooling (DLC), built on the G4L3 series servers and capable of handling chips with TDPs above 1,000 W. This configuration, introduced in collaboration with Kenmec, Vertiv, and nVent, integrates power distribution, networking, and cooling into a 4+1 rack layout, shortening deployment times with end-to-end consultancy support from GIGABYTE.
From training to deployment
Moving artificial intelligence models from training into production use requires architectural flexibility. GIGABYTE addresses this need with platforms such as:
- NVIDIA GB300 NVL72: a fully liquid-cooled rack configuration combining 72 Blackwell Ultra GPUs and 36 Arm-based Grace™ CPUs, designed for large-scale inference.
- OCP servers: including an 8OU system with NVIDIA HGX™ B200 and Intel® Xeon® processors, plus an ORV3 storage rack optimized for density and performance through a JBOD design.
Also showcased are modular solutions for various workloads:
- Accelerated computing: air- or liquid-cooled servers ready for accelerators such as Intel Gaudi 3, AMD Instinct MI325X, and NVIDIA HGX B300.
- CXL technology: systems with Compute Express Link (CXL) for memory shared between CPUs, crucial for real-time inference; a short sketch after this list shows how such memory typically surfaces to software.
- High density: multi-node servers with high-core CPUs and NVMe/E1.S storage, developed in collaboration with ADATA, Kioxia, Solidigm, and Seagate.
- Edge and cloud platforms: blade solutions and nodes designed for thermal and energy efficiency and operational versatility.
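As a rough illustration of how CXL-attached memory typically surfaces to software: on Linux it usually appears as a CPU-less NUMA node that applications can then target, for example with numactl --membind. The sketch below only inspects the NUMA topology through sysfs; treating a CPU-less node as a CXL expander is an assumption made for illustration, not something GIGABYTE specifies.

```python
# Illustrative sketch, not GIGABYTE tooling: list NUMA nodes and flag CPU-less
# ones, which is how CXL memory expanders commonly show up on Linux.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()               # empty for CPU-less nodes
    mem_line = (node / "meminfo").read_text().splitlines()[0]   # "Node N MemTotal: ..."
    tag = "CPU-less (candidate CXL expander)" if not cpus else "regular node"
    print(f"{node.name}: cpus=[{cpus or 'none'}], {mem_line.strip()} -> {tag}")
```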
AI within everyone’s reach: from data centers to homes
Beyond data centers, GIGABYTE is also bringing AI to the end user and the network edge:
- Embedded Jetson Orin™ Systems: designed for industrial automation, robotics, and real-time computer vision.
- BRIX Mini PCs: now featuring integrated NPUs, compatible with Microsoft Copilot+ and Adobe AI tools, ideal for lightweight inference.
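GIGABYTE does not detail the software stack for these machines, but the kind of lightweight local inference they target can be sketched with the open-source ONNX Runtime, assuming the NPU is exposed through an execution provider such as DirectML on Windows; the model file name and input handling below are placeholders, not a GIGABYTE-documented setup.

```python
# Minimal local-inference sketch with ONNX Runtime; falls back to CPU when no
# accelerated provider is available. "model.onnx" is a placeholder file.
import numpy as np
import onnxruntime as ort

available = ort.get_available_providers()
providers = [p for p in ("DmlExecutionProvider", "CPUExecutionProvider") if p in available]

session = ort.InferenceSession("model.onnx", providers=providers)

# Build a dummy float32 input from the model's first declared input shape,
# substituting 1 for any dynamic dimension.
first_input = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in first_input.shape]
outputs = session.run(None, {first_input.name: np.zeros(shape, dtype=np.float32)})
print("providers:", providers, "output shapes:", [o.shape for o in outputs])
```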
Also highlighted are the new Z890 and X870 motherboards, GeForce RTX 50 and Radeon RX 9000 graphics cards, and solutions such as AI TOP, a local AI computing stack that supports memory offloading and multi-node clusters to simplify complex workflows.
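GIGABYTE has not published AI TOP’s internals, so purely as an illustration of the general memory-offloading idea, here is a minimal sketch using the open-source Hugging Face stack (transformers with accelerate installed); the model identifier and offload folder are placeholders, not part of AI TOP.

```python
# General offloading illustration, not AI TOP's actual mechanism: layers that
# do not fit on the GPU stay in system RAM or are spilled to disk.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "example-org/large-model",   # placeholder model identifier
    device_map="auto",           # let accelerate split layers across GPU, CPU, and disk
    offload_folder="offload",    # directory for weights that exceed available memory
)
```

The principle is the same whether it comes from open-source tooling or a dedicated suite: keep as many layers as possible on the accelerator and stage the rest in cheaper memory tiers.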
In the consumer range, GIGABYTE will present Microsoft Copilot+ certified laptops, high-refresh-rate OLED monitors, and features such as the GiMATE AI agent, which enables direct voice control of the hardware to improve productivity and everyday usability.
An AI-first proposal, from silicon to software
With this showcase, GIGABYTE demonstrates that its business is no longer just manufacturing motherboards or graphics cards, but providing a complete, modular, and scalable platform that accelerates the adoption of artificial intelligence across every field, from training in HPC clusters to home use.
In this way, the company extends its leadership from the cloud to the edge, positioning itself as a key player in the immediate future of intelligent computing.