The Taiwanese company arrived at the show with modular solutions ready for deployment in enterprise, cloud, and artificial intelligence environments, underscoring its evolution into a full-stack IT infrastructure provider.
At Computex 2025, MSI unveiled its most ambitious range of enterprise servers to date, consolidating its position as a comprehensive infrastructure solutions provider. Under the banner of a modular, open, and workload-optimized future, the company showcased its new platforms for AI, cloud data centers, and enterprise computing at booth #J0506, with configurations built to EIA, OCP ORv3, and NVIDIA MGX standards, as well as new DGX stations for advanced AI.
From Racks Ready for Deployment to Modular Infrastructures
MSI has made a strong commitment to complete rack integration, presenting pre-configured, thermally optimized systems for a range of workloads:
- EIA 19”: for private cloud environments and virtualization.
- OCP ORv3 21”: with an efficient 48V power architecture and OpenBMC compatibility, ideal for hyperscale deployments.
- AI Racks with NVIDIA MGX: optimized for large-scale training and inference workloads, built on a modular architecture and NVIDIA Spectrum-X networking.
Core Computing and Open Compute for Cloud Infrastructure
MSI has expanded its Core Compute line with six servers based on the DC-MHS architecture, equipped with AMD EPYC 9005 and Intel Xeon 6 processors, in 2U4N and 2U2N form factors. These platforms offer:
- High computing density, with either liquid or air cooling.
- Compatibility with OCP DC-SCM, PCIe 5.0, and DDR5.
- Solutions tailored for private, hybrid, and edge cloud.
One of the standout models is the CD281-S4051-X2, a two-node ORv3 server with twelve E3.S NVMe drives per node and OpenBMC-compatible management.
AI Platforms with NVIDIA MGX and DGX Stations
MSI’s AI portfolio includes MGX 2U and 4U systems supporting up to 8 dual-width PCIe 5.0 GPUs, as well as the new CT60-S8060 DGX Station, built around the NVIDIA GB300 Grace Blackwell superchip.
Featured Models:
- CG480-S5063 (Intel) and CG480-S6053 (AMD):
  - Up to 8 dual-slot GPUs.
  - 32 DDR5 modules and up to 20 NVMe bays.
  - Designed for LLM training, NVIDIA OVX, and digital twin simulation.
- CG290-S3063:
  - Compact 2U MGX server.
  - Supports 4 GPUs, targeting edge AI.
- DGX Station CT60-S8060:
  - Up to 20 PFLOPS of AI performance.
  - 784 GB of unified memory.
  - ConnectX-8 networking at 800 Gb/s.
  - Operates as a standalone local station or a multi-user R&D node.
Full Alignment with the NVIDIA Ecosystem
MSI’s strategic alignment with NVIDIA is reflected in its solutions’ compatibility with NVIDIA AI Enterprise, MGX, DGX, and Spectrum-X, providing a comprehensive, scalable infrastructure for generative AI adoption, foundation model training, and HPC workloads.
Conclusion
With this rollout, MSI demonstrates that it has moved beyond traditional hardware toward a full-stack provider model, delivering infrastructure ready for AI, cloud, and edge computing that combines flexibility, performance, and thermal efficiency.