Vultr has announced the availability of the AMD Instinct™ MI355X GPU, both on bare-metal servers and in cloud virtual machines, expanding its infrastructure catalog for AI, HPC, and scientific simulations. This is one of the company’s most significant launches of the year, positioning it as a competitive alternative to hyperscale providers amid high demand for accelerated computing.
CDNA 4: architecture optimized for AI and HPC
AMD’s new accelerators are built on the CDNA™ 4 architecture, which introduces improvements in energy efficiency and compute performance. Each unit features 288 GB of HBM3E memory, enabling the handling of massive datasets and cutting-edge AI models with fewer GPUs, thereby reducing operating costs.
They also support FP6 and FP4 precision formats, aimed at accelerating workloads in generative AI, inference, and large-scale model training, as well as high-complexity scientific simulations.
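To see why FP6 and FP4 matter, a back-of-the-envelope sketch helps: the fewer bits per weight, the more model parameters fit in the 288 GB of HBM3E on a single card. The figures below cover weights only and ignore real-world overheads (KV cache, activations, optimizer state), so treat them as illustrative rather than a sizing guide.

```python
# Illustrative memory math for low-precision formats on a 288 GB GPU.
# Weights only -- real deployments need headroom for KV cache,
# activations, and (during training) optimizer state.

FORMAT_BITS = {"FP16": 16, "FP8": 8, "FP6": 6, "FP4": 4}
HBM_BYTES = 288 * 10**9  # 288 GB of HBM3E per MI355X

def max_params_billions(bits: int, memory_bytes: int = HBM_BYTES) -> float:
    """Parameters (in billions) whose weights alone fit in memory."""
    return memory_bytes / (bits / 8) / 1e9

for fmt, bits in FORMAT_BITS.items():
    print(f"{fmt}: ~{max_params_billions(bits):.0f}B parameters in 288 GB")
```

At FP16 the weights of a roughly 144B-parameter model fit on one card; dropping to FP4 quadruples that ceiling, which is the "fewer GPUs for the same model" argument in practice.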
Two deployment options: bare metal and cloud GPU
Vultr offers the AMD Instinct MI355X in two main configurations:
- Bare Metal: direct hardware access with no virtualization overhead, ideal for organizations seeking maximum control and performance.
- Cloud GPU VMs: plans with up to 8 GPUs per instance, featuring ultralight virtualization that delivers near-bare-metal performance, combined with instant provisioning and on-demand scalability.
In the case of virtual machines, users also have access to snapshots, native backups, and support for nested virtualization, offering flexibility for hybrid environments and advanced testing.
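For the Cloud GPU VMs, provisioning goes through Vultr's public REST API (v2), where instances are created by POSTing a JSON body to the instances endpoint. The sketch below only builds and prints such a body locally; the `plan`, `region`, and `os_id` values are hypothetical placeholders, since the actual MI355X plan identifiers must be looked up in your own account or the plans endpoint.

```python
import json

# Sketch of a request body for Vultr's create-instance endpoint
# (POST https://api.vultr.com/v2/instances). The plan, region, and
# os_id values are placeholders, not real MI355X identifiers.
payload = {
    "region": "ewr",                # assumed region slug
    "plan": "vcg-mi355x-8gpu",      # hypothetical 8-GPU Cloud GPU plan
    "os_id": 1743,                  # hypothetical OS image ID
    "label": "mi355x-training-node",
    "backups": "enabled",           # VMs support native backups
}

body = json.dumps(payload)
print(body)
```

In a real call you would send this body with an `Authorization: Bearer <API key>` header; the same payload shape, with a different plan ID, would target a larger or smaller GPU configuration.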
Integration with the Vultr ecosystem
The new instances with MI355X integrate with Vultr’s suite of cloud services:
- Storage: Block Storage, File System, and Object Storage.
- Network: firewall, multiple VPCs, and redundant connectivity.
- DevOps: optimized images with AMD ROCm™ for accelerating AI and machine learning projects.
The company emphasizes that these plans are designed to support high-density, low-latency deployments, crucial for agentic AI environments, large language models (LLMs), and complex simulations.
Market implications
This announcement comes at a time of intense pressure on GPU supply, with Nvidia dominating through its H100 and Blackwell series, while AMD aims to differentiate itself on price-performance. For startups and companies priced out of Nvidia hardware, MI355X availability on Vultr provides a strategic alternative, with the added advantage of not depending on hyperscale providers.
Frequently Asked Questions
What’s the difference between using MI355X on bare metal or virtual machines in Vultr?
Bare metal offers maximum performance without virtualization overhead, while VMs with up to 8 GPUs allow for more flexible scaling, with immediate provisioning, snapshots, and backups.
What benefits does the HBM3E memory bring to these GPUs?
The 288 GB of HBM3E enable training larger models and managing complex datasets without bottlenecks, meaning fewer GPUs are needed for the same workload.
What workloads are AMD Instinct MI355X optimized for?
They’re ideal for generative AI, LLMs, scientific simulations, big data analysis, and other HPC workloads.
How do they compare with Nvidia’s H100 or Blackwell?
Although Nvidia maintains an ecosystem and adoption advantage, AMD focuses on achieving a better price-performance ratio and greater energy efficiency, which may be more appealing to companies looking to optimize costs without sacrificing power.