The UALink Consortium has officially released the UALink 200G 1.0 specification, a new open standard for high-performance interconnects designed to address the growing scalability and efficiency challenges of artificial intelligence (AI) workloads and high-performance computing (HPC). The headline capability: it allows up to 1,024 accelerators to be connected within a single cluster, at 200 Gbps per lane.
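To put those headline numbers in perspective, here is a back-of-envelope calculation. The 200 Gbps lane rate and 1,024-accelerator maximum come from the article; the lane count per port is a hypothetical value chosen for illustration, not a figure from the specification.

```python
# Back-of-envelope bandwidth math for a maximally sized UALink cluster.
LANE_GBPS = 200        # signaling rate per lane (UALink 200G 1.0)
LANES_PER_PORT = 4     # hypothetical x4 port width -- an assumption
ACCELERATORS = 1024    # maximum cluster size per the spec

port_gbps = LANE_GBPS * LANES_PER_PORT        # bandwidth of one port
pod_tbps = ACCELERATORS * port_gbps / 1000    # aggregate, one port each

print(f"Per-port bandwidth: {port_gbps} Gbps")        # 800 Gbps
print(f"Aggregate across the cluster: {pod_tbps:.1f} Tbps")  # 819.2 Tbps
```

The takeaway is simply scale: even with a single modest-width port per accelerator, a full cluster aggregates hundreds of terabits per second across the fabric.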
Unlike NVIDIA’s proprietary NVLink standard, UALink is emerging as an open, scalable, and multi-vendor alternative, driven by a consortium that includes giants like AMD, Intel, Apple, Google, AWS, Microsoft, Meta, HPE, and Cisco, among others.
An Architecture Designed for the New Era of AI
With the rise of large language models (LLMs), generative AI, and the need for real-time inference, interconnect infrastructure between accelerators has become a bottleneck. UALink proposes a low-latency and high-bandwidth architecture, designed from the ground up to optimize the performance of large-scale clusters, reduce costs, and minimize energy consumption.
Among its main advantages are:
- Deterministic Performance: UALink achieves up to 93% of peak bandwidth as effective throughput, combining latencies comparable to PCIe switches with Ethernet-class speed.
- Energy and Cost Efficiency: A compact design and a lightweight protocol stack reduce chip area, lower total cost of ownership (TCO), and simplify system architecture.
- Open and Collaborative Standard: More than 85 companies are developing UALink-compatible accelerators and switches, promoting interoperability and shared innovation.
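The 93% efficiency figure quoted above translates directly into usable throughput per lane. The snippet below works through that arithmetic; the comparison efficiency for a generic fabric is a made-up illustrative value, not a benchmark.

```python
# Effective per-lane throughput at the ~93% efficiency figure cited
# for UALink, versus a hypothetical less-efficient fabric.
PEAK_GBPS = 200
UALINK_EFFICIENCY = 0.93       # figure from the consortium
OTHER_EFFICIENCY = 0.75        # illustrative assumption for comparison

ualink_effective = PEAK_GBPS * UALINK_EFFICIENCY
other_effective = PEAK_GBPS * OTHER_EFFICIENCY

print(f"UALink effective rate: {ualink_effective:.0f} Gbps")   # 186 Gbps
print(f"Comparison fabric:     {other_effective:.0f} Gbps")    # 150 Gbps
```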
Boosting the Open AI Ecosystem
UALink is also notable for its memory-semantics approach, which enables direct peer-to-peer communication between accelerators through read, write, and atomic operations. This makes it a key tool for distributed architectures and for cloud service providers looking to scale their AI capabilities without relying on proprietary solutions.
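To make "memory semantics" concrete, the toy model below sketches what load/store-style peer access means: one accelerator issues reads, writes, and atomics directly against a peer's memory, with no explicit message exchange. The class and method names are purely illustrative; they are not the UALink protocol API.

```python
class PeerMemory:
    """Toy model of a remote accelerator's memory reachable over the fabric."""

    def __init__(self, size: int):
        self.mem = [0] * size

    def read(self, addr: int) -> int:
        # Load directly from the peer's address space.
        return self.mem[addr]

    def write(self, addr: int, value: int) -> None:
        # Store directly into the peer's address space.
        self.mem[addr] = value

    def atomic_add(self, addr: int, delta: int) -> int:
        # Fetch-and-add: returns the previous value.
        old = self.mem[addr]
        self.mem[addr] = old + delta
        return old

# One accelerator updates a counter living in a peer's memory.
peer = PeerMemory(size=16)
peer.write(0, 41)
previous = peer.atomic_add(0, 1)
print(peer.read(0))   # 42
print(previous)       # 41
```

The contrast with a message-passing interconnect is that here there is no send/receive pair to coordinate: the operation itself is the communication, which is what makes the model attractive for fine-grained sharing between accelerators.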
“UALink is the only interconnect solution with memory semantics optimized for AI that is also open and efficient in power and costs,” explains Kurtis Bowman, president of the UALink Consortium.
According to data from the Dell’Oro Group, this specification “directly addresses the scalability challenge posed by modern AI,” with an interconnection model designed to grow at the pace required by next-generation computing.
A Real Alternative to NVLink
With NVIDIA’s dominance in AI accelerators, many industry observers have raised concerns about dependence on its closed ecosystem, particularly the NVLink interconnect. UALink 200G 1.0 represents a concrete and collaborative response from the rest of the industry, with a vision focused on standardization, interoperability, and technological sovereignty.
The standard is now publicly available at ualinkconsortium.org, and the first compatible products are expected to hit the market in the coming quarters. For companies, data centers, and cloud providers looking to build more open, modular, and sustainable AI infrastructures, UALink is positioned as a key component of the future.