In the days following CES 2026, while NVIDIA was showcasing Rubin as its next big bet for “AI factories,” a list began circulating among investors and analysts highlighting supposed “official suppliers” of the platform. Shared across social networks and market communities, the list is organized by category (photonics, memory, power silicon, electrical systems, packaging, substrates, integration, and server OEMs) and names companies that, on paper, fit the complexity of a rack-scale system like Vera Rubin NVL72.
Context is key: Rubin is not marketed as an isolated GPU but as a complete architecture comprising six chips and an ecosystem of networking, cooling, and software designed to push AI performance to industrial scale. The company describes NVL72 as a “rack that operates as a single machine” and confirms that the platform will arrive through partners in the second half of 2026.
Rubin as a “product”: less a standalone chip, more an industrial system
Rubin’s presentation emphasizes one idea: the bottleneck is no longer just silicon fabrication but assembling an entire system that can move data at extreme speeds, with colossal power consumption and the reliability demanded of critical infrastructure. That system includes the Vera CPU, the Rubin GPU, sixth-generation NVLink, the ConnectX-9 SuperNIC, the BlueField-4 DPU, and the component that explains much of the recent buzz among suppliers: Spectrum-6 Photonics with co-packaged optics (CPO).
That last piece, co-packaging the optics next to the networking silicon to cut power consumption and increase density, is what makes the supplier debate more than a stock market exercise: a technological shift of this kind demands a network of specialized partners, not something improvised in a few months.
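To see why that matters at rack scale, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (ports per rack, watts per port) is a hypothetical placeholder rather than an NVIDIA or vendor specification; the point is only that modest per-link savings multiply across the hundreds of optical ports a rack-scale fabric needs.

```python
# Back-of-the-envelope comparison: pluggable optics vs. co-packaged optics (CPO).
# All figures are illustrative placeholders, NOT NVIDIA or vendor specifications.

PORTS_PER_RACK = 144      # hypothetical count of optical ports in one rack-scale fabric
WATTS_PLUGGABLE = 15.0    # hypothetical power per pluggable transceiver, in watts
WATTS_CPO = 9.0           # hypothetical power per CPO-driven port, in watts

def fabric_optics_power(ports: int, watts_per_port: float) -> float:
    """Total power drawn by the optical interconnect of one fabric, in watts."""
    return ports * watts_per_port

pluggable_total = fabric_optics_power(PORTS_PER_RACK, WATTS_PLUGGABLE)
cpo_total = fabric_optics_power(PORTS_PER_RACK, WATTS_CPO)
savings_pct = (1 - cpo_total / pluggable_total) * 100

print(f"Pluggable optics: {pluggable_total / 1000:.2f} kW per rack")
print(f"Co-packaged optics: {cpo_total / 1000:.2f} kW per rack")
print(f"Savings: {savings_pct:.0f}% of optical-interconnect power")
```

Scaled to the thousands of racks an “AI factory” implies, that multiplication is part of why the photonics block is the best-documented corner of the ecosystem.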
What is documented: NVIDIA’s photonics ecosystem
It’s important to differentiate confirmed facts from suggestions. NVIDIA has indeed published lists of partners in its silicon photonics / co-packaged optics ecosystem, featuring names that match those in the viral “Photonics” block: TSMC, Browave, Coherent, Corning, Fabrinet, Foxconn, Lumentum, and Sumitomo Electric, among others.
This connection between Rubin and that constellation of companies is not coincidental, as Rubin integrates switches and next-generation networking where photonics becomes a key efficiency lever. However, recognizing some partners in NVIDIA’s photonics sphere does not imply that the entire list—including power, substrates, or OEMs—is “official” for Rubin in the same sense.
Packaging and testing: the bottleneck that determines timely delivery
Advanced packaging is where the bottleneck usually appears. Modern AI accelerators rely on 2.5D/3D technologies to integrate chiplets and HBM memory, which involves firms like Amkor and SPIL. NVIDIA has announced agreements and collaborations with Amkor and SPIL for packaging and testing capacity, especially as part of its strategy to strengthen its supply chain in the U.S.
At the same time, names like ASE surface frequently in discussions of advanced packaging tied to AI production cycles; Reuters has highlighted SPIL (an ASE affiliate) as a long-standing NVIDIA supplier, although the operational details for each product generation are rarely made public.
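A rough calculation helps explain why this step is so sensitive. The sketch below uses illustrative figures for interface width, pin speed, and stack count, not confirmed HBM4 or Rubin specifications; what matters is the sheer number of signals that only a silicon interposer, i.e. 2.5D/3D packaging, can route.

```python
# Why wide HBM interfaces force 2.5D/3D packaging: a rough bandwidth estimate.
# Interface width, pin rate, and stack count are illustrative, not confirmed specs.

BUS_WIDTH_BITS = 2048   # hypothetical per-stack interface width, in data lines
PIN_RATE_GBPS = 8.0     # hypothetical per-pin data rate, in Gb/s
STACKS_PER_GPU = 8      # hypothetical number of HBM stacks per accelerator

def stack_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s (width x per-pin rate / 8 bits per byte)."""
    return bus_width_bits * pin_rate_gbps / 8

per_stack = stack_bandwidth_gb_s(BUS_WIDTH_BITS, PIN_RATE_GBPS)
per_accelerator = per_stack * STACKS_PER_GPU
data_lines = BUS_WIDTH_BITS * STACKS_PER_GPU

print(f"Per stack: {per_stack:.0f} GB/s")
print(f"Per accelerator: {per_accelerator / 1000:.1f} TB/s over {data_lines} data lines")
```

Thousands of data lines per accelerator cannot be routed on a conventional board, which is why packaging and test capacity ends up pacing how quickly finished systems reach customers.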
Memory: HBM4 and the “triangle” of SK hynix, Samsung, and Micron
According to specialized media, Rubin is expected to use HBM4 memory configurations, placing major memory manufacturers—SK hynix, Samsung Electronics, and Micron—at the center of the landscape.
That said, listing these companies as “official providers” does not mean NVIDIA discloses specific contracts. The industrial pattern remains consistent: with each platform upgrade, HBM supply and integration with packaging influence capacity, pricing, and schedules.
Arm in “Compute”: a simple label for a more nuanced reality
The viral list places Arm as a “Compute” provider. Strictly speaking, Rubin includes an Arm-based CPU (Vera), and technical analyses have noted that NVIDIA combines Arm compatibility with proprietary core developments.
Once again, it is reasonable to place Arm in the technology family tree, but turning that into a concrete commercial confirmation requires a level of detail that public documents rarely provide.
Energy, integration, and OEMs: the less “glamorous” parts that make the factory work
The list also mentions “power silicon” (from Texas Instruments or Infineon to Monolithic Power Systems) and “power systems” (like Schneider Electric, Eaton, or Vertiv). Here, the fit is more conceptual: a rack like NVL72, with liquid cooling and integrated network infrastructure, drives the demand for power conversion, electrical distribution, and high-efficiency components.
On industrial integration, the thread is firmer: NVIDIA has cited collaborations with Foxconn and Wistron within its manufacturing and assembly chain for AI infrastructure, at the very least as part of its strategy for production capacity and supply resilience.
In the “Server OEMs” category, the list mentions Dell and Super Micro Computer. Both are active in AI servers, but jumping to the conclusion that they are “official Rubin suppliers” would be premature: that depends on specific agreements and SKUs, which vary by region, customer, and timing.
A prudent interpretation: more than an “official” list, a map of bottlenecks
The real value of the list is not necessarily whether each name is “locked in” for Rubin but what it reveals about the 2026 cycle: AI infrastructure depends on packaging capacity, photonics, HBM memory, power, and large-scale assembly. In that landscape, Rubin acts as a magnet, drawing attention to suppliers wherever the industry sees bottlenecks.
For a tech publication, the story isn’t about guessing ticker symbols. It’s about recognizing that the next generation of AI relies not just on transistors but on an industrial supply chain capable of delivering complete systems on time and with sustainable performance.
FAQs
What is NVIDIA Vera Rubin NVL72 and why is it described as “rack-scale”?
It’s a configuration of the Rubin platform designed so that an entire rack operates as a single machine, integrating GPU, CPU, networking, and data acceleration for massive AI workloads.
What role do silicon photonics and co-packaged optics (CPO) play in Rubin?
They enable optical links to be placed much closer to networking silicon, reducing power consumption and improving density and reliability in data center networks focused on AI.
Why is advanced packaging critical for accelerators with HBM?
Because HBM memory and chiplets need 2.5D/3D technologies for ultra-high bandwidth integration; packaging and testing capacity become bottlenecks.
Is there an official public list of Rubin’s complete suppliers?
Beyond documented partners in areas like photonics or announcements related to packaging and manufacturing, NVIDIA typically does not publish a comprehensive, category-specific list.

