The artificial intelligence revolution isn’t measured only by the power of the chips training ever-larger models. It’s also gauged by the ability to connect thousands of GPUs as if they were a single digital brain. At this critical juncture, Nvidia has taken a decisive step: integrating light as a communication medium into its data center platforms. Its roadmap envisions silicon photonics and co-packaged optics (CPO) moving from promising innovation to structural requirement for next-generation AI infrastructure starting in 2026.
Nvidia’s presentation at the 2025 Hot Chips conference made its commitment clear: the upcoming Quantum-X InfiniBand switches, along with the Spectrum-X Photonics Ethernet platform, will usher in a new era of GPU interconnection in which copper cabling is no longer adequate and light ensures speed, efficiency, and reliability.
In large AI clusters, thousands of GPUs must operate as one machine, which means data must flow between them with minimal latency at unprecedented transfer rates. The traditional model, based on copper cables and pluggable optical modules, has become unsustainable. At 800 Gb/s port speeds, electrical losses on the underlying 200 Gb/s channels can reach 22 decibels, requiring compensation circuits that push power consumption to as much as 30 W per port and generate heat, complexity, and potential points of failure.
With CPO, the approach shifts: the optical engine is integrated directly alongside the switch ASIC, converting signals into light almost immediately. This reduces losses to just 4 decibels and cuts power per port to 9 W. The difference is significant: Nvidia estimates that CPO achieves 3.5 times greater energy efficiency, 64 times better signal integrity, ten times higher resilience, and up to 30% faster deployment, thanks to a simpler architecture that’s easier to install and maintain.
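To put those per-port numbers in perspective, here is a minimal back-of-envelope sketch in Python. The 30 W and 9 W figures come from the estimates above; the 400,000-port cluster size is a hypothetical assumption for illustration, not an Nvidia figure.

```python
# Back-of-envelope comparison of pluggable optics vs. co-packaged optics (CPO),
# using the per-port power figures cited in the article.

PLUGGABLE_W_PER_PORT = 30  # pluggable module plus compensation circuitry
CPO_W_PER_PORT = 9         # optical engine co-packaged with the switch ASIC

def interconnect_power_mw(ports: int, watts_per_port: float) -> float:
    """Total interconnect power in megawatts for a given port count."""
    return ports * watts_per_port / 1e6

PORTS = 400_000  # hypothetical large AI cluster (illustrative assumption)
pluggable_mw = interconnect_power_mw(PORTS, PLUGGABLE_W_PER_PORT)
cpo_mw = interconnect_power_mw(PORTS, CPO_W_PER_PORT)

print(f"Pluggable: {pluggable_mw:.1f} MW, CPO: {cpo_mw:.1f} MW")
print(f"Savings: {pluggable_mw - cpo_mw:.1f} MW ({pluggable_mw / cpo_mw:.1f}x)")
# -> Pluggable: 12.0 MW, CPO: 3.6 MW; savings of 8.4 MW (3.3x),
#    roughly in line with Nvidia's claimed 3.5x energy-efficiency gain.
```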
Nvidia’s roadmap closely follows TSMC’s COUPE (Compact Universal Photonic Engine) program. The strategy unfolds in three phases:
- First Generation (2026): optical engines for OSFP connectors with 1.6 Tb/s transfer and lower power consumption.
- Second Generation: integration into CoWoS packages with co-packaged optics, reaching 6.4 Tb/s at the motherboard level.
- Third Generation: targeting 12.8 Tb/s within processors, with lower latency and energy use.
Each phase moves the electro-optical conversion closer to the compute silicon, cutting latency and energy per bit: an essential leap for generative AI and multimodal models.
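As a quick illustration of that progression, a short Python sketch of the per-engine bandwidth targets listed above (the generation labels are shorthand, not official product names):

```python
# Per-engine bandwidth targets for the three COUPE generations cited above,
# with the generation-over-generation scaling factor.

roadmap_tbps = {
    "Gen 1 (2026, OSFP)": 1.6,
    "Gen 2 (CoWoS co-packaged)": 6.4,
    "Gen 3 (in-processor)": 12.8,
}

prev = None
for gen, tbps in roadmap_tbps.items():
    note = f" ({tbps / prev:.0f}x over previous)" if prev else ""
    print(f"{gen}: {tbps} Tb/s{note}")
    prev = tbps
# -> 1.6 Tb/s, then 6.4 Tb/s (4x), then 12.8 Tb/s (2x)
```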
The first major launch will come with Quantum-X InfiniBand switches expected in early 2026, delivering 115 Tb/s throughput with 144 ports at 800 Gb/s each. They will feature an ASIC capable of 14.4 TFLOPS of network processing, utilizing Nvidia’s fourth-generation SHARP protocol to accelerate collective operations with reduced latency. These liquid-cooled switches highlight the growing power density in data centers.
Beginning in mid-2026, Nvidia will roll out Spectrum-X Photonics for Ethernet, based on the Spectrum-6 ASIC. Two main models will be introduced: the SN6810, with 102.4 Tb/s of bandwidth across 128 ports at 800 Gb/s, and the SN6800, scaling up to 409.6 Tb/s with 512 ports at the same speed. Both will employ liquid cooling and are designed for massive-scale AI clusters.
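The aggregate throughput figures for all three announced switches follow directly from port count multiplied by per-port speed, as a quick Python sanity check shows (port counts and speeds as cited above):

```python
# Sanity check: quoted aggregate throughput equals ports x per-port speed.

PORT_SPEED_GBPS = 800

switch_ports = {
    "Quantum-X (InfiniBand)": 144,
    "Spectrum-X SN6810": 128,
    "Spectrum-X SN6800": 512,
}

for name, ports in switch_ports.items():
    total_tbps = ports * PORT_SPEED_GBPS / 1000
    print(f"{name}: {ports} ports x {PORT_SPEED_GBPS} Gb/s = {total_tbps:.1f} Tb/s")
# -> 115.2 Tb/s (quoted as 115 Tb/s), 102.4 Tb/s, and 409.6 Tb/s
```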
Nvidia emphasizes that silicon photonics development is no longer optional. The scale of AI clusters—with tens of thousands of GPUs working in parallel—makes architectures based on copper and pluggable modules unviable. The new CPO switches will eliminate thousands of discrete components, simplifying installation, reducing energy consumption per connection, and leading to faster cluster activation, improved reliability, and greater sustainability in data centers competing worldwide for electricity and cooling resources.
This shift isn’t exclusive to Nvidia: competitors such as AMD, which recently acquired the photonics startup Enosemi, are pursuing similar paths in integrated photonics. The move toward photonics marks a turning point: the battle between manufacturers is no longer just about GPU cores or software; it now hinges on the efficiency and scalability of the interconnects uniting these enormous artificial brains.
As generative AI, multimodal models, and systems that integrate vision, text, audio, and reasoning in real time continue to grow, ultra-efficient clusters will become even more critical. Nvidia’s investment in silicon photonics and co-packaged optics isn’t just a technical upgrade; it’s a survival strategy to maintain leadership in a data center market projected to surpass one trillion dollars over the next decade.
The future of AI hinges on light—illuminating the path and serving as the vehicle for the data that underpins digital knowledge.
Frequently Asked Questions (FAQs)
What are co-packaged optics (CPO)?
They are optical engines integrated directly with the network ASIC that convert electrical signals into light almost instantly, reducing losses, power draw, and complexity compared with traditional pluggable modules.

Why is Nvidia pushing photonics in its data centers?
Because AI clusters require connecting thousands of GPUs at extreme speeds. Copper can’t meet these demands over data center distances; silicon photonics delivers low latency and high energy efficiency.

What improvements does CPO offer over pluggable optical modules?
Up to 3.5 times greater energy efficiency, 64 times better signal integrity, ten times higher resilience, and roughly 30% faster deployment.

Which competitors are following a similar path to Nvidia?
AMD, which acquired Enosemi to advance its integrated photonics capabilities, along with other companies researching large-scale optical technologies for data centers.
Sources: Nvidia News, Tom’s Hardware.

