Broadcom has announced Tomahawk® 6 – Davisson (TH6-Davisson), its third generation of Ethernet switching with co-packaged optics (CPO) and the market’s first 102.4 Tbps system with integrated optics. The company claims it doubles the bandwidth of any CPO switch available today, with a 70% reduction in power consumption compared to traditional pluggable solutions (more than 3.5× less power per bit) and significant improvements in link stability: key ingredients for scaling AI clusters via “scale-up” (more capacity per node), “scale-out” (more nodes), and now “scale-across” (between data centers).
“By improving *link stability* and *energy efficiency*, we enable smoother and more cost-effective model training,” said Near Margalit, VP and GM of Broadcom’s Optical Systems division. “We designed this platform to *scale large AI clusters* by meeting three key optical interconnection imperatives: *higher FLOPs utilization*, *fewer disruptions*, and *greater cluster reliability*.”
Why it matters: AI has turned the network into the bottleneck
Large-scale training generates massive *east-west traffic* between *XPUs/GPUs* and storage nodes; thousands of servers constantly exchange gradients, checkpoints, and datasets. *Pluggable optics* are starting to *show strain*: higher power consumption, added latency from signal conditioning, and space and heat-dissipation constraints on the board. *CPO* integrates the optics into the *same package* as the *switch ASIC*, shortening the internal electrical paths and *eliminating* much of the signal conditioning, which *reduces losses*, *improves efficiency*, and *stabilizes links*.
What is CPO (Co-Packaged Optics): An architecture where *optical engines* are *integrated* directly onto the *same substrate* as the Ethernet *switch*, instead of using front-panel pluggable modules (QSFP-DD/OSFP). Typical advantages: *less energy and latency*, *higher density*, and *less manufacturing variability*, resulting in *fewer link flaps* (dropped connections and reconnections).
TH6-Davisson: architecture and benefits
Broadcom combines its CPO expertise with *optical engines* manufactured using *TSMC COUPE™* (Compact Universal Photonic Engine) and advanced *multi-core packaging* at the substrate level. The outcome, according to the company:
- 102.4 Tbps of *switching capacity* within a single CPO system.
- 200 Gb/s per channel (double that of its 2nd gen TH5-Bailly), with *interoperability* with DR-based transceivers and *LPO/CPO* links at 200 Gb/s per channel.
- Significantly better *link stability*—fewer *link flaps*—by removing the variability inherent in pluggable modules.
- 70% lower *power consumption* in the *optical interconnect* (more than *3.5× less* than pluggables), crucial for data centers with *power constraints*.
- Smaller *footprint* (space and routing) and *better heat dissipation* thanks to the close optical-silicon integration.
Broadcom supports these claims with a study on flaps in *TH5-Bailly* links, extending that work in Davisson by placing *optics* *in-package* with the *switch*.
Specifications (TH6-Davisson BCM78919)
- Capacity: 102.4 Tbps of *switching*.
- Optics: 16 × 6.4 Tbps *Davisson DR Optical Engines* (CPO).
- Link speed: 200 Gb/s per channel.
- Lasers: field-replaceable ELSFP (detachable laser modules).
- Cluster scalability: *scale-up* to 512 XPUs in a single tier; in two-tier topologies, over 100,000 XPUs with 200 Gb/s links (see the sketch after this list).
- Standards: IEEE 802.3; compatible with 400G/800G.
- Availability: *sampling to early-access customers*; Broadcom suggests contacting sales for samples and pricing.
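The XPU counts above follow from simple radix arithmetic. Here is a minimal sketch of that calculation; only the 102.4 Tbps capacity and 200 Gb/s link rate come from the spec list, while the two-tier layout is a generic 1:1 leaf/spine Clos, not a Broadcom reference design:

```python
# Back-of-the-envelope radix arithmetic behind the XPU counts above.
# Capacity and link rate are from the spec list; the Clos layout is a
# generic illustration, not a Broadcom reference design.

switch_capacity_gbps = 102_400          # 102.4 Tbps per TH6-Davisson system
link_rate_gbps = 200                    # 200 Gb/s per XPU link

# Single tier ("scale-up"): every switch port faces an XPU.
ports_per_switch = switch_capacity_gbps // link_rate_gbps
print(ports_per_switch)                 # 512 XPUs behind one switch

# Two-tier leaf/spine Clos at 1:1 oversubscription: each leaf splits its
# 512 ports into 256 XPU-facing ports and 256 uplinks; each spine takes
# one downlink per leaf, so up to 512 leaves fit.
xpus_per_leaf = ports_per_switch // 2   # 256
max_leaves = ports_per_switch           # spine radix = 512 leaves
print(xpus_per_leaf * max_leaves)       # 131,072 -> "over 100,000 XPUs"
```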
Energy: 70% less power in the optical interconnect
Energy efficiency comes from *reducing* signal conditioning (redrivers, retimers) and minimizing *trace lengths* and *reflections* between the *switch* and the optics. Broadcom’s numbers:
- 70% lower *power consumption* in the *optical interconnect* vs. pluggables (more than *3.5× less power*).
- Fewer auxiliary components — leading to less *complexity*, *higher MTBF*, and lower *operational costs*.
In an *AI factory* with tens of thousands of links at 800G and moving towards 1.6T, each percentage point of energy savings in the optical layer *matters*. Davisson aims to *get ahead* of the move to 1.6T with a stable, efficient *200 Gb/s per channel* base.
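To make those percentages concrete, here is a minimal back-of-the-envelope sketch. The 15 pJ/bit pluggable baseline and the 50,000-link fabric size are illustrative assumptions chosen for the example; only the 70% reduction comes from Broadcom’s claim.

```python
# Illustrative optical-interconnect power arithmetic. The 15 pJ/bit
# pluggable baseline and the 50,000-link fabric are assumptions for
# the example; only the 70% reduction is Broadcom's claim.

pluggable_pj_per_bit = 15.0                   # assumed baseline, pJ/bit
cpo_pj_per_bit = pluggable_pj_per_bit * 0.3   # 70% reduction -> 4.5 pJ/bit

link_rate_bps = 800e9                         # one 800G link
links = 50_000                                # hypothetical AI-factory fabric

def optical_power_w(pj_per_bit: float, rate_bps: float) -> float:
    """Optical power of one link in watts: (energy per bit) x (bits per second)."""
    return pj_per_bit * 1e-12 * rate_bps

pluggable_w = optical_power_w(pluggable_pj_per_bit, link_rate_bps)  # 12.0 W
cpo_w = optical_power_w(cpo_pj_per_bit, link_rate_bps)              # 3.6 W

saving_mw = links * (pluggable_w - cpo_w) / 1e6
print(f"{pluggable_w:.1f} W vs {cpo_w:.1f} W per link; "
      f"~{saving_mw:.2f} MW saved across {links:,} links")
```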
Stability: fewer flaps, more effective FLOPs
Small *link interruptions* can cause noticeable *utilization drops* in GPUs/XPUs (retries, resyncs, job requeues). By *co-packaging* the optics with the *switch ASIC*, Davisson *eliminates* some of the tolerances and variability of the pluggable chain (connectors, cages, thermals). The goal: *fewer* link-*flap* events, *greater* stability, and consequently *more effective FLOPs* and a *shorter time-to-done* (TTD) for AI jobs.
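One rough way to reason about that cost: in a synchronous training job, a single flap can stall every worker that shares the affected collective. A minimal sketch under assumed flap rates and recovery times (none of these numbers come from Broadcom):

```python
# Toy model of how link flaps erode effective FLOPs. Every number here
# is an illustrative assumption, not a Broadcom figure; the point is
# that rare per-link events multiply across a large synchronous job.

links = 50_000                     # optical links in the cluster
flaps_per_link_per_day = 0.01      # assumed mean flap rate
stall_seconds_per_flap = 30.0      # assumed job-wide stall (retry/resync)

cluster_flaps_per_day = links * flaps_per_link_per_day            # 500
stalled_seconds = cluster_flaps_per_day * stall_seconds_per_flap
day_seconds = 24 * 3600

# If every flap stalls the whole synchronous job, the usable fraction
# of the day is what remains after the stalls.
effective_fraction = 1 - stalled_seconds / day_seconds
print(f"{cluster_flaps_per_day:.0f} flaps/day -> "
      f"{effective_fraction:.1%} of peak effective FLOPs")        # ~82.6%
```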
Interoperability and “future-proofing”
Davisson operates at *200 Gb/s per channel* and can *interconnect* with *DR transceivers*, as well as *LPO* and *CPO*, while maintaining the *same per-channel rate*. This supports a *frictionless migration* from current *800G* networks (8×100G or 4×200G lanes per port) and provides a reasonable bridge to *1.6T*.
Roadmap for CPO: Broadcom indicates its *4th generation* will double *per-channel* bandwidth to *400 Gb/s*, with higher energy efficiency—making way for *1.6T/3.2T* networks and even denser fabrics.
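Both the 800G-to-1.6T bridge and the next-generation step follow from standard lane arithmetic; a minimal sketch (port speeds and lane counts are generic Ethernet figures, not Davisson specifications):

```python
# Standard Ethernet lane arithmetic: port speed = lanes x lane rate.
# Shown only to illustrate why 200 Gb/s channels bridge today's 800G
# networks to 1.6T, and why 400 Gb/s channels would do the same for 3.2T.

def port_speed_gbps(lanes: int, lane_rate_gbps: int) -> int:
    return lanes * lane_rate_gbps

print(port_speed_gbps(8, 100))   # 800G today: 8 x 100 Gb/s lanes
print(port_speed_gbps(4, 200))   # 800G on Davisson-class lanes: 4 x 200 Gb/s
print(port_speed_gbps(8, 200))   # 1.6T: 8 x 200 Gb/s lanes
print(port_speed_gbps(8, 400))   # 3.2T with 4th-gen 400 Gb/s lanes
```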
What does a hyperscaler or AI cloud gain from 102.4T CPO?
- Density and Tbps per RU: more capacity in a single chassis, fewer *top-of-rack* and *spine* units for the same bandwidth.
- pJ/bit competitiveness: lowers the energy cost per transported bit.
- MTBF and maintenance: in-package optics with *replaceable ELSFP lasers*; fewer connectors and modules = fewer failures.
- Stability: fewer *flaps* = *more useful cluster performance* and fewer *disruptions*.
- Path to 1.6T: With 200 Gb/s per channel today and 400 Gb/s in the next generation, the route to 1.6T is established.
What partners say
- Celestica: values the combination of *optical integration*, *efficiency*, and *performance* as a foundation for the next wave of AI infrastructure.
- Corning: collaborates with Broadcom on complete *faceplate-to-chip* optical assemblies for Davisson systems.
- HPE: exploring TH6-Davisson for their upcoming *HPE Networking AI-native* line.
- Micas Networks: after millions of hours of testing with *Bailly* CPO, sees a “*tipping point*” for hyperscaler adoption; Davisson arrives “at the right moment”.
- Nexthop AI: highlights the *pJ/bit* and *scalability* with their hardened *SONiC*.
- TSMC: emphasizes the role of the *COUPE™* process to achieve *efficiency* and *performance*.
Two key questions for 2026 and beyond
- CPO vs. pluggables?
Pluggable optics won’t disappear; they’ll continue to dominate certain bandwidths and ranges. But for *large-scale AI*, *CPO* currently offers a compelling combination of *energy savings*, *stability*, and *density* that’s tough to match. The question is *when* (not if) each *hyperscaler* will cross the *TCO* threshold in favor of *CPO*.
- Operation and supply chain?
CPO *changes* logistics around spare parts, manufacturing, and testing. Including *replaceable ELSFP lasers* in Davisson alleviates part of that challenge. Interoperability with *DR/LPO/CPO* at 200 Gb/s per channel also reduces the risks of gradual adoption.
Availability
Broadcom is *sampling* the TH6-Davisson BCM78919 to customers and *early access* partners. The company invites interested parties to *contact sales* for samples and pricing. More information about Broadcom’s CPO can be found on its corporate website.
Frequently Asked Questions
What exactly is a CPO switch and how does it improve over a switch with pluggable optics?
A *CPO* (“Co-Packaged Optics”) switch integrates the optical engines directly *inside* the same *package* as the Ethernet *switch* ASIC. By shortening electrical paths and removing the need for redrivers/retimers, it *reduces power*, *lowers latency*, *improves stability* (fewer *link flaps*), and *increases density* compared to pluggable modules.
How does TH6-Davisson help a real AI cluster?
With *102.4T*, *200 Gb/s per channel*, and *deep optics-silicon integration*, Davisson *absorbs bursts* (e.g., *checkpoints*), *reduces disruptions*, and *increases* GPU/XPU utilization. The result: *faster training* and *better TCO*.
Does it fit with current 400/800G networks?
Yes. It’s *IEEE 802.3 compliant* and *interoperable* with *400G/800G* transceivers, as well as *DR/LPO/CPO* links at 200 Gb/s per channel. This enables *gradual migration*.
What’s the roadmap to 1.6T?
Broadcom is developing its *4th-generation* CPO with *400 Gb/s per channel* (double Davisson’s bandwidth), paving the way toward *1.6T* with *better per-bit efficiency*.
In summary: TH6-Davisson aims to be the *new benchmark* for *optical switching* in *AI networks*: *102.4 Tbps* per system, *70% less power* in optical interconnects, enhanced stability, and a clear path toward *400 Gb/s per channel*. If these figures are confirmed in production, the combination of *bandwidth*, *pJ/bit*, and *reliability* could accelerate hyperscalers’ shift to *CPO* as the foundation of their “AI factories.”