OpenAI and Broadcom Sign Partnership to Deploy 10 GW of AI Accelerators Designed by OpenAI: Ethernet at the Core and a Timeline Through 2029

OpenAI and Broadcom announced a multi-year collaboration to develop and deploy 10 gigawatts (GW) of AI accelerators and network systems in OpenAI data centers and partner facilities. The joint announcement, datelined San Francisco and Palo Alto on October 13, 2025, states that OpenAI will design the accelerators and systems, while Broadcom will co-develop and manufacture the racks and contribute its Ethernet, PCIe, and optical portfolio to scale the clusters in both “scale-up” and “scale-out” configurations. Rack deployments are planned to begin in the second half of 2026 and be completed by the end of 2029.

The company behind ChatGPT emphasizes that building its own chips lets it embed what it has learned from developing cutting-edge models and products directly into the hardware, with the goal of unlocking “new levels of capacity and intelligence.” The announcement comes with a signed term sheet for racks that incorporate the OpenAI-designed accelerators and Broadcom’s network solutions, signaling progress in the supply chain toward open, scalable, and efficient clusters.

Sam Altman, co-founder and CEO of OpenAI: “Partnering with Broadcom is a critical step toward building the infrastructure that unlocks AI potential and delivers real benefits to people and businesses. Developing our own accelerators adds to the ecosystem of partners creating the capacity needed to push the frontiers of AI.”

Hock Tan, President and CEO of Broadcom: “The collaboration with OpenAI marks a decisive moment in the pursuit of general artificial intelligence. We are excited to co-develop and deploy 10 GW of accelerators and next-generation network systems to pave the way for the future of AI.”

Greg Brockman, co-founder and President of OpenAI: “By building our own chips, we can embed what we have learned by creating cutting-edge models and products directly into the hardware, unlocking new levels of capacity and intelligence.”

Charlie Kawwas, President of Broadcom’s Semiconductor Solutions Group: “Custom accelerators combine exceptionally well with standards-based scale-up and scale-out Ethernet solutions to deliver cost- and performance-optimized AI infrastructure. The racks incorporate Broadcom’s end-to-end portfolio of Ethernet, PCIe, and optical connectivity.”


What does “10 GW” mean for AI accelerators (and why Ethernet)?

The 10 GW figure conveys the industrial scale OpenAI is planning for: it refers to the total electrical capacity targeted for AI clusters over several years, distributed across OpenAI’s own facilities and partner data centers. In practice, this magnitude does not correspond to a single campus; it spans multiple sites whose racks integrate accelerators and networks connected via Broadcom Ethernet and optical links.
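To put the figure in perspective, here is a minimal back-of-envelope sketch. Every number in it (per-accelerator power, facility overhead) is an illustrative assumption of ours, not a figure from the announcement:

```python
# Back-of-envelope: roughly how many accelerators could 10 GW power?
# All per-device figures are illustrative assumptions, not announced specs.

total_power_w = 10e9           # 10 GW of planned electrical capacity
pue = 1.2                      # assumed facility overhead (cooling, power delivery)
watts_per_accelerator = 1_000  # assumed ~1 kW per accelerator, all-in at the rack

it_power_w = total_power_w / pue
accelerators = it_power_w / watts_per_accelerator

print(f"~{accelerators / 1e6:.1f} million accelerators")  # ~8.3 million under these assumptions
```

Under those assumptions, 10 GW translates to several million accelerators, which is why the announcement frames the figure in power rather than unit counts.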

Choosing Ethernet for the interconnect, rather than proprietary technologies, is a key point of the announcement. Broadcom calls it a cornerstone for standardizing cluster scaling in both directions: scale-up (consolidation and bandwidth within nodes and racks) and scale-out (growth across racks and domains). With Ethernet and PCIe as foundations, Broadcom says it can offer more open growth paths, with cost/performance optimization and energy efficiency. A rough sense of what scale-out looks like in practice is sketched below.
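To illustrate what scale-out with standardized Ethernet implies, this sketch computes the host capacity of a classic non-blocking two-tier leaf-spine fabric. The switch radix is an assumption chosen for illustration, not a Broadcom product specification:

```python
# Sketch: host capacity of a non-blocking two-tier leaf-spine Ethernet fabric.
# The 64-port switch radix is an illustrative assumption, not a product spec.

radix = 64                   # ports per switch

hosts_per_leaf = radix // 2  # half of each leaf's ports face hosts, half face spines
max_leaves = radix           # each spine port connects to a distinct leaf
max_hosts = max_leaves * hosts_per_leaf

print(f"Non-blocking two-tier fabric: up to {max_hosts} hosts")  # 2048 at radix 64
```

Growing past that ceiling means adding a third switching tier, and part of Ethernet’s appeal is that such multi-tier topologies, along with the tooling to operate them, are already standard practice across the industry.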


Timeline and scope: from 2026 to 2029, with “home-grown” racks

  • Design and systems: OpenAI defines the architecture of the accelerators and the complete systems (racks), “embedding” insights from its frontier models and products.
  • Co-development and supply: Broadcom contributes development, manufacturing, and its connectivity stack (Ethernet, PCIe, optical) to turn these designs into production racks.
  • Deployment: deliveries begin in H2 2026, with completion targeted by the end of 2029.
  • Scope: OpenAI facilities and partner data centers, pointing to a multi-site footprint for training and serving large-scale models.

The company notes that co-development and supply agreements with Broadcom are already in place, and both parties have signed a term sheet for rack deployments combining the accelerators with Broadcom’s networking.


Why is OpenAI designing its own accelerators?

OpenAI’s message is clear: feed lessons from software back into hardware. Creating custom accelerators, rather than buying off-the-shelf solutions, pursues three goals:

  1. Fit for real workloads: aligning microarchitecture, memory, interconnect, and software with the demands of pre-training, fine-tuning, and inference for frontier models.
  2. Roadmap control: the ability to iterate alongside foundry/packaging partners and network suppliers without waiting out long product cycles, prioritizing what OpenAI’s own workloads require.
  3. Operational efficiency: aligning the entire stack (accelerator, Ethernet, PCIe, optical, software) to reduce latency, bottlenecks, and energy cost per token or training step (see the sketch after this list).
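On the third point, here is a minimal sketch of the efficiency metric in question, energy per generated token. Both inputs are illustrative assumptions, not OpenAI or Broadcom figures:

```python
# Sketch of the efficiency metric named above: energy per generated token.
# Both inputs are illustrative assumptions, not OpenAI or Broadcom figures.

cluster_power_w = 1_000_000    # assumed 1 MW slice of an inference cluster
tokens_per_second = 2_000_000  # assumed aggregate token throughput of that slice

joules_per_token = cluster_power_w / tokens_per_second
print(f"{joules_per_token:.2f} J per token")  # 0.50 J/token under these assumptions
```

Shaving even fractions of a joule per token compounds enormously at the usage levels described later in this article.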

As Greg Brockman puts it, the value lies in “embedding learned insights” directly into silicon and systems to reach “new levels of capacity and intelligence.”


Broadcom: Ethernet as the backbone of the AI data center

For Broadcom, this collaboration reinforces two key points:

  • The importance of custom accelerators for next-generation AI workloads.
  • The choice of Ethernet as the technology for scale-up/scale-out in AI centers, thanks to its standardization, ecosystem, and economies of scale.

The company emphasizes a comprehensive “end-to-end” portfolio that includes high-performance Ethernet switching, PCIe interconnects within racks, and optical solutions for high-capacity, low-latency links between racks and pods.


A boost for the mission (and global demand)

OpenAI ties the Broadcom partnership to its mission of making general AI benefit “all of humanity,” positioning it as a step toward meeting growing global demand for AI capacity. The company says it has surpassed 800 million weekly active users, with strong adoption across enterprises, SMBs, and developers, a level of usage that demands sustained infrastructure and predictable capacity.

While software has driven the product acceleration, the infrastructure (power, chips, networking, data centers) remains the bottleneck. This announcement addresses that bottleneck with volume, timing, and standardization.


What the statement doesn’t say (and why it matters)

The press release does not specify process nodes, deployment locations, or accelerator parameters (foundry process, HBM memory, TDP, packaging). Nor does it clarify capex by phase or PUE targets for the racks. Such details typically surface in technical roadmaps and later updates once the term sheet advances into contracts and milestones.

The timeline (H2 2026 to late 2029) involves multiple cycles of architectural and manufacturing iteration, with clear supply risks around materials, packaging, and HBM. It also depends on the energy and transmission infrastructure available at the sites. Choosing Ethernet mitigates some risk by leaning on widely adopted standards, but it does not eliminate the challenges of hyperscale orchestration.


Strategic insights: in-house capacity, open standards, and supply chain partners

  • In-house capacity: accelerators designed by OpenAI reduce dependence on external roadmaps for frontier models.
  • Open standards: Ethernet/PCIe/optical as the base helps reduce total costs and supports scaling within a broad ecosystem.
  • Supply chain partners: the alliance with Broadcom secures a single supplier of end-to-end networking and systems that co-develops with OpenAI, following proven models from the hyperscaler era.

For the market, the key message is clear: OpenAI not only competes in models and platforms, but also invests in the infrastructure that makes them possible. This strategy will influence competition over cost per inference/training and time to market in the coming years.


Frequently Asked Questions (FAQ)

What does it mean that OpenAI and Broadcom will deploy “10 GW” of AI accelerators?
It refers to the total electrical capacity targeted for racks incorporating the accelerators and networking over several years (H2 2026 to late 2029), distributed across OpenAI’s own facilities and partner data centers. It is not a single campus; it involves multiple locations with phased growth.

Why is Ethernet chosen for scale-up and scale-out in AI clusters?
The announcement highlights Ethernet for its standardization, ecosystem, and cost/performance. Combining custom accelerators with Ethernet and PCIe enables cost- and performance-optimized next-generation infrastructure without relying on proprietary technology, easing scaling and operation.

When will racks start arriving, and when will deployment finish?
Deployments are expected to start in H2 2026 and complete by late 2029, with ongoing co-development and supply throughout that period.

What benefits does OpenAI gain by designing its own accelerators?
OpenAI claims it can “embed learned insights” from frontier models and products directly into hardware, which improves fit for workloads, control over the roadmap, and potentially efficiency (cost/latency/energy) at hyperscale.
