OpenAI has confirmed that its first custom chip for artificial intelligence (AI) will be ready by 2026, developed in collaboration with Broadcom and manufactured by TSMC. This marks a shift from OpenAI’s initial strategy, which had contemplated building its own network of chip fabrication plants; the company has instead opted for partnerships with established industry manufacturers.
The Challenge of Power and Supply Diversification
As one of the leading AI companies, OpenAI requires immense computing power to train and operate models like ChatGPT. Until now, the company has relied primarily on Nvidia’s GPUs, especially for training its models, the phase in which an AI system learns patterns from data. However, high demand for these chips has created availability and cost challenges, prompting OpenAI to explore alternatives.
Currently, the company is diversifying its chip supply through a combination of purchases from Nvidia, AMD, and now its own designs. OpenAI plans to utilize AMD chips via Microsoft Azure’s cloud infrastructure in response to supply issues and high costs associated with Nvidia’s GPUs. According to Reuters, OpenAI is also evaluating other agreements to continue expanding its network of partners and hardware alternatives.
Broadcom and TSMC: Key Allies for an Inference Chip
The collaboration with Broadcom is aimed at developing a chip specifically for inference, the phase in which a trained model applies what it has learned to new inputs to produce results, as opposed to the training process. While demand for training chips is currently greater, demand for inference chips is expected to grow in the coming years as AI is deployed in more contexts and applications.
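To make the training/inference distinction concrete, here is a minimal, hypothetical sketch (not OpenAI code): training runs many compute-heavy update passes over a dataset, while inference is a single cheap forward pass on new input. That asymmetry is why the two workloads can favor different hardware.

```python
def train(data, lr=0.1, epochs=100):
    """Fit y = w*x by gradient descent -- the compute-heavy phase."""
    w = 0.0
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        w -= lr * grad
    return w

def infer(w, x):
    """Apply the learned weight to a new input -- one cheap pass."""
    return w * x

data = [(1, 2), (2, 4), (3, 6)]   # toy dataset following y = 2x
w = train(data)
print(round(infer(w, 5), 2))      # close to 10.0
```

In a large model the same split holds at vastly greater scale: training loops over enormous datasets for weeks, while each inference request is a single pass, so a chip optimized for inference can trade raw training throughput for per-query efficiency.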

Broadcom, which already collaborates with tech companies like Google in chip design, will assist OpenAI in optimizing the design of its inference chip and ensuring manufacturing capacity with TSMC. The latter, recognized as a leader in semiconductor production, has been chosen to produce OpenAI’s first chip.
AMD and Nvidia Competition in the AI Market
OpenAI also plans to incorporate AMD’s new MI300X chips into its infrastructure through Azure, expanding its access to alternatives to Nvidia, which currently controls over 80% of the AI chip market. AMD’s efforts to capture a share of this market are significant, with the company projecting $4.5 billion in AI chip revenue for 2024.
This diversification strategy could help OpenAI reduce its dependence on a single supplier and address the availability challenges that the market faces. Computing costs, which encompass hardware, electricity, and cloud services, are currently the largest financial burden for OpenAI, motivating its quest to optimize resource use and collaborate with different manufacturers.
An In-House Team of Chip Experts
As part of this project, OpenAI has assembled a team dedicated to developing its chip, consisting of approximately 20 engineers and semiconductor experts, some of whom have experience creating tensor processing units (TPUs) at Google. This step allows OpenAI to work more strategically on the design of chips tailored to its specific AI needs.
Future Outlook
This hybrid approach, combining in-house chip design with purchases from AMD and Nvidia, sets a precedent in the tech industry. As demand for AI and its processing needs continues to rise, OpenAI’s move could serve as a guide for other companies looking to manage the high costs and complexities of AI hardware.
With the backing of Broadcom and TSMC, OpenAI aims to gain a competitive edge starting in 2026, the year its first AI chip will be operational. In the meantime, the company continues to explore options to solidify its position as a leader in the technological race for artificial intelligence.