Meta and Broadcom strengthen their partnership to build Meta's own AI chips

Meta has clearly decided to accelerate its own AI chip strategy and will do so in partnership with Broadcom. Mark Zuckerberg’s company announced on April 14 an expansion of its collaboration with the American manufacturer to co-develop multiple generations of its MTIA (Meta Training and Inference Accelerator) accelerators, the custom chips aimed at improving performance, efficiency, and control over the infrastructure powering AI across its platforms. The partnership extends beyond silicon design to include advanced packaging and networking, two critical components for scaling large compute clusters.

This move is significant because it confirms that Meta no longer wants to rely solely on NVIDIA’s GPUs to sustain its AI growth. The company has been advocating for a “hardware portfolio” strategy, using different types of accelerators depending on the workload. In this framework, MTIA chips are primarily targeted at inference and large-scale ranking and recommendation systems—two essential tasks for Facebook, Instagram, WhatsApp, and Threads. In March, Meta had already revealed that it was developing and deploying four new generations of MTIA over two years, aiming to expand their use from recommendation to generative workloads.

The scale of the agreement helps explain why this news goes beyond a typical corporate announcement. Meta speaks of an initial deployment exceeding 1 GW of compute capacity based on its custom silicon, as the first phase of a multi-gigawatt roadmap. Broadcom adds a key detail: the partnership will last for the next three years and will leverage its XPU platform, designed to create customized AI accelerators. In investor communications, Broadcom even presents this new phase as the deployment of the first 2nm AI compute accelerator within its collaboration with Meta. While no financial figures have been disclosed, it is clear that this is a multi-generational, large-scale investment.

Meta wants more control over its AI infrastructure

This decision aligns with a growing trend among hyperscalers: building more proprietary technology to reduce costs, optimize performance, and avoid full dependence on third-party roadmaps. Google has been working on its TPUs for years, AWS pushes with Trainium and Inferentia, Microsoft has introduced Maia, and now Meta aims to position MTIA as a much more central component of its architecture. It’s not just about saving money compared to NVIDIA’s GPUs, although that’s part of it, but about designing accelerators tailored to specific workloads and integrating more closely with its network and software. Reuters summarizes this strategy as Meta’s effort to contain the costs of its AI expansion while continuing to scale up capacity.

Broadcom contributes more than just manufacturing capacity in this equation. Its strength also lies in high-bandwidth networking and interconnect technology—crucial layers as AI clusters grow larger and more complex. Meta emphasizes in its announcement that Broadcom’s advanced Ethernet technology will facilitate smooth connections among these new clusters. In other words, this isn’t just about chips; it’s about building the entire foundation on which Meta will deploy its next-generation AI infrastructure.

Hock Tan steps down from Meta’s board

The announcement also includes an unusual corporate governance development. Hock Tan, CEO and President of Broadcom, will step down from Meta’s board of directors and will instead take on an advisory role related to proprietary silicon roadmaps and infrastructure investments. Meta describes this as a logical transition given the scale of the partnership and Broadcom’s technical influence in this new phase. Reuters further reports that another board member, Tracey Travis, will not seek re-election at the upcoming shareholder meeting, though this departure isn’t directly linked to the industrial partnership.

Fundamentally, the most interesting aspect is that Meta appears to be moving from a phase of experimentation with proprietary chips to a much more ambitious mass deployment. Its initial MTIA chips were already in use for some workloads, but the plan announced in March, now reinforced with Broadcom, points toward a much faster acceleration. If successful, Meta will not only gain efficiency in recommendations and inference tasks but also achieve greater independence over a critical part of the AI value chain. In a market where infrastructure has become just as critical as models for competitive advantage, this could make a significant difference relative to other industry giants.

Implications for the market

The partnership sends a message to the industry. First, that the AI race will not be fought solely through models and end products, but with custom silicon, specialized networks, and the capacity to deploy all at gigawatt scale. Second, that Broadcom continues to solidify its position as a key winner in the AI wave—not only through connectivity and switching business but increasingly through customized accelerators for major cloud clients. And third, that Meta wants to compete at that level with much greater vertical integration than before. While public data on how much this will reduce its dependence on NVIDIA isn’t available yet, an unmistakable sign is that Meta no longer wants to be just a large consumer of AI infrastructure but also a major designer.

Frequently Asked Questions

What are Meta’s MTIA chips?
They are Meta’s proprietary accelerators, called Meta Training and Inference Accelerator, designed to run AI workloads more efficiently, especially inference and large-scale recommendation systems.

What role will Broadcom play in this partnership with Meta?
Broadcom will collaborate with Meta on chip design, advanced packaging, and high-bandwidth networking, leveraging its XPU platform to accelerate multiple future generations of MTIA.

Is Meta trying to eliminate dependence on NVIDIA with this deal?
Meta hasn’t said it will abandon NVIDIA entirely, but it is strengthening its internal silicon strategy to optimize costs and performance for certain AI workloads. Reuters interprets the partnership as a step toward reducing reliance on external GPUs.

What does the initial deployment of over 1 GW of proprietary silicon mean?
It indicates that Meta plans a large-scale deployment based on its MTIA chips in the first phase, as part of a broader multi-gigawatt roadmap to support its AI ambitions.

via: about.fb
