Intel aims to enter Google’s TPU market, but the challenge lies in packaging

The possible involvement of Intel in Google’s upcoming TPU supply chain once again shifts focus to a part of the semiconductor industry that until recently was outside the main conversation: advanced packaging. It’s no longer enough to just manufacture the chip on a cutting-edge node. In the era of Artificial Intelligence, actual performance also depends on how multiple dies, high-bandwidth memory, substrates, and the interconnects between them are brought together inside the same package.

Analyst Ming-Chi Kuo has quantified this challenge for EMIB-T, an evolution of Intel’s packaging technology. According to his analysis, Intel has achieved a 90% yield in technology validation, a positive sign but still far from the 98% that Kuo considers the benchmark for volume production, in line with mature FCBGA packaging. The difference may seem small, but in advanced manufacturing it is significant.

Google aims to cut costs against NVIDIA

The industry interpretation is clear: Google is seeking more control and lower costs for its AI accelerators. Its TPUs have become a central component for training and deploying models within its infrastructure, competing in cloud services against NVIDIA. If Google can improve the cost per chip, per rack, or per token served, it gains margin in a market where compute demand continues to grow and GPU availability remains a strategic factor.

In recent months, reports have indicated a redesign of Google’s TPU supply chain. Reuters already noted in 2025 that Google was preparing a collaboration with MediaTek for a new generation of TPUs, expected to enter production the following year, partly due to MediaTek’s relationship with TSMC and potential cost advantages over Broadcom. More recently, discussions between Google and Marvell have also been reported, focused on chips aimed at inference.

Kuo’s comments add another layer: Google may be evaluating how much it can save by doing the tape-out of the main compute die for Humufish directly, rather than through MediaTek. In simple terms, tape-out refers to the final design phase sent to manufacturing. If Google aims to eliminate intermediate margins at this stage, the message is clear: the company is trimming costs to the maximum.

This shift makes sense. While hyperscalers develop their own chips to reduce dependence on NVIDIA, savings are no longer measured solely by the cost of each accelerator. They are measured in millions of units, data center capacity, power consumption, memory availability, and operational costs over years. On that scale, a seemingly small improvement can change the economics of an entire hardware generation.

EMIB-T: why a 90% yield isn’t enough

EMIB, short for Embedded Multi-die Interconnect Bridge, is Intel’s technology for connecting multiple chiplets within a package via tiny silicon bridges embedded in the substrate. Unlike solutions based on a full silicon interposer, as used in many 2.5D packages, EMIB aims to offer high-density connections with lower cost and greater design flexibility.

EMIB-T adds TSVs, or Through-Silicon Vias: vertical connections that pass through the silicon to improve power delivery and enable more demanding designs. The promise is attractive for large AI accelerators, where multiple dies, memory, power, and high-speed links must be integrated without significantly increasing cost or complexity.

The challenge lies in manufacturing yield. A 90% yield in technology validation sounds promising, especially for such a complex technology. However, Kuo emphasizes two points: validation yield is not the same as production yield, and moving from 90% to 98% can be harder than going from zero to 90%. In advanced packaging, every percentage point matters, because packages are costly, complex, and contain high-value components.
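The economics behind that point can be sketched with simple arithmetic: when a package fails at final assembly, the entire bill of materials inside it, compute dies and HBM included, is typically scrapped. The dollar figures below are illustrative assumptions, not real costs.

```python
# Hypothetical illustration of why each yield point matters in advanced
# packaging. All numbers are assumptions for this sketch, not real figures.

def cost_per_good_package(bill_of_materials: float, yield_rate: float) -> float:
    """Effective cost of one good package when a failure scraps the whole BOM."""
    return bill_of_materials / yield_rate

# Assume a package carrying compute dies plus HBM worth $5,000 in components.
bom = 5_000.0

at_90 = cost_per_good_package(bom, 0.90)  # ≈ $5,556 per good unit
at_98 = cost_per_good_package(bom, 0.98)  # ≈ $5,102 per good unit

print(f"90% yield: ${at_90:,.0f} per good package")
print(f"98% yield: ${at_98:,.0f} per good package")
print(f"Cost premium at 90% vs 98%: {at_90 / at_98 - 1:.1%}")
```

Under these assumptions, the eight-point yield gap translates into roughly a 9% higher cost per shipped unit, which compounds across millions of accelerators.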

The comparison with FCBGA helps clarify the standard. If EMIB-T aims to replace or complement mature solutions in mass production, it cannot settle for a yield acceptable only for prototypes or validation. It needs to approach the reliability, repeatability, and cost levels expected by clients like Google. For a TPU deployed at scale, a low yield isn’t just more expensive; it also complicates capacity planning, delivery, and scaling.

TSMC, MediaTek, and Intel: a more complex supply chain

The potential involvement of Intel does not mean TSMC will disappear from the scene. In fact, the scenario described by Kuo points toward a more fragmented and strategic supply chain. TSMC would continue playing a critical role in advanced nodes, while Intel would compete in the packaging space with EMIB-T. MediaTek, meanwhile, likely acts as a design partner and possible intermediary at certain phases.

Kuo suggests TSMC might reserve some capacity for Humufish in the second half of 2027. The reason isn’t just demand but also uncertainty around the actual yield of the downstream packaging. If the backend cannot deliver sufficient volume, reserving too much advanced process capacity could be an inefficient allocation of scarce resources.

This is a key point. In advanced semiconductors, bottlenecks are no longer always in the wafer. They can be in CoWoS, EMIB, substrates, HBM memory, interconnects, electrical capacity, thermal validation, or final assembly. For AI chips, the entire supply chain is only as strong as its most limited link.
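The weakest-link idea can be expressed in one line: the chain ships no more than its most constrained stage allows. The stage names and monthly capacities below are illustrative assumptions, not real data.

```python
# Hedged sketch: effective output of a multi-stage supply chain is bounded by
# its slowest stage. Capacities are made-up monthly unit counts.

def effective_output(capacities: dict[str, int]) -> tuple[str, int]:
    """Return the bottleneck stage and the volume the whole chain can ship."""
    bottleneck = min(capacities, key=capacities.get)
    return bottleneck, capacities[bottleneck]

monthly_capacity = {
    "wafers": 120_000,
    "advanced_packaging": 80_000,  # e.g. CoWoS or EMIB slots
    "hbm_stacks": 95_000,
    "final_assembly": 110_000,
}

stage, volume = effective_output(monthly_capacity)
print(f"Bottleneck: {stage}, shippable units: {volume:,}")
```

In this sketch, plenty of wafer capacity is stranded because packaging caps output at 80,000 units, which is exactly why hyperscalers now scrutinize the backend as closely as the node.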

TSMC dominates much of the advanced packaging with CoWoS, a technology that has become essential for AI accelerators. Intel is trying to carve out space with EMIB and related solutions, emphasizing cost, modularity, and scalability. If Intel can demonstrate yields close to mass production, it could gain a notable role as an alternative or complement in very large designs.

Why this debate matters for the AI market

The story isn’t just about Google and Intel. It reflects a broader industry tension: AI is pushing major buyers to rethink hardware economics. NVIDIA retains a significant advantage in GPUs, networking, software, and ecosystem, but its largest clients also seek proprietary alternatives to reduce dependency, improve margins, and tailor chips to their internal workloads.

Google has been developing TPUs for years. Amazon promotes Trainium and Inferentia. Microsoft is working on Maia. Meta designs its own accelerators. Broadcom, Marvell, MediaTek, and others compete to develop AI ASICs for hyperscalers. Intel, striving to recover ground in foundry and packaging, has a real opportunity if it can demonstrate competitive technology, cost, and yield.

Advanced packaging will be a decisive battleground. AI models demand more memory, bandwidth, and chip-to-chip communication. As accelerators grow beyond the limits of a single die, how the pieces are interconnected becomes as critical as the transistors themselves. EMIB-T, CoWoS, and other solutions are not simply engineering details—they are the technologies that determine who can build large, efficient, and cost-effective AI accelerators.

Caution remains necessary. Currently, this is an analysis of supply chains and a developing technology, not a confirmed contract from Google or Intel. Moreover, Kuo himself warns that the 90% figure should be interpreted cautiously. There is a significant gap between validating a technology and deploying it in volume, as required for Google’s TPU.

Nevertheless, the move makes sense. If Google wants to compete directly with NVIDIA on cost and capacity, it cannot focus solely on chip design. It must control the entire supply chain: design, manufacturing, packaging, yield, cost, and available capacity. In this context, EMIB-T could be an opportunity for Intel but also a rigorous test. The jump from 90% to 98% yield will determine whether the promise remains in validation or becomes a real business opportunity.

Frequently Asked Questions

What is EMIB-T?
EMIB-T is an evolution of Intel’s EMIB technology for advanced chip packaging. It uses embedded silicon bridges and TSVs to connect dies within the same package with high density and improved power delivery.

Why does the gap between 90% and 98% yield matter?
Because a 90% validation yield might be promising during development, but mass production demands much higher levels. For costly and complex AI chips, each failure increases costs and reduces the effective volume available.

What is Humufish?
Humufish is presumed to be the codename for a next-generation Google TPU, based on supply chain analysis and reports. It is not an officially announced product with complete specifications.

Can Intel replace TSMC in Google’s TPUs?
Not necessarily. The scenario suggests more of a hybrid chain: TSMC would continue manufacturing at advanced nodes, while Intel would compete in the packaging segment with EMIB-T.
