Intel doubles down on AI with giant HBM-packed packages

Intel wants to gain influence in the artificial intelligence race through a different approach than pure manufacturing nodes: advanced packaging. According to information published by the South Korean media ETNews, Intel Foundry is preparing 120 × 120 mm packages for AI chips, a format designed to incorporate more logic and, especially, more HBM memory into a single assembly. The key point is that this leak aligns with the roadmap that Intel itself has already begun to showcase in official documentation for AI and HPC customers.

The strategic interpretation is clear. In the current accelerators market, the bottleneck is no longer just about manufacturing the most advanced chip, but also about being able to join large silicon blocks, memory, and I/O within increasingly bigger, more expensive, and more challenging-to-produce packages. This is where Intel believes it can compete against TSMC’s dominance in CoWoS, a technology that remains the industry standard but also faces capacity and cost tensions as demand for AI chips grows.

One key clarification: moving from 100 × 100 mm to 120 × 120 mm does not increase surface area by 20%, but by 44%, because area scales with the square of the side length. That jump helps explain why this news is so significant. It’s not just a small format adjustment but a substantial growth in available area to mount more chiplets, additional memory stacks, and a much more complex interconnection network. The potential consequence is straightforward: higher bandwidth, greater capacity, and, if all goes well, improved performance for AI workloads.
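The area math is easy to verify. A minimal sketch in Python, using only the dimensions from the report:

```python
# Growing each side from 100 mm to 120 mm scales the area by
# (120/100)^2 = 1.44, i.e. a 44% increase, not 20%.
old_side_mm = 100
new_side_mm = 120

old_area = old_side_mm ** 2  # 10,000 mm^2
new_area = new_side_mm ** 2  # 14,400 mm^2

increase_pct = (new_area / old_area - 1) * 100
print(f"Area increase: {increase_pct:.0f}%")  # -> Area increase: 44%
```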

The real business isn’t just in silicon

For months, Intel has emphasized that the future of AI will not be solely about more transistors, but rather a combination of process technology, power, memory, and packaging. At its Intel Foundry Direct Connect 2025 event, the company highlighted EMIB-T as one of its new packaging strategies to meet future needs for high-bandwidth memory. Shortly after, in technical materials tailored for AI and HPC, Intel detailed a roadmap with complexes over 8 times reticle size in 2026, with packages around 120 × 120 mm capable of holding 12 HBM stacks.

This detail is relevant because it places the current year’s announcement within a pre-existing strategy, not as a last-minute development. Intel has also shared even more ambitious projections for 2028, with complexes over 12 times reticle size and even larger packages. The revised documentation includes estimates ranging from configurations with 16 or more HBM4/HBM5 stacks to roadmaps targeting over 24 stacks in formats larger than 120 × 180 mm, always as plans subject to change.

The critical aspect is not just the final number of HBM stacks, but what it takes to manufacture such packages. Increasing package size complicates thermal management, mechanical stability, power delivery, and manufacturing yield. In packages this large, issues like substrate warpage, signal integrity, or voltage drops cease to be engineering details and become key factors in industrial viability. Intel is aware of this and is promoting EMIB-T as a targeted solution to these challenges.

What EMIB-T offers and why Intel considers it a strategic advantage

EMIB, Intel’s embedded silicon bridge technology, is not new. The company has been using it in production since 2017 and presents it as an alternative to large interposers. The EMIB-T version adds TSVs (through-silicon vias) to improve power delivery and facilitate high-speed die-to-die links with HBM4 and beyond. Intel claims that this architecture also allows converting designs from other packaging approaches with less redesign than expected.

Intel’s twofold advantage over CoWoS is clear. First, EMIB-T avoids dependence on a large silicon interposer beneath the entire package, which can increase costs and complexity as size increases. Second, it enables placing interconnect silicon only where necessary, with a structure Intel sees as more efficient for large-format packages. In a technical blog from March 2026, the company mentioned that EMIB-T can provide wafer utilization and cost advantages specifically for these giant AI packages.

This doesn’t mean Intel has completely solved advanced packaging issues or can displace TSMC overnight. TrendForce notes that CoWoS remains the leading platform today and will likely continue to be the main solution for high-bandwidth products from NVIDIA and AMD in the near term. However, it also observes that the growth of AI is prompting parts of the market to explore alternatives like EMIB due to capacity, size, and cost limitations.

Where do NVIDIA and AMD fit into this?

Time to temper the excitement. Neither Intel, NVIDIA, nor AMD has announced any specific agreements for these future GPUs or accelerators to adopt this 120 × 120 mm packaging. What exists now is a mix of industrial logic, technical roadmaps, and market speculation. Intel talks about open technologies for external clients and the possibility of mixing chiplets from different foundries. It also clarifies that it can offer packaging services even if the silicon is not manufactured by Intel. But that does not confirm any customer relationship.

The hypothesis regarding NVIDIA makes sense because the demand for memory and bandwidth continues to grow. The Blackwell generation already uses eight HBM3e stacks, and NVIDIA has shown that Rubin will make a big leap in bandwidth with HBM4. In parallel, TrendForce indicates that market pressure is leading to consideration of larger packages and exploring alternatives to CoWoS for future accelerators or ASICs. Yet, linking this move to a specific NVIDIA GPU or a potential contract with AMD is premature.
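The reason stack count matters is that aggregate memory bandwidth scales roughly linearly with the number of HBM stacks. A back-of-the-envelope sketch, where the per-stack throughput figures are illustrative assumptions rather than vendor specifications:

```python
# Aggregate HBM bandwidth scales roughly linearly with stack count.
# Per-stack TB/s values below are illustrative assumptions, not specs.
PER_STACK_TBPS = {"HBM3e": 1.0, "HBM4": 2.0}

def aggregate_bw_tbps(generation: str, stacks: int) -> float:
    """Assumed aggregate bandwidth in TB/s for a given stack count."""
    return PER_STACK_TBPS[generation] * stacks

# 8 stacks, the count the article cites for the Blackwell generation
print(aggregate_bw_tbps("HBM3e", 8))   # -> 8.0
# 12 stacks, the count reported for the 120 x 120 mm format
print(aggregate_bw_tbps("HBM4", 12))   # -> 24.0
```

Even with conservative per-stack numbers, moving from 8 to 12 (or later 16–24) stacks multiplies both bandwidth and capacity, which is the whole point of a larger package.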

Still, the core idea behind the news seems solid. Intel recognizes that advanced packaging is becoming an increasingly valuable part of the AI supply chain. If top node manufacturing remains dominated by a few players, large-format packaging could become the next battleground. Successful execution may give Intel a more credible entry point into the AI infrastructure of the coming decade, even if it’s not the primary manufacturer of the die itself.

Frequently Asked Questions

What does it mean that Intel wants to produce 120 × 120 mm packages for AI?

It means Intel aims to offer much larger packages to integrate more chiplets and HBM memory into a single assembly. This enables designing more ambitious AI and HPC accelerators, though it also increases thermal, electrical, and mechanical complexity.

What is EMIB-T and how does it differ from regular EMIB?

EMIB-T is an evolution of EMIB that adds TSVs to the silicon bridge and improves vertical power delivery. Intel is targeting it specifically at HBM4, HBM4e, and very high-speed die-to-die links in large packages for AI applications.

Has Intel confirmed that NVIDIA or AMD will be customers of this packaging?

No. Intel has confirmed the technology and its packaging roadmap but has not publicly announced that NVIDIA or AMD will adopt these particular packages. Any relationship with future products from those companies remains speculative.

Why is advanced packaging so important for AI chips?

Because modern accelerators rely on combining high computational capacity with large amounts of HBM and very high-bandwidth internal links. Without advanced packaging, it’s not enough to have a cutting-edge node; the chip simply cannot scale to meet the demands of current AI workloads.

via: etnews
