AMD invests $36 million in “surgical” acquisitions to bolster its data center business and prepare for pressure from the Intel–NVIDIA pact

AMD disclosed in its latest 10-Q filing with the SEC that, in addition to acquiring ZT Systems for $4.9 billion, it allocated $36 million in 2025 to smaller acquisitions aimed at strengthening its AI and data center strategy. The Santa Clara company is executing two parallel moves: ramping up rack-scale muscle with ZT and buying specific technology pieces — photonics, AI compilers, and design talent — to accelerate its roadmap against NVIDIA and, increasingly, the new Intel–NVIDIA alliance.

The announcement follows a record quarter in which AMD reported $9.2 billion in revenue, driven by growth in its PC and EPYC server CPUs and by the progress of its Instinct data center GPUs. However, the company acknowledges in the same document that the agreement announced in September between Intel and NVIDIA — to co-develop “multiple generations” of PC and data center products — may lead to more competition and price pressure, potentially squeezing margins.

What AMD has bought for $36 million (and why it matters)

While the financial breakdown does not give amounts per transaction, AMD has disclosed three strategic moves — all complementary to ZT Systems — focused on accelerating data center AI:

  • Enosemi (photonics): strengthening co-packaged optics and photonics for next-generation AI systems. In a world where the bottleneck is the interconnect, shifting from electrical to optical links within the rack — and, in the medium term, within the package itself — is key to reducing latency, increasing bandwidth per watt, and containing total power consumption. The acquisition aligns with AMD’s rack-scale, DGX-style AI plans with ZT.
  • Brium (compilers): a direct investment in toolchains for compiling and optimizing AI kernels. The battle isn’t won on FLOPS alone; it’s won with compilers, runtimes, and graph optimizations that maximize hardware utilization (see the fusion sketch after this list). Building these capabilities in-house — rather than relying solely on open ecosystems — improves ROCm performance and narrows the gap with NVIDIA’s CUDA stack.
  • Integration of Untether AI teams: bringing in hardware and software engineers specialized in compilers/kernels and digital/SoC design. More than a product purchase, this is an investment in talent to accelerate verification, integration, and the performance of Instinct and future IP.
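To make the compiler point concrete, here is a minimal, purely illustrative sketch of operator fusion, the kind of graph rewrite an AI compiler toolchain performs. It is not AMD, Brium, or ROCm code; the function names and the NumPy check are hypothetical stand-ins.

```python
# Purely illustrative: a toy example of operator fusion, the kind of graph
# rewrite an AI compiler performs. Not AMD/Brium/ROCm code; names are made up.
import numpy as np

def scale_add_unfused(x, w, b):
    # Two separate ops: on a GPU this typically means two kernel launches and
    # a materialized intermediate tensor, i.e. extra memory traffic.
    y = np.multiply(x, w)   # op 1
    return np.add(y, b)     # op 2

def scale_add_fused(x, w, b):
    # After fusion, a compiler lowers the whole expression to a single kernel
    # that streams the data once. NumPy is used here only to verify that the
    # rewritten graph is numerically equivalent to the original.
    return x * w + b

rng = np.random.default_rng(0)
x = rng.standard_normal((1024, 1024)).astype(np.float32)
assert np.allclose(scale_add_unfused(x, 0.5, 1.0), scale_add_fused(x, 0.5, 1.0))
```

On real hardware, the unfused version pays an extra trip to memory for the intermediate tensor; fusion is one of the simplest levers a compiler has to raise utilization.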

These three vectors — photonics + compilers + talent — outline AMD’s approach: remove the bottlenecks that limit real scaling (memory and interconnect), extract performance through software (optimized compilation), and speed up execution (teams with AI chip expertise). If ZT Systems is the rack-scale “chassis,” these acquisitions are the fine mechanics that turn that chassis into a competitive platform.

ZT Systems: the “rack-scale” anchor and the Sanmina deal

The acquisition of ZT Systems has become the enabler for AMD to deliver rack-scale AI solutions built on Instinct. The company already reports meaningful wins — with OpenAI among its notable customers — and has reshaped the deal’s footprint: in October, AMD sold ZT’s manufacturing unit to Sanmina for $3 billion while retaining the design and customer-enablement teams. The message is clear: AMD wants to design, integrate, and scale full solutions, partnering for manufacturing without giving up control of the roadmap or the customer experience.

A record quarter… with geopolitical caution

In its 10-Q, AMD celebrates record revenue of $9.2 billion in Q3 2025, with notable jumps in EPYC and Instinct. But it immediately issues a warning: the Intel–NVIDIA alliance could become a catalyst that intensifies competition, with potential pricing pressure and margin impacts. It is a caution for the market: with NVIDIA dominating GPUs and Intel consolidating CPUs, a potential duopoly in combined offerings forces AMD to differentiate quickly on performance, software ecosystem, and delivery at scale.

On the client and OEM side, AMD maintains a confident outlook on its roadmap. The business focus rests on two pillars: Instinct for data center AI, and CPUs/APUs — including the “Strix Halo” family — pushing performance in laptops and desktops, with a lineup that narrows the gap with competitors’ hybrid CPU-GPU offerings.

Photonics, compilers, and talent: the three levers of the “data center plan”

The pattern of this year’s smaller acquisitions reveals the levers AMD considers critical:

  1. Photonics and CPO (co-packaged optics): The energy cost of electrical signaling and the density modern AI demands are pulling integrated optics forward on the roadmap. With Enosemi, AMD gains design capacity in transceivers, optical SerDes, and CPO architectures — essential pieces to keep the Instinct + ZT platform from choking on interconnect. At rack scale, moving from hundreds of Gbps per link to sustained multi-Tbps at better pJ/bit is the difference between a system that scales and one that collapses as nodes are added (a back-of-the-envelope power sketch follows this list).
  2. AI compilers and toolchains: The contest with CUDA plays out across three layers: compilation, kernels, and frameworks. The Brium acquisition and the addition of engineers from Untether AI strengthen the link where AMD can gain the most through software: optimal graph mapping, operator fusion, scheduling, and architecture-specific tuning. The more performance AMD unlocks through compiler optimizations, the less it depends on silicon wins alone and the more attractive its proposition becomes for customers with mixed workloads.
  3. Verification talent and SoC expertise: Exascale AI roadmaps leave no room for delays. Bringing in teams with deep experience in digital design, design verification (DV), and product integration reduces time-to-market risk, raises the odds of first-pass silicon success, and eases alignment across IP blocks (GPU, interconnect, memory) and ZT’s rack-scale architecture.
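As a rough illustration of the pJ/bit argument in point 1, here is a back-of-the-envelope power sketch. The rack size, link count, per-link bandwidth, and energy figures are assumptions chosen for illustration, not AMD or Enosemi numbers.

```python
# Back-of-the-envelope sketch of why pJ/bit matters at rack scale.
# Every number below is an assumption for illustration, not an AMD/Enosemi figure.

def link_power_watts(bandwidth_tbps: float, energy_pj_per_bit: float) -> float:
    """Power drawn by one link = bits/s * joules/bit."""
    return (bandwidth_tbps * 1e12) * (energy_pj_per_bit * 1e-12)

links = 72 * 8  # hypothetical rack: 72 accelerators, 8 links each
for pj_per_bit in (15.0, 5.0, 1.0):  # from electrical links toward better optics
    watts = links * link_power_watts(1.6, pj_per_bit)  # 1.6 Tb/s per link (assumed)
    print(f"{pj_per_bit:>4} pJ/bit -> {watts / 1000:.1f} kW just for interconnect")
```

At these assumed figures, the interconnect alone swings between roughly 1 kW and 14 kW per rack, which is why pJ/bit shows up as a first-order design metric.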

What changes with the Intel–NVIDIA pact?

Beyond the headline, what worries AMD is the market and engineering coordination that could emerge from the pact. A tandem combining NVIDIA GPUs with Intel CPUs (and potentially DPUs/networking and integrated software) could produce bundles that are hard to beat at scale if no end-to-end alternative of similar caliber exists. The risk is clear: competition shifts from “component versus component” to “solution versus solution,” where volume discounts and integration outweigh isolated spec sheets.

AMD’s response involves doubling down on two fronts:

  • Turnkey “rack-scale” AI offerings with ZT, based on Instinct, and reinforced with photonics and optimized runtimes.
  • Software and development: a more competitive ROCm, improved compilers, libraries, and certified ISVs to shorten adoption cycles and match the functionality of leading frameworks.

Signals for channels and large customers

For integrators and for mid-sized and large hyperscalers, there are three key messages:

  • Roadmap and stability: the $9.2 billion revenue figure and the momentum of EPYC + Instinct point to a favorable cycle, but competitive pressure could shift pricing. The channel should manage expectations and margins carefully if the rival alliance turns up the pressure.
  • Scalable solutions: the ZT + Instinct combination, plus the photonics and compiler improvements, points to more mature deployments in 2025–2026. Proofs of concept should emphasize sustained throughput, latency, energy per token, and deployment time rather than peak TFLOPS alone (see the sketch after this list).
  • Software ecosystem: validate framework parity and toolchain quality (compilers, kernels, libraries) on real workloads. In AI, paper benchmarks don’t pay the bills if the stack isn’t performant in practice.
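For the energy-per-token metric mentioned above, a proof of concept can compare deployments with a calculation as simple as the sketch below; the power and throughput figures are invented for the example.

```python
# Illustrative only: comparing energy per token in a proof of concept.
# The power and throughput figures below are invented for the example.

def joules_per_token(sustained_power_w: float, tokens_per_second: float) -> float:
    """Energy per generated token = sustained power / sustained throughput."""
    return sustained_power_w / tokens_per_second

systems = {
    "deployment_a": {"power_w": 120_000, "tokens_s": 900_000},
    "deployment_b": {"power_w": 135_000, "tokens_s": 1_150_000},
}
for name, m in systems.items():
    print(f"{name}: {joules_per_token(m['power_w'], m['tokens_s']):.3f} J/token")
```

Measured under an identical, sustained workload, this kind of figure travels across vendors far better than peak TFLOPS.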

Risks and opportunities (what the 10-Q hints at between the lines)

Risks

  • Pricing pressure: If Intel and NVIDIA pass their synergies through to pricing, AMD will need to defend margins through value (real-world performance, TCO, software) or through discounts that hurt profitability.
  • Execution: Integrating ZT, maturing the photonics, and consolidating the toolchains could strain resources; benefits may be delayed if the industrial ramp-up proves complicated.
  • Volatile market: Decisions by major customers (cloud providers, AI labs) can pivot for geopolitical or energy reasons rather than technical ones.

Opportunities

  • Credible CUDA alternative: If ROCm and new compilers narrow the gap, AMD’s appeal increases in cost, flexibility, and sovereignty, attracting clients wary of “vendor lock-in.”
  • Competitive rack-scale: With ZT, and having sold the manufacturing unit to Sanmina, AMD stays agile, focusing on design, integration, and support while partnering for manufacturing.
  • Photonics/CPO: Whoever delivers better interconnects, with lower pJ/bit and latency, early on will gain a decisive edge in training and at-scale inference.

Frequently Asked Questions

Which “smaller” acquisitions did AMD make in 2025, and what purpose do they serve?
AMD invested $36 million across three initiatives: Enosemi (photonics and co-packaged optics), Brium (AI kernel compiler tools), and integrating the Untether AI team (specialized in compiler/kernel engineering and SoC design). Combined, they bolster interconnection, software, and execution for its Instinct data center platform.

What is the main goal behind acquiring ZT Systems?
ZT provides the system design and customer-enablement capabilities to deploy rack-scale AI solutions based on Instinct GPUs. AMD already counts customer wins (including OpenAI) and has outsourced manufacturing to Sanmina ($3 billion sale), focusing on design and integration.

How might the Intel–NVIDIA alliance affect AMD’s data center business?
AMD acknowledges that the alliance may heighten competition and pricing pressure. A combined offering of Intel CPUs and NVIDIA GPUs (and possibly DPUs/networking and integrated software) creates compelling packages that are hard to beat at scale, forcing AMD to differentiate on performance, software ecosystem, and delivery at scale (ZT + Instinct) to defend margins and market share.

In what ways is AMD challenging NVIDIA in AI infrastructure?
Through a dual approach: hardware + integration (Instinct + ZT for rack scale) and software (ROCm, compiler/kernel tooling via Brium and Untether AI). The goal is to close the gap with CUDA, offer a competitive and open alternative, and improve TCO with end-to-end solutions and, in the medium term, photonics.

via: CRN
