Moore Threads Joins the “AI PC” Race with Yangtze: a SoC with 50 TOPS NPU and Up to 64 GB of LPDDR5X

The “AI PC” craze is no longer just a competition among Intel, AMD, Qualcomm, and Apple. In China, Moore Threads (previously known mostly for its focus on GPUs and the MUSA ecosystem) has introduced Yangtze, a SoC aimed at laptops and mini PCs that seeks to provide the “full package” for modern workloads: CPU, integrated GPU, NPU, and a multimedia block ready for high-resolution video.

According to the information released at its unveiling, Yangtze features an 8-core CPU with a maximum frequency of 2.65 GHz and, most attention-grabbing for AI PC marketing, a 50 TOPS (INT8) NPU designed to accelerate tasks such as voice and image recognition and, more generally, AI functions performed locally.

A SoC Designed for “All-in-One”: NPU, iGPU, and Multimedia Engine

The announcement describes Yangtze as a fully integrated solution, which is crucial for the laptop and mini PC market, where the balance between performance, power consumption, and cost matters more than it does in traditional desktops.

In this approach, Moore Threads highlights three key components:

  • NPU (50 TOPS): Designed as a multi-core neural engine to accelerate inference in common AI tasks.
  • iGPU: Aiming to cover both 3D rendering and acceleration of language models (LLMs) and video tasks.
  • VPU / Multimedia Engine: Supporting 8K at 30 FPS and 4K at 60 FPS, along with compatibility for H.265, H.264, and AV1.

This “core” is complemented by other typical blocks found in modern SoCs, such as a DPU for multi-screen scenarios (mentioning the possibility of dual 8K or even eight 4K, depending on interfaces), a DSP with functions like noise reduction and audio effects, and an ISP supporting camera modules up to 32 MP and HDR.

First Devices: MTT AIBook and MTT AICube

Moore Threads hasn’t stopped at just unveiling the chip. They’ve also introduced platform designs to bring it to users—and especially to integrators in the Chinese domestic market: a laptop called MTT AIBook and a mini PC MTT AICube.

Both models feature configurations with 32 GB and 64 GB of LPDDR5X memory, with bandwidth exceeding 100 GB/s. This is significant because, in “local AI” devices, memory (capacity and bandwidth) can be the real practical limit—especially when dealing with large models, extended contexts, or multimodal pipelines.
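Why bandwidth matters can be seen with a back-of-envelope estimate: LLM token generation is typically memory-bound, since each generated token requires streaming (roughly) all model weights through memory once. The sketch below uses illustrative figures only — the model size, quantization, and the ~100 GB/s bus are assumptions taken from the article's general claims, not Yangtze measurements.

```python
# Memory-bound upper bound on LLM decoding speed:
# tokens/s ≈ memory bandwidth / bytes of weights read per token.
# All figures are illustrative assumptions, not vendor specifications.

def max_tokens_per_second(params_billion: float, bytes_per_param: float,
                          bandwidth_gb_s: float) -> float:
    """Crude ceiling on tokens/s, ignoring compute, KV cache, and overheads."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# A 7B-parameter model quantized to 4 bits (0.5 bytes/param)
# on a ~100 GB/s LPDDR5X bus:
print(round(max_tokens_per_second(7, 0.5, 100), 1))  # ~28.6 tokens/s ceiling
```

The same arithmetic shows why capacity matters too: an 8-bit 7B model (~7 GB of weights) halves that ceiling and starts crowding 32 GB systems once context and OS overhead are included.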

What Moore Threads Is Trying to Achieve (Beyond Just Numbers)

Beyond the impact of a number like “50 TOPS,” Moore Threads’ move aligns with a clear goal: to control a complete AI PC stack within the Chinese market, reducing dependence on foreign solutions in a segment that’s headed toward becoming an industry standard.

The company hasn’t publicly disclosed internal architecture or specific CPU/GPU IPs, but positions the SoC as a “competitive” alternative within the 8-core category, emphasizing efficiency and low power operation. Practically, this suggests the goal isn’t necessarily to dominate global benchmarks tomorrow, but to build a strong installed base and an accompanying software ecosystem.

Comparison Table: Three Approaches to Bring AI to the PC

| Approach | What it Offers | Typical Advantages | Typical Limitations | Ideal For |
|---|---|---|---|---|
| SoC with integrated NPU (like Yangtze) | Local AI within the chip itself | Lower latency, reduced power consumption, complete integration | Less thermal margin than desktops; NPU and memory may limit large models | Laptops and mini PCs with “always-on” AI functions |
| CPU + generalist iGPU without a powerful NPU | AI via general CPU/GPU | Wide compatibility; simplicity | Lower sustained AI efficiency; higher load on CPU/GPU | General use with occasional AI |
| PC with dedicated GPU | Local AI with high computing power and VRAM | Better for large models and heavy tasks | Higher power, cost, and thermal requirements | Development, creatives, labs, demanding local inference |

Underlying Message: The AI PC Is Becoming a Market “Format”

The strategic takeaway is clear: “AI PC” is evolving into a branding label… but also a set of technical requirements (NPU, video, memory, software) that encourages manufacturers to act. Moore Threads, with Yangtze, aims to stake a claim right there: an all-in-one SoC for consumer devices, with built-in AI acceleration and platform design ready for commercial products.

If the challenge for GPUs was competing against well-established ecosystems, the AI PC adds another layer: making everything work smoothly in everyday use (drivers, frameworks, app compatibility, sustained performance, and efficiency). The board is set; now, the real test will be deployment in actual devices—and, most importantly, adoption.


Frequently Asked Questions

What does it mean for an NPU to have 50 TOPS?
It’s a way to measure AI compute capability (usually in INT8). It provides a reference point but doesn’t guarantee final performance: memory, software, model types, and efficiency all play roles.
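As a rough illustration of how such a headline figure is derived: peak TOPS is conventionally counted as 2 operations (multiply + accumulate) per MAC unit per clock. The MAC count and clock below are made-up numbers chosen to land on 50, not Moore Threads specifications.

```python
# Peak TOPS = MAC units × 2 ops per MAC × clock (Hz) / 1e12.
# The unit count and frequency are hypothetical illustrations.

def peak_tops(mac_units: int, clock_ghz: float) -> float:
    """Theoretical peak INT8 throughput in TOPS."""
    return mac_units * 2 * clock_ghz * 1e9 / 1e12

# e.g. an NPU with 25,000 INT8 MAC units clocked at 1.0 GHz:
print(peak_tops(25_000, 1.0))  # 50.0 peak TOPS
```

Sustained throughput is lower in practice: it depends on how well the software stack keeps those units fed, which is exactly why memory and drivers matter as much as the headline number.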

What’s the purpose of an “AI PC” SoC in a laptop or mini PC?
To run AI tasks locally (voice, images, assistants, video/audio enhancement, automation) with lower latency and less dependence on cloud services, while also reducing power compared to relying solely on CPU/GPU processing.

Why is memory (32/64 GB LPDDR5X) so important for local AI?
Because many models and workflows demand capacity and bandwidth; without sufficient memory, the experience degrades—even if the NPU is powerful.
