Moore Threads Wants to Win the “Copilot” War with a Full Chinese Tech Stack

Moore Threads, one of the companies poised to lead China’s alternative to the Western giants in AI computing, is now making a leap beyond silicon. The Beijing-based company has introduced its own AI-assisted programming service—called “AI Coding Plan”—with a clear vision: not just selling GPUs, but also controlling the software layer that turns raw computing power into daily productivity for developers.

This move comes at a time when programming with language models has become a strategic battleground. In the West, tools like GitHub Copilot have normalized assistant-driven editing, while a wave of competing products vies to become “the interface” of modern programming. In China, the stakes are twofold: on one side, competing on experience and outcomes; on the other, reducing reliance on external entities in a particularly sensitive area shaped by trade restrictions and tech geopolitics.

From Chip to Keyboard: Why Moore Threads Wants Control of the Experience

Moore Threads’ thesis is that the competitive advantage no longer resides solely in raw hardware performance. Value is shifting toward complete platforms: infrastructure + models + development tools + integration with real workflows. Under this logic, “AI Coding Plan” is presented as a vertical development suite supported by domestic hardware and a model trained for programming tasks.

The service leverages the MTT S5000 GPU, based on the company’s Pinghu architecture. According to publicly available information, the chip entered production in 2025 and has become a key driver of the group’s commercial momentum, with Moore Threads forecasting roughly threefold revenue growth driven by S5000 adoption in AI-oriented clusters. This aligns with market realities: when access to leading accelerators becomes more complicated or costly, software that makes local computing “useful” gains weight in purchasing decisions.

In other words: selling GPUs is important, but enabling developers to write, debug, and deploy code faster—without leaving your platform—is what creates habit. And habit, in technology, often leads to lock-in.

Compatibility as a Hook: Entering Without Forcing a “Start from Zero”

A key aspect of the announcement is the promise of compatibility with popular tools. Moore Threads has indicated that its plan can coexist with existing environments and workflows familiar to developers, mentioning integration with, or support for, widely used utilities in the “AI coding” ecosystem. This strategy aims at a practical goal: minimizing the psychological and technical costs of switching. In a market where the editor, plugins, and shortcuts are part of professional identity, forcing a sudden migration often means losing users quickly.

This move also has an industry reading. If hardware remains domestic but developers keep their workflows, adoption can accelerate. And if adoption grows, the hardware’s justification becomes stronger. It’s a virtuous circle… provided performance and reliability meet expectations.

The Role of the Model: GLM-4.7 as a Competitive Advantage

At the model layer, the platform relies on GLM-4.7, developed by Zhipu AI (Z.ai). Public documentation and statements from Z.ai describe GLM-4.7 as a model designed for real development scenarios, citing competitive results in industry benchmarks and strong performance in software-engineering evaluations. The core message is clear: the “copilot” is no longer just about autocompleting lines but about solving longer tasks, navigating codebases, and acting as an agent that executes steps. In this transition, model quality is as critical as computational power.

A Boiling Market: Alibaba, ByteDance, and the “New Front-End” of Software

Moore Threads is not entering a barren market. Alibaba has long been strengthening its presence in assisted programming with Tongyi Lingma and its Qwen models tailored for code. The company has stated that its assistant has generated billions of lines of code since launch—a metric of adoption and, more importantly, a signal to businesses that this is no longer experimental.

Domestic competitors intersect with Western players, but also with a broader phenomenon: IDEs and editors have become distribution channels. Whoever controls the point where developers spend hours daily largely controls how software is built. Therefore, the battle is not just about “who programs better” but about who becomes the de facto standard in enterprise workflows.

Strategic Implications: Sovereignty, Margins, and a New Way to Sell GPUs

Ultimately, Moore Threads’ announcement suggests a shift in identity—from chip manufacturer to platform provider. This move can improve margins (services and subscriptions tend to be more predictable than one-off hardware sales) and reinforce a narrative of tech independence. It also introduces new responsibilities: support, community engagement, security, product evolution, and, above all, building trust.

Because if AI-assisted development becomes core to daily work, any failure—hallucinations, subtle errors, code leaks, or mishandled data—ceases to be a “lab-side” issue and becomes a production concern.

Nonetheless, this aligns with industry trends: AI infrastructure is being “productized” and layered closer to end-users. Those who arrive first with a comprehensive and reasonably open experience could carve out a difficult-to-dislodge position.


Frequently Asked Questions (FAQ)

What is an “AI Coding Plan,” and how does it differ from traditional code autocompletion?
An “AI coding” plan typically includes generation, debugging, refactoring, and execution of longer tasks (such as changes across multiple files), not just line-by-line suggestions within the editor.

Why is it so important for the platform to be compatible with popular development tools?
Because it reduces adoption friction: a team can try the assistant without reworking their workflow, plugins, or configurations—speeding up pilots and deployments.

What are the advantages for companies using a domestic stack (hardware + model + tools)?
Mainly, control over infrastructure, reduced dependence on third parties, and in certain environments, more options for compliance, data residency, or internal security policies.

Can an AI code assistant replace a development team?
Practically, today they are used to boost productivity: automating repetitive tasks, speeding up testing and documentation, and exploring solutions—but they still require human review, validation, and technical responsibility.

via: SCMP
