Microsoft bans DeepSeek: Chinese AI prohibited in its corporate environment for security reasons

The American company imposes an internal ban on the artificial intelligence application developed in China, citing data privacy risks and structural censorship in the model

The technological war between the United States and China has taken another turn. Microsoft has prohibited the internal use of DeepSeek, one of the most promising new artificial intelligence applications developed in China, citing security reasons and lack of transparency. The announcement was confirmed by the company’s president, Brad Smith, during a hearing in the U.S. Senate, amid growing concerns over technological sovereignty and data protection against foreign powers.

A Ban with Geopolitical Implications

Microsoft’s decision is not merely a technical or administrative measure: it sends a powerful message to the global AI ecosystem. DeepSeek, whose language model has impressed with its power and efficiency, has been removed from the Microsoft Store, and its use has been banned for all company employees in corporate environments.

Microsoft’s main argument concerns where DeepSeek processes and stores user data: on servers located entirely in Chinese territory. According to Smith, this could give intelligence agencies of the Chinese government access to confidential information, an unacceptable risk for a company that manages cloud services, operating systems, and AI models used by governments and large corporations.

Algorithmic Censorship and Narrative Control

Beyond the risk of espionage, Microsoft also questions the lack of neutrality of the DeepSeek model, emphasizing that it systematically avoids certain sensitive topics for the Chinese Communist Party. This behavior, interpreted as a form of algorithmic censorship, reinforces distrust toward technologies that, although sophisticated, may be ideologically aligned with interests contrary to democratic values.

Contradictions in the Cloud: Azure Yes, but Modified

Paradoxically, Microsoft continues to allow use of the DeepSeek R1 model on its Azure cloud, albeit with significant caveats. In that setting, the model is distributed as open-source software, so users can run it on their own or Azure-hosted instances without connecting to servers in China.

However, before permitting this, Microsoft modified the base model. As Smith explained, the company’s engineers altered the model itself to eliminate potential risks, from undesirable behaviors to technical backdoors, and subjected the system to rigorous security audits.
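
To make the distinction between the banned app and the Azure-hosted model concrete, here is a minimal sketch, assuming a DeepSeek-R1 deployment already provisioned by the customer in Azure AI Foundry, of how a developer could query it with the azure-ai-inference Python SDK. The endpoint, key, and prompt are placeholders, not Microsoft’s actual configuration.

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder configuration: the endpoint and key are assumed to belong to a
# DeepSeek-R1 deployment the customer provisioned in Azure AI Foundry, so
# requests terminate on Microsoft-operated infrastructure rather than on
# DeepSeek's own servers.
endpoint = os.environ["AZURE_INFERENCE_ENDPOINT"]
key = os.environ["AZURE_INFERENCE_KEY"]

client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key),
)

# Send a chat request to the Azure-hosted deployment.
response = client.complete(
    messages=[
        SystemMessage(content="You are a concise technical assistant."),
        UserMessage(content="Summarize the trade-offs of self-hosting a large language model."),
    ],
    max_tokens=512,
    temperature=0.6,
)

print(response.choices[0].message.content)
```

The relevant detail in this sketch is the data path: the request goes to an endpoint the customer chooses and Microsoft operates, not to the consumer app’s servers in China.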

This dual approach—banning the app but allowing the adjusted model—reflects the tension between technological openness and sovereign control, an increasingly thin line in the global development of AI.

A Precedent for Other Companies

The ban imposed by Microsoft could mark a turning point for other tech companies and Western government agencies. Growing skepticism toward tools developed in China is not new, but now it extends to the heart of generative AI, one of the most strategic fields for economy, defense, and digital dominance.

DeepSeek is not the first casualty of this distrust, but it is certainly one of the most notable. Its market entry was seen as a demonstration of the rapid advancement of Chinese AI, and its capabilities have been compared to those of models from OpenAI and Anthropic. Microsoft’s ban, however, brings back to the forefront the central dilemma of the 21st century: Can there be artificial intelligence without borders in a world with increasingly defined blocs?

Source: Gizmodo
