Google Modifies Its AI Principles and Lifts Its Ban on Developing AI-Powered Weapons

In a recent update to its artificial intelligence (AI) principles, Google has removed one of the most significant restrictions from its previous policy: the explicit ban on developing weapons or technologies intended to cause harm to people. According to an analysis by Artificial Intelligence News, the change entails the removal of a key section from the document, raising questions about the company’s ethical stance on the use of AI in military and surveillance domains.

The modification of Google’s policy aligns with a broader trend in the tech industry. In early 2024, OpenAI similarly adjusted its stance, lifting certain restrictions in its usage policies shortly before announcing a partnership with the Pentagon. Now, Google has updated its framework of AI principles by eliminating a specific category that explicitly listed the applications it would not develop under any circumstances.


Google removes the list of prohibited AI applications

Until recently, Google’s AI policy included a section titled “AI applications we will not pursue”, in which the company explicitly detailed the uses it would avoid in its technological developments. This list included:

  • Technologies likely to cause overall harm, unless the benefits substantially outweighed the risks.
  • Weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people.
  • Surveillance technologies that violate internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

With the recent update, this section has been removed entirely. The new version of Google’s principles says only that the company will work to “mitigate undesirable or harmful outcomes” and “align with accepted principles of international law and human rights”, without establishing any explicit restriction on the development of AI for military applications.


Why has Google made this change?

The adjustment to Google’s AI policy comes amid growing interest in government and military contracts across the tech sector. Although the company previously distanced itself from projects with the U.S. Department of Defense, in recent years it has increased its participation in bids to supply technology infrastructure to defense and security agencies.

Project Maven and Google’s precedent

In 2017, Google collaborated with the Pentagon on Project Maven, an initiative that used AI to process drone imagery and help identify targets in conflict zones. The revelation of this partnership provoked massive protests from employees, who objected to Google’s technology being used for military purposes. As a result, the company chose not to renew the contract and reaffirmed its commitment not to develop AI for weapons.

However, the change in Google’s AI principles suggests a potential openness to such collaborations in the future. In recent years, the company has worked with governments on projects involving cloud infrastructure, data analysis, and cybersecurity, which may indicate a more flexible strategy regarding its involvement in defense.


Google’s response and the ethics of AI debate

When asked about this change in its policy, Google responded with a blog post co-authored by James Manyika, Senior Vice President, and Demis Hassabis, CEO of Google DeepMind. In the document, the company reaffirms its commitment to the responsible development of AI, but avoids directly mentioning the removal of the restriction on weaponry.

“We recognize the rapid pace at which the underlying technology and debate surrounding the advancement, implementation, and use of AI will continue to evolve. We will keep adapting and refining our approach as we all learn over time.” – Google AI

While the statement emphasizes the need for responsible AI management, it does not address the rationale behind removing the ban on military applications. This has raised questions about whether the company is paving the way for new agreements with the Pentagon or whether it has simply opted for a less restrictive stance to avoid conflicts in future negotiations.


Implications of the change in Google’s policy

The adjustment in Google’s AI principles may have multiple short- and long-term consequences:

1. Greater flexibility for military contracts

The removal of the restriction on weapons development suggests that Google could consider future agreements with the Pentagon or other governments on AI projects with military applications. While this does not necessarily imply the development of lethal weapons, it does open the door to collaborations in data analysis, cybersecurity, and military automation.

2. Potential employee backlash

The Project Maven episode demonstrated that Google employees are critical of the company’s participation in military initiatives. If Google moves forward with new Department of Defense contracts, it is likely to face internal protests and renewed debate over the ethics of AI.

3. Regulation and government oversight

As AI becomes a strategic component in defense and national security, it is likely that governments will begin to impose stricter regulations on its use. In both the U.S. and the European Union, discussions are already underway regarding standards to control the development and application of AI in military environments, which could affect Google and other tech companies.

4. Impact on sector competition

Now that Google, which previously refused to collaborate with the military sector, has modified its principles, other tech companies such as Microsoft, Amazon, and OpenAI may follow suit. This would open a new phase of competition in the defense and security AI market, with multimillion-dollar contracts at stake.


Conclusion: A strategic change with long-term consequences

The removal of the restriction on the development of AI for weaponry in Google’s policy marks a strategic shift that could redefine its role in the defense sector. While the company continues to emphasize respect for human rights and international law, the omission of an explicit prohibition has raised concerns within the tech community and among AI ethics experts.

At a time when artificial intelligence is becoming a key tool for global security, this decision by Google could influence the stance of other major tech firms and the evolution of AI in the military sphere. Attention now turns to whether the company will announce new contracts with the Pentagon and how its employees will respond to this change in direction.

References: Google AI and an archived version of Google’s previous AI principles.
