Google AI Ethics Memo Makes It Clear Military Work Will Continue

As tipped earlier this week, Google has unveiled an artificial intelligence ethics memo following outcry over its involvement in Project Maven. In it, Google says that its principles "are not theoretical concepts," but rather "concrete standards" that'll "actively govern" its future AI work. Among other things, Google specifically details applications it will not pursue as part of its AI efforts.


The AI principles were introduced by Google CEO Sundar Pichai today in a lengthy statement, in which he explained:

We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.

The company has detailed a total of seven principles guiding its ongoing AI work, vowing they will play a crucial role in its R&D and product development and even have an "impact on our business decisions," according to Pichai. The seven principles are:

1. Be socially beneficial

2. Avoid creating or reinforcing unfair bias

3. Be built and tested for safety

4. Be accountable to people

5. Incorporate privacy design principles

6. Uphold high standards of scientific excellence

7. Be made available for uses that accord with these principles


The company breaks down each principle, explaining what it means by, for example, being socially beneficial. Just as relevant, though, is what Google says it won't be pursuing as part of its artificial intelligence work:

1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

3. Technologies that gather or use information for surveillance violating internationally accepted norms.

4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Google explicitly states that it will continue to work with both the government and the military "in many other areas" not related to AI for weapons. The company points to categories such as military training and cybersecurity as areas where those collaborations will continue, and Pichai says Google will be "actively" looking for them.


This comes in direct response to the company's work with the Department of Defense's Project Maven. Under its current contract, Google is providing AI technology to analyze drone footage, a controversial arrangement that drew severe backlash from both the public and Google's own employees.

A recent report claimed that Google won't renew its Project Maven contract next year due to the outcry, though leaked emails reportedly revealed that Google's higher-ups were eager for such contracts. The company's newly published AI principles make it clear that Google views working with governments and militaries as vital and that it will continue to do so, just with the promise that weapons won't be involved.

SOURCE: Google
