Google employees urge CEO to reject 'inhumane' classified military AI use

Published on 28/04/2026 - 11:34 GMT+2
More than 600 Google employees have called on the company to reject a potential deal with the Pentagon that would allow its artificial intelligence to be used in secret military operations, a statement said on Monday.
"We want to see AI benefit humanity, not be used in inhumane or extremely harmful ways," reads the open letter addressed to Google's chief executive Sundar Pichai. "This includes lethal autonomous weapons and mass surveillance, but extends beyond."
The letter, signed by staff across Google DeepMind, Cloud and other divisions, comes as the tech giant negotiates with the US Department of Defense over the potential use of its Gemini AI model in classified settings.
It has been signed openly by more than 20 directors, senior directors and vice presidents.
"Classified workloads are by definition opaque," one organising employee, who was not named in the statement, said.
"Right now, there's no way to ensure that our tools wouldn't be leveraged to cause terrible harms or erode civil liberties away from public scrutiny. We're talking about things like profiling individuals or targeting innocent civilians."
The letter comes as technology companies are facing growing pressure to clarify how their AI tools can be used by the military and intelligence agencies, following a dispute between the Pentagon and AI startup Anthropic.
Anthropic previously sued the US Department of Defense after the Pentagon labelled the company a "supply-chain risk" over its request that its AI systems not be used for mass surveillance or autonomous warfare.
Anthropic CEO Dario Amodei said he "cannot in good conscience accede to the Pentagon's request" for unrestricted access to the company’s AI systems.
"In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do."
In response to Amodei's decision, US President Donald Trump ordered government departments to stop using the company's Claude chatbot.
According to the letter organisers, Google has proposed contractual language that would prevent Gemini from being used for domestic mass surveillance or autonomous weapons without appropriate human control.
The Pentagon, however, has pushed for broader "all lawful uses" wording, arguing it is necessary to maintain operational flexibility. Employees say Google's proposed safeguards would be difficult to enforce in practice, citing existing Pentagon policies that limit external control over its AI systems.
The recent statement from Google's staff echoes a previous employee protest in 2018 that led Google to withdraw from Project Maven, a Pentagon initiative using AI to analyse drone footage.
"We believe that Google should not be in the business of war," read the 2018 letter.
"Therefore we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."