Google has quietly removed a key commitment from its AI ethics guidelines. The tech giant no longer pledges to avoid AI projects that could cause “overall harm,” including weapons and surveillance. The change comes as Donald Trump begins his second term as U.S. president.
The pledge, originally added in 2018, followed employee protests against Google’s work with the Pentagon on drone technology. At the time, workers invoked Google’s former motto: “Don’t be evil.” That phrase was also removed from the company’s code of conduct years ago.
James Manyika, Google’s senior VP for research, and DeepMind CEO Demis Hassabis defended the decision in a blog post. They argued that AI development should align with democratic values such as freedom and human rights, and emphasized that governments and tech companies must work together to ensure AI supports national security.
Until this week, Google had promised not to pursue AI applications that could violate human rights or international law. The company’s updated AI ethics page drops that pledge, stating instead that Google will maintain “human oversight” and work to prevent unfair bias.
Former Google AI ethics leader Margaret Mitchell criticized the move. Speaking to Bloomberg, she warned that the change could lead Google to develop AI capable of killing people.
Human rights advocates also condemned the update. Sarah Leah Whitson of Democracy for the Arab World Now called Google a “corporate war machine.” Critics noted that the company has aligned itself with Trump’s administration. Google previously donated $1 million to his inauguration and sent CEO Sundar Pichai to the event.
Tech firms have also recently scaled back commitments to diversity, equity, and inclusion (DEI) initiatives. Trump has targeted such programs in government and corporate sectors.
Google’s worker union also voiced concerns. Parul Koul, president of the Alphabet Workers Union, said employees have long opposed the company’s involvement in military projects.
The company’s shift in AI ethics policy has sparked fears about the future of responsible AI. Critics worry that Google’s new stance could accelerate AI’s use in warfare and surveillance.