
Google AI Arms Shift Proves Only Politics Can Protect Us

Jasper Goddard

Illustration by Will Allen/Europinion

Earlier this month, Alphabet, the owner of Google, altered its AI principles to remove a promise not to pursue the development of AI technologies for weaponry or surveillance purposes. Demis Hassabis, the CEO of Google DeepMind, and James Manyika, Senior Vice-President for Technology and Society, defended the change in a blog post, writing that "companies, governments, and organizations...should work together to create AI that protects people, promotes global growth, and supports national security." 


Discussions around regulating the AI sector have faded since the re-election of Donald Trump. Much of the media spotlight on AI concerns has focused on LLMs such as ChatGPT and the potential for them to become conscious. But such claims are sensationalist and misplace the worry: the real danger is not AI acting independently as a malevolent actor, but humans themselves utilising AI for immoral purposes.


As seen with LLMs, AI provides a tool for offloading the legwork of time-consuming tasks onto technology, freeing up time and resources that can be allocated elsewhere. In the knowledge sector, this has the potential to revolutionise workplace efficiency. In other areas, however, there is good reason for humans to remain in control. When people’s lives are at risk, the use of AI must be carefully managed, as seen with the precautions being taken with autonomous vehicles despite their relative safety. But when the development of AI is directed towards technologies whose very purpose is to cause harm, no amount of safety precautions will ultimately prevent human suffering.


There was hope that, with self-imposed commitments such as Google’s, Big Tech would confine its attention to the knowledge and attention spheres and stay out of AI weaponry. But this development proves that businesses cannot be trusted to regulate themselves. Governments and intergovernmental organisations (IGOs) must establish common, robust legislation that addresses autonomous weaponry and AI surveillance, safeguarding communities in conflict zones and preventing the unchecked spread and misuse of such systems.


The EU has at least attempted to tackle AI regulation through the EU AI Act, but the Act explicitly excludes AI developed for military, defence, or national security purposes. If the EU is not prepared to take this on, the hope of inspiring collaboration between nations on the issue appears negligible. The Silicon Valley mantra of “move fast and break things” may be tolerable when applied to social media (though it certainly has its problems there too), but in the domains of military and surveillance technology the dangers become existential. The margin for error is minimal, and ‘success’ means creating tools to inflict suffering and misery.


Recent case studies illustrate these concerns. In April 2024, an investigation by +972 Magazine revealed “Lavender,” an AI targeting system used by the IDF in Gaza to select suspected Hamas operatives for bombing. The report found that Lavender’s outputs were treated “as if it were a human decision…with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based.” Lavender made errors approximately 10% of the time, contributing to the loss of thousands of Palestinian lives, mainly women and children who had no part in the conflict. It was also used to target suspected operatives while they were at home, killing their family members as collateral damage. Given how many people now use LLMs to write whole essays and cover letters, is it really surprising that AI weapons systems might be given so little oversight?


And then there is the matter of surveillance. In a society in which we are surrounded by smartphones, laptops and search engines produced by Big Tech companies, the prospect of those same companies developing AI surveillance should rightly evoke fears of 1984-style dystopia. All it takes is one or two bad actors at the top of these businesses, or of government, for these technologies to be used nefariously.


Illustration by Will Allen/Europinion

Which brings us to the crux of the problem. Much of the discussion around the ethics of AI treats it as an independent actor. But this credits AI with too much agency. We, as humans, develop this technology and shape what it can and will do. We can decide whether or not to pursue the development of potentially harmful technologies.


Google’s reneging on its commitment not to pursue AI arms and surveillance development must serve as a wake-up call to policymakers around the world. Without swift, effective action, it will be too late to halt the proliferation of more systems like Lavender. War is devastating enough as it is, and in a time of heightened geopolitical tension, an AI arms race must be avoided.



