The News
Autonomous weapons are increasingly being used in combat.
In the last year, Ukraine’s military employed AI-driven drones to strike Russian targets, the US used AI systems to identify targets in Syria and Yemen, and Israeli forces used AI to target suspected Palestinian militants.
The US has more than 800 AI-related defense projects in the pipeline and has appointed a senior official who oversees “algorithmic warfare.” Diplomats and manufacturers alike argue the technology has reached its “Oppenheimer moment,” a reference to the development of the atomic bomb in World War II.
SIGNALS
Giving too much power to AI could backfire
The comparison between AI weapons and the atomic bomb has become a “refrain” in the industry, The Guardian noted, adding that the reference can be read either as heralding a peaceful American-led hegemony or as an ominous prediction of destruction. While the technology isn’t yet advanced enough to be involved in higher strategic decision-making, that may soon change, and AI could then inadvertently or deliberately escalate conflicts, even triggering “flash wars” analogous to flash market crashes, The Atlantic argued. Key concerns include whether the technology can distinguish military from civilian targets, and who bears responsibility if an AI misfires.
Warfare is in ‘uncharted territory’ with ‘killer robots’
AI has the potential to revolutionize warfare much as the radio or the machine gun did, Bloomberg reported. The era of “killer robots” is already here, as the “widespread availability of off-the-shelf devices, easy-to-design software, powerful automation algorithms and specialized artificial intelligence microchips has pushed a deadly innovation race into uncharted territory,” The New York Times noted. Most people in the field agree regulation is needed, but the speed at which the technology develops makes it difficult for lawmakers to keep up. Instead, an “adaptive and evolving” approach could help, as could accounting for the many different sectors involved in building such weapons, the United Nations University think tank argued.
Big Tech is becoming more amenable to AI warfare
For years, younger generations with the skills to build AI showed more interest in consumer culture than in defense, two executives at tech company Palantir argued in The Washington Post. In 2018, Google pulled out of the US military’s Project Maven in response to employee pushback. But that’s changing, The Guardian reported: This year, Google signed a $1.2 billion deal with the Israeli government to help it develop cloud computing and AI, and when its employees pushed back again, the company didn’t cave. And startup OpenAI, which had originally banned the use of its technology for military purposes, quietly walked back that policy earlier this year.