The News
OpenAI is working with the Pentagon on software projects, including ones related to cybersecurity, the company said Tuesday, a dramatic reversal of its previous ban on providing its artificial intelligence technology to militaries.
The ChatGPT creator is also in discussions with the U.S. government about developing tools to reduce veteran suicides, Anna Makanju, the company’s vice president of global affairs, said at the World Economic Forum, though she added that the company will retain its ban on developing weapons.
Last week, OpenAI removed language from its usage policy that prohibited the use of its AI in “military and warfare” applications, sparking alarm among AI safety advocates.
SIGNALS
Silicon Valley has changed its mind about collaborating with the Pentagon
Silicon Valley has softened its stance on collaborating with the U.S. military in recent years. In 2018, thousands of Google employees protested the Pentagon’s Project Maven, fearing technology they developed could be used for lethal purposes. That proved to be the high-water mark of Silicon Valley opposition to the Department of Defense; Google has since earned hundreds of millions of dollars from defense contracts. The Pentagon has made a concerted effort in recent years to win over Silicon Valley startups, both to develop new weapons technology and to integrate advanced tools into the department’s operations. U.S.-China tensions and Russia’s war in Ukraine have also dispelled many of the qualms entrepreneurs once had about military collaboration. “What’s emerged lately is a kind of techno-patriotism in Silicon Valley,” wrote Semafor’s technology editor Reed Albergotti.
AI may remake the military, but could come with profound risks
Defense experts have been bullish about AI’s impact on the military. Former Google CEO Eric Schmidt, now a prominent defense industry figure, has compared the arrival of AI to the advent of nuclear weapons, Wired reported. “Einstein wrote a letter to Roosevelt in the 1930s saying that there is this new technology — nuclear weapons — that could change war, which it clearly did. I would argue that [AI-powered] autonomy and decentralized, distributed systems are that powerful,” Schmidt said. But advocacy groups have warned that integrating AI into warfare carries profound risks given AI’s tendency to “hallucinate,” inventing false information and presenting it as fact, a failure mode with far higher stakes if such tools are embedded in command-and-control systems. The Arms Control Association has warned that the rush to “exploit emerging technologies for military use has accelerated at a much faster pace than efforts to assess the dangers they pose.”
OpenAI rules are unclear about scope of possible military deals
Although OpenAI has ruled out developing weapons, its new policy would likely allow it to provide AI software to the Department of Defense for uses such as helping analysts interpret data or write code, The Information reported. But as the war in Ukraine has shown, the divide between data crunching and warfare may not be as clear-cut as OpenAI would like. Ukraine has developed and imported software to analyze large volumes of battlefield data, allowing its artillery operators to be rapidly notified of nearby Russian targets and to dramatically speed up their rate of fire. Meanwhile, The Information warned that the policy change could be enough to reignite the debate over AI safety at OpenAI that contributed to Sam Altman’s brief firing as CEO.