The News
Artificial intelligence models made by the biggest US tech companies are increasingly being used for military and defense purposes.
Earlier this month, Meta began allowing the use of its artificial intelligence model Llama by US government agencies “working on defense and national security” after it became apparent that Chinese researchers had used its code — which anyone can download and build on — to develop an AI model for military use.
Meanwhile, defense contractor Palantir uses Anthropic’s Claude AI to analyze government data, and OpenAI has appointed a former Palantir executive and a retired US Army general to its board.
SIGNALS
President-elect Trump may further expand military use of AI
US President-elect Donald Trump is widely expected to take a more deregulatory approach to AI than the Biden administration, and could expand its use for military purposes. Trump advisers have reportedly drafted an executive order that would create several “Manhattan Projects” to boost US military technology and strengthen the security of AI models, The Washington Post reported. Yet Trump’s stance on AI is hard to predict, an economist told Vox, in part because his biggest supporters are divided on it. Some, like tech investor Marc Andreessen, want to “slam the gas pedal” on development, while others, like Elon Musk, are more wary: “They are all united against ‘woke’ AI, but their positive agenda on how to handle AI’s real-world risks is less clear.”
Defense funding for AI could prove mutually beneficial
The Pentagon plans to invest billions of dollars in AI: Its 2025 budget includes more than $143 billion for research and development, with almost $2 billion for AI and machine learning. Former chairman of the Joint Chiefs of Staff Ret. Gen. Mark Milley told Axios that AI brings “real significant fundamental changes” to defense, adding that a third of US military forces could be driven by AI by 2040. Government partnerships are also beneficial for tech companies, an AI company chief executive told tech-focused outlet Decrypt, because they provide stable revenue, the potential to have a say in future regulation, and “learning opportunities to understand real-world challenges unique to this sector.”
Open source AI models’ security divides experts
Meta’s Llama, which is partially open source — meaning anyone can download it to build on — has caused some controversy. One expert told IEEE Spectrum this approach “[fuels] a global AI arms race” by essentially opening up sophisticated AI tech to US adversaries, as in the case of the Chinese-developed military model. Open source code can be manipulated, an engineering expert wrote in The Conversation, a risk that grows in military and defense uses “because the robustness of open source software is dependent on the public community.” Others disagree, however, stressing that open models may be easier to independently monitor for security threats.