
The News
Tech leaders are warning the US against pursuing a “Manhattan Project”-style push for AI systems with superhuman intelligence.
US policymakers are weighing an aggressive push to reach “superintelligence” ahead of rivals, modeled on the World War II dash to build the atomic bomb, according to a congressional commission proposal from November 2024.
US Secretary of Energy Chris Wright also referred to the global AI race as the “new Manhattan Project” earlier this month.
But in a policy paper published last week, former Google CEO Eric Schmidt and two AI industry leaders argued that countries should be wary of racing for superintelligent AI, much as they refrain from seeking a monopoly on nuclear weapons, because the effort could provoke a preemptive strike from a rival such as China.
The US is currently locked in an AI standoff akin to nuclear mutually assured destruction, but the congressional commission’s plan “assumes that rivals will acquiesce to an enduring imbalance or omnicide,” the trio wrote.
SIGNALS
How similar are the AI and nuclear arms races?
Schmidt and his co-authors argued that “Mutual Assured AI Malfunction” will prevent any one country from creating a “superintelligent” AI, because if one tries, its rivals will seek to disable the program through cyberattacks or sabotage. Drawing on the theory of mutually assured destruction in the nuclear realm, they argue that a similar stalemate can be achieved in AI, preventing the emergence of destabilizing systems. The theory rests on “optimistic assumptions,” several experts at RAND countered, noting that advances in cloud computing mean there are unlikely to be centralized physical facilities that could be easily knocked out. Even in nuclear weapons development, the RAND researchers added, adversaries struggled to gauge how far along their rivals were, and that will only be harder with AI software.
AI companies double down on race with China
In letters to the Trump administration, leading AI companies called for cutting red tape, investing in AI infrastructure, and integrating advanced AI models into federal departments to ensure the US stays ahead of China. OpenAI said Washington should ensure domestic AI companies can train models on copyrighted data, warning that “if the PRC’s developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over.” Anthropic predicted that the next few years will see AI systems with reasoning skills comparable to those of Nobel Prize winners, and called on the US government to tighten export controls to keep adversaries from accessing the hardware needed to train AI models.
AI use could be eroding critical thinking skills
Even as the US administration has vowed to race ahead on AI, researchers have found that using AI tools for complex work can erode critical thinking skills. Reading, math, and science scores have dropped across the board since 2012. This reflects “a broader erosion in human capacity for mental focus and application,” the Financial Times wrote, likely attributable to the rise of digital media and the overwhelming amount of online information people are exposed to. AI could compound the problem: Teachers are particularly concerned about students using AI at the expense of their learning. “This is a gigantic public experiment that no one has asked for,” one educator told The Wall Street Journal.