Anthropic looks to chip and energy strength to secure American AI dominance

Mar 6, 2025, 6:00am EST
Tech · North America
Anthropic CEO and co-founder Dario Amodei.
Yves Herman/Reuters

The News

Claude creator Anthropic is calling on the White House to tighten chip export controls and oversee national security-related model testing as the AI race between the US and China heats up.

The AI startup is among the first tech companies to publish its response to the administration’s request for input on an action plan to establish the US as the undisputed leader in artificial intelligence. In a proposal released today, the company offered six recommendations to the US Office of Science and Technology Policy. Anthropic’s top suggestions boil down to: increasing security around domestic and foreign AI models; keeping chips out of adversaries’ hands by restricting semiconductor exports; and, on energy infrastructure, as the Trump saying goes, “build, baby, build.”

Anthropic CEO Dario Amodei expects powerful AI systems to reach consumers as early as next year: systems that can autonomously control physical equipment, reason over extended periods of time, and match the intelligence of the foremost experts in math and science. But China’s recent breakthrough with DeepSeek indicates there is still room for China to replace the US as the world’s AI leader.

“We believe the United States must take decisive action to maintain technological leadership,” Anthropic said.

Know More

In a stance that diverges from Microsoft and Nvidia, Anthropic supports stronger chip export controls. It didn’t cite the so-called AI Diffusion Rule issued by the Biden administration, which caps semiconductor exports to roughly 150 countries, but it did suggest similar actions, such as intergovernmental agreements for the biggest buyers.

Just last week, Microsoft published a blog post arguing against the AI Diffusion Rule, saying national security provisions are important but that the rule “goes beyond what’s needed.” In an interview last month, Amodei said the rule is of “less immediate concern to me because it’s farther up in the supply chain.”

Anthropic also wants the government to cut red tape for tech and energy companies looking to build out power infrastructure on an enormous scale: an ambitious additional 50 gigawatts of capacity by 2027, enough to power tens of millions of homes. That infrastructure, as well as AI labs, needs “next-generation” security, the company said.

Energy demands of the data centers powering AI have brought the nation’s aging power grid into focus. The Trump administration has stated it will look to increase energy production and reduce costs. The president issued an executive order in February establishing the National Energy Dominance Council, which will provide guidance on improving infrastructure and removing regulatory barriers.

Since Trump took office for the second time, he has also announced AI-specific infrastructure projects, including the $500 billion Stargate partnership and the $100 billion that Taiwanese semiconductor giant TSMC will spend on US chip manufacturing facilities.

In line with Elon Musk’s efforts to modernize government processes through DOGE, Anthropic also suggested federal agencies integrate AI into their workflows to increase productivity. Similarly, it said the government should update how it collects financial data in preparation for big economic changes.

Government agencies should also test models for national security risks, Anthropic said. That includes building dedicated testing facilities and employing experts to find weaknesses in both foreign and domestic systems. Such a rule wouldn’t change much for Anthropic: last year, the company voluntarily submitted Claude 3.5 Sonnet to government testing by the US and UK, something OpenAI has also agreed to do for its models.
