

AI safety bill driven by fears Trump would reverse efforts to rein in technology

Jun 5, 2024, 1:20pm EDT

The Scoop

California’s latest AI safety bill is driven by fears that another Donald Trump presidency would overturn federal efforts to rein in the technology — and it’s backed by people connected to a potential Trump ally: Elon Musk.

Dan Hendrycks, director of the Center for AI Safety and safety advisor at Musk’s startup xAI, is one of the chief backers of the sweeping California plan. He told Semafor that mitigating the risks posed by AI is a critical bipartisan issue.

“If [Trump] does take a very strong anti-AI safety stance, it will obviously make things difficult and it maybe makes sense not to ally with [him],” said Hendrycks, speaking in his role at CAIS, which is part of the effective altruism movement and has received millions in donations from billionaire Facebook co-founder and Asana CEO Dustin Moskovitz.


Trump’s campaign declined to comment.

Musk has warned multiple times that AI poses a threat to humanity and has donated millions to the Future of Life Institute, another EA-friendly group that supports the state plan. Musk and Trump have also discussed an advisory role for Musk if the Republican presidential candidate wins another term, and the two recently met about a voter-fraud prevention plan, the Wall Street Journal reported.

The Silicon Valley-connected EA movement focuses on the potential existential risk posed by artificial intelligence, and is beginning to make forays into politics. CAIS and other related groups worry that Trump would undo his predecessor’s work, as he did the last time he was in the White House. In 2023, President Joe Biden signed an executive order requiring companies building powerful foundation models to conduct safety audits and share test results with the government, aiming to mitigate national security risks as AI continues to advance.

California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, proposed by state Senator Scott Wiener, is even tougher and focuses on the most hazardous risks. The legislation would require developers to swear under oath that their models are not capable of carrying out nuclear, chemical, biological, or cyber attacks on critical infrastructure that would result in mass casualties or at least $500 million in damages. Developers must also be able to shut down their models if they wreak havoc.


“Executive orders do not have the same force of law as a statute does. Any executive order can be changed by any president. So having an actual law in place is important and makes sense,” Wiener told Semafor. “We know Donald Trump and his team [couldn’t] care less about protecting the public [from AI], so we think it’s important for California to take steps to promote innovation and for safe and responsible deployment of extremely large models.”

Many players in the tech industry, however, are concerned that the bill is too draconian and will hamper innovation in the state best positioned to build AI. Wiener is now trying to strike a balance, and is in talks to adjust the legislation as the proposal heads to the Assembly.


Know More

The most contentious issue is the idea of “derivative models.” Developers would need to ensure not only that their own systems are safe, but also that other versions of their software remain safe, even after their technology has been modified by other people.


Top venture capital firms like Andreessen Horowitz and others have been in talks with Wiener to push back against this policy. They argued that companies will not release their models publicly if doing so could leave them criminally liable when their technology is repurposed for nefarious applications.

Many AI startups prefer to build on top of free open models released by Meta and Mistral instead of paying for closed systems like OpenAI’s ChatGPT. “We think it’s important to make sure open source AI remains an accelerant for US leadership in AI, and we have ongoing concerns that this bill harms that,” Hemant Taneja, CEO and Managing Director at General Catalyst, told Semafor in a statement.

Industry trade groups like the Chamber of Commerce, Silicon Valley Leadership Group, and others representing big tech companies like Amazon and Google have also opposed the bill.

Dylan Hoffman, TechNet’s Executive Director for California and the Southwest, told Semafor that the bill is difficult to comply with, given how hard it is to foresee AI’s harms and predict the ways it might be misused. “Companies prefer federal legislation for uniformity. They are nervous about AI standards, and the worst case scenario is when every state has different standards and companies have to decide which one to comply with.”


Katyanna’s view

Wiener’s plan is unusual: it tackles the most dangerous AI safety risks rather than focusing on specific issues like algorithmic bias or deepfakes, as other bills do. But it’s difficult to persuade companies to agree to these measures when the scenarios are still hypothetical; it’s not clear if or when the technology will become powerful enough to endanger human lives at scale.

Although the bill passed the state Senate last month, there are still many hurdles ahead for Wiener. It needs to be approved by the Assembly and signed by Governor Gavin Newsom, who has warned against overregulating the technology and fears driving businesses out of California.

Many people I spoke with doubted that the legislation would be approved and signed into law. If Wiener’s bill fails to pass and Biden’s executive order is scrapped by Trump, there will be a gap in regulation targeting long-term risks, and it’s unlikely that companies can be trusted to govern themselves.

Although AI safety hasn’t been a strong focus of Trump’s campaign, the former president has previously said he thought the technology was “maybe the most dangerous thing out there.” Given that he’s cozying back up to Musk, maybe the tech billionaire can persuade him to take AI safety risks seriously, push for federal regulation, and help strike a middle ground between the AI doomer and boomer camps.


Room for Disagreement

There are other reasons California’s legislation is tricky, even for developers who share concerns about the most dangerous AI safety risks. It’s difficult to predict and test the capabilities of these systems, and edge cases are often discovered only after the technology has been unleashed publicly.


Notable

  • Former and current OpenAI employees have signed an open letter urging companies to protect whistleblowers sounding the alarm on AI safety practices.


