The brewing storm over California’s AI bill

Updated Aug 9, 2024, 12:19pm EDT
Tech · North America
Scott Wiener

The Scene

Silicon Valley’s presidential politics have been in the national spotlight, but the debate over California’s AI bill, known as SB1047, is actually the talk of the town.

In recent weeks, everyone from renowned AI researchers to venture capitalists to computer science professors has weighed in on whether the proposed law, titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, should pass.

Supporters of the bill, sponsored by California State Sen. Scott Wiener, argue it is critical to put guardrails in place now, before it is too late and something catastrophic occurs. Opponents believe some of its provisions are so draconian that they will stifle AI innovation altogether.


“Pause AI … legislation,” tweeted Meta chief AI scientist Yann LeCun, poking fun at a widely circulated letter the Future of Life Institute published last year calling for a pause on large AI experiments.

The bill, which passed the state’s Senate Judiciary Committee and is now before Appropriations, would require makers of large AI models to certify, under penalty of perjury, that they are reasonably safe, and hold them liable for potential damage caused by use of those systems. It would also require large AI models to include a kill switch, so that they can be instantly turned off if a problem arises.

Many AI researchers say it’s impossible to be sure someone won’t abuse the technology, and equally unrealistic to require a kill switch.

“This well-meaning piece of legislation will have significant unintended consequences, not just for California, but for the entire country,” wrote Stanford professor Fei-Fei Li, sometimes called “the godmother of AI,” in an op-ed published in Fortune earlier this week.


In that piece, Li offered to work with Wiener on revising the bill. When Semafor interviewed the state senator in February, he said he had vetted the plan with stakeholders on every side of the issue.

“This is a very carefully and meticulously crafted bill,” he said in the interview. “We made quite a few changes to the bill before we introduced it in response to constructive feedback from folks in the AI space. I thought maybe people would start yelling at me, and that has not happened. I’ve gotten a lot of positive feedback.”

But since then, a backlash has slowly gathered steam, reaching a crescendo this week as more tech industry players pile on criticism and supporters vehemently defend it. Wiener has said the opposition is a small, vocal minority.



Know More

One of the backers of SB1047, The Center for AI Safety, has pointed to public opinion polls showing that the majority of Californians back the legislation. The Artificial Intelligence Policy Institute released a similar poll.

“It is unsurprising that tech companies and VCs want to avoid the inconvenience of having their products regulated; but we should not confuse their interests with the public’s interest,” wrote Anthony Aguirre, executive director of the Future of Life Institute, a prominent voice in AI safety.

Martin Casado, a leading AI investor at venture capital firm Andreessen Horowitz, pushed back on the notion that the bill is popular. “Scott_Wiener continues to falsely claim narrow opposition to SB 1047. When in reality there is massive public outcry across research, academic, public and private business and finance,” he wrote on X, linking to several prominent opponents of the bill.

Dan Hendrycks, CEO of the Center for AI Safety, is an advisor to Elon Musk’s xAI. Hendrycks told Bloomberg that Musk “is definitely pro-regulation.”

And while OpenAI CEO Sam Altman has not weighed in on SB1047, he did sign onto an open letter written by Hendrycks’ organization. Wiener told Semafor in February that he discussed the bill with OpenAI ahead of its drafting.



Reed’s view

Even if SB1047 goes nowhere, Wiener has accomplished something significant: He’s forced a massive number of leading voices in AI to come out publicly and take a stand on what regulation of the technology should look like, and whether there should be any rules at all.

Take Anthropic, for instance, a company founded by a team of AI researchers who left OpenAI in part because they didn’t think Sam Altman’s firm prioritized safety. Anthropic has proposed changes to SB1047 that have put it in direct conflict with the AI safety community that once revered it.

“It is incongruous to say that there is a 10-25% chance that advanced AI will cause an extinction-level catastrophe, but then argue that the culpable AI companies should only receive fines after a catastrophe occurs,” Future of Life Institute President Max Tegmark said in a statement. “By lobbying to gut SB1047, Anthropic is acting like any other AI corporation.”

What is so remarkable about that is that Anthropic’s initial funding came from Skype co-founder Jaan Tallinn, who is also a co-founder of the Future of Life Institute.

The real issue with SB1047, though, isn’t any of its specific provisions. It’s that the bill is too vague, which lets both sides make good arguments about why it should pass or fail.

For instance, proponents can say that it isn’t outlandish to ask makers of powerful technology to make sure it’s reasonably safe. What’s the harm in that?

But critics of the bill argue the opposite, calling the restrictions draconian. It’s possible that a well-meaning chief technology officer of an AI company could, in theory, end up in prison for having a different view of what counts as “reasonably safe.” (Wiener and the bill’s proponents argue that opponents have used this as a scare tactic, and that anyone acting in good faith would be safe from criminal prosecution.)

That’s unlikely to happen, so long as the attorney general and judges responsible for how the law is applied are themselves reasonable. But that assumption won’t be good enough for companies spending billions of dollars to train future AI models. They’ll move their AI research or even their offices out of California before they risk it.

“The scary scenario here is that something totally unintended happens from this legislation that neither the authors or the community wants,” said Joel Burke, senior public policy and government relations analyst at Mozilla, in an interview with Semafor. Mozilla, maker of Firefox, has also come out against the bill.

The bill also focuses a lot on size: Only the largest models — those trained with more than 10^26 operations of compute, at a cost of more than $100 million — are subject to its requirements.

That ignores the possibility that models below those thresholds could become more powerful and more ubiquitous as AI infrastructure and techniques evolve. A lot of harm could occur before any model reaches the thresholds outlined in SB1047.


Room for Disagreement

Wiener tweeted Thursday that his plan is “overwhelmingly popular among Californians,” including most tech workers. “Contrary to what the loudest voices say, SB 1047 began as a grassroots effort with help from folks in SF’s hacker houses,” he said.

Wiener is adamant that the bill threads the needle, simultaneously allowing innovation and addressing safety concerns. “I am still passionate about the potential for AI to save lives through improvements in science and medicine, but it’s critical that we have legislation with real teeth to address the risks,” he said in a press release last month.

In a letter addressed to California Governor Gavin Newsom and other leaders Wednesday, AI researchers Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, along with law professor Lawrence Lessig, argued that the catastrophic risks of AI are imminent and that SB1047 is the “bare minimum” of regulation needed. “As of now, there are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers,” they wrote.
