Scott Wiener is a state senator in California.

Q: Your AI bill would require makers of very large, powerful AI models to comply with strict regulations aimed at making them safer. But these companies are already limiting models because of worries they will be held legally liable. So why do we need the regulation?

A: We don't exactly know what the companies are doing. We know there are labs that take it very seriously and labs that may take it a bit less seriously. I'm not going to name names, but there was one large corporation where there was some concern that the safety protocols it put into place were actually not adequate. The reality is that you can't just trust people to do what you want them to do; you need to have a standard. And that's what this is about. Self-regulation doesn't always work, and having clear, consistent safety standards for all labs developing these incredibly large, powerful models is a good thing. It's good for the labs, too, because then they know what's required of them.

Q: The White House issued an AI executive order, but it's anybody's guess whether there will be any federal AI legislation. Is this another situation where California leads the way and becomes the de facto federal regulator?

A: I wouldn't characterize it as de facto federal regulation. I would characterize it as California protecting California residents. And, of course, because of the size of our economy and the role that we play in technology, we definitely set a standard for other states and, hopefully, eventually for the federal government. But unfortunately, I don't have enormous confidence that Congress will act and pass a strong AI safety law. You're right, the federal government failed around net neutrality, so we stepped in. The federal government failed around data privacy, so we stepped in. And here I wouldn't call it a complete failure, because I give President Biden enormous credit for spending a lot of energy and political capital to try to formulate safety standards. But an executive order has its limits. And unless Congress enacts strong safety standards into statute, we can't have complete confidence that those standards are binding. That's why we need to act at the state level.

Q: One group that has sprouted up in the last year is the Effective Accelerationist movement [which believes in unrestricted tech development]. Some adherents are on your side when it comes to your housing initiatives. Do you talk with them?

A: I know a lot of people in AI with a lot of different perspectives. This is a very carefully and meticulously crafted bill. It's always going to be, as with every bill, a work in progress, and we continue to welcome constructive feedback. But we spent nearly a year meeting with people with all sorts of perspectives. We made quite a few changes to the bill before we introduced it in response to constructive feedback from folks in the AI space, including people who may tend a little bit more toward the accelerationist end of things. When we rolled it out, I was bracing myself because I didn't know exactly what to expect, especially representing the beating heart of AI innovation in San Francisco. I thought maybe people would start yelling at me, and that has not happened. I've gotten a lot of positive feedback. We've gotten some constructive critiques, which I love because they can make the bill better. And there are some people who I think are more on the accelerationist side of things, and I assumed they would just hate the bill, and they don't. They may have some feedback on it, but I've been pleasantly surprised.

Q: So you're not getting disinvited to dinner parties because you're regulating AI?

A: More people seem to be mad at me for my speed governor bill requiring new cars to have hard speed limit caps.

Q: So literal acceleration rather than effective acceleration?

A: Exactly.
Read here for the rest of the conversation, including what the AI carrots are in Wiener's bill.