THE NEWS

LONDON - The European Parliament voted to move forward on the AI Act, a sweeping set of proposed measures to mitigate some of the harms caused by the technology. One of the first major global rules on artificial intelligence, it would put limits on facial recognition technology and require makers of services like ChatGPT to disclose details about the data used to create them. American companies are leading the way in deploying generative AI, but the rest of the world will also have a say in what the technology ultimately looks like. And other artificial intelligence hubs have emerged, like London, where I'm wrapping up my overseas trip. There are 50 generative AI startups here, according to a database maintained by the venture firm NFX, the second-highest tally for any city in the world.

REED'S VIEW

From Silicon Valley, it can sometimes appear as if Europeans see only the negative aspects of AI, and that regulations are an attempt to slow down American efforts to move fast (and potentially break things). In contrast, U.S. lawmakers seemed enamored with OpenAI founder Sam Altman when he testified on Capitol Hill last month.

If there is one big takeaway from my conversations with people on this side of the pond, it's that they are generally a lot more bullish on AI than one might expect. But it's a more sober optimism than the San Francisco variety, less likely to veer into science fiction and fantasy. The people I spoke with (executives at tech giants, startup founders, and advisers on tech regulation) were most excited about how AI could help cure disease and mitigate climate change.

Most of them see Europe's efforts to regulate AI as necessary, not because the technology is a threat to humanity, but because it is too important to leave unregulated. If there's a consensus view, it's that unregulated AI might go so far off the rails that it would create a backlash against the technology, which would hurt innovation. "We need the world to see all the good that can come from AI," one executive told me.

Nobody is under the illusion that this will be easy, however. Regulating technology is always difficult because by the time laws are passed, the technology has moved on. That is especially true of AI, where the landscape seems to change week to week.

The AI Act itself illustrates this problem. It has been years in the making and didn't initially include any provisions dealing with new generative AI technologies like ChatGPT. It was mainly focused on "narrow AI," such as facial recognition technology, where the issues are clear and well understood. In a rush to keep the law up to date, regulators added provisions addressing the "foundation models" behind ChatGPT and other products.

For Room for Disagreement and the rest of the story, read here.