Schumer push marks the summer of AI regulation

Jun 21, 2023, 1:53pm EDT
Tech · Business · Politics
Photo: Reuters/Kevin Lamarque

The News

U.S. Senate Majority Leader Chuck Schumer laid out a sweeping proposal Wednesday for regulating artificial intelligence to protect America’s national security and its workforce, while ensuring that the rules encourage rather than stifle innovation.

In a speech introducing his SAFE Innovation framework, which calls for holding companies that develop AI accountable, Schumer said the speed at which the technology is advancing also requires discarding the usual ways of writing legislation. Instead of hearings, he is pushing Congress to convene a series of “AI insight forums” with tech leaders, labor representatives, and critics.

“AI is unlike anything Congress has dealt with before,” he said at the Center for Strategic & International Studies, which invited an audience with a diverse range of views on the technology, from critics at Open Philanthropy to leading developers at OpenAI, Anthropic, and Hugging Face. “In many ways, we are starting from scratch.”

It’s the latest sign that regulating artificial intelligence is coming to a head. The topic also came up in a meeting with President Joe Biden during his visit to the Bay Area on Tuesday, according to Rob Reich, a Stanford professor who attended the meeting. Last week, the European Parliament moved ahead with a draft version of the AI Act, which has been years in the making.

Reed’s view

A year ago, most regulators, politicians, and consumers had never used, or even heard of, generative AI, the technology behind large language model chatbots like ChatGPT and image-generation services like DALL-E.

That fact alone makes the regulatory debate different from those we’ve had in the past. There are real concerns about how generative AI could cause harm, from accelerating the spread of misinformation to enabling scams. But the technology is so new that those harms are, at this point, mostly theoretical.

The other thing that makes this debate different is that the providers of these products were calling for regulation before watchdogs and lawmakers were even aware of the technology. The leading company, OpenAI, is already taking many of the steps policymakers are calling for, such as implementing usage controls and content moderation to block illegal or harmful content.
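For a concrete sense of what those moderation controls look like from the developer side, here is a minimal sketch of a pre-generation screening step against OpenAI’s public moderation endpoint, using the openai Python library as it existed at the time of writing. The sample prompt and the block-on-any-flag rule are illustrative assumptions, not a description of OpenAI’s internal pipeline.

    # Minimal sketch: screen a prompt with OpenAI's moderation endpoint
    # before handing it to a generative model. Assumes the v0.27-era
    # openai Python library; the sample text and the block-on-any-flag
    # rule are illustrative, not OpenAI's actual policy.
    import os

    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def is_allowed(text: str) -> bool:
        """Return False if the moderation model flags the text."""
        result = openai.Moderation.create(input=text)["results"][0]
        return not result["flagged"]

    if __name__ == "__main__":
        sample = "An example user prompt to screen before generation."
        print("allowed" if is_allowed(sample) else "blocked")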

Complicating the issue is that the term “AI” can mean vastly different things. The European AI Act was years in the making, but it focused almost entirely on the automated algorithms that power things like facial recognition and biometric analysis. Those bear little resemblance to the massive, general-purpose models powering generative AI.

Stanford University researchers have published an exhaustive analysis of whether today’s large foundation models comply with Europe’s AI Act. They don’t. But the analysis concludes that complying with the regulations would be possible.

OpenAI, for instance, already complies with 25 of the 48 categories, according to the study. And those 25, which include “risks and mitigations,” are among the most difficult to implement.

The categories where it is not in compliance are easier to address. For instance, OpenAI would need to disclose the carbon impact and size of its models. That means the EU AI Act could actually give OpenAI an advantage over other companies that are further behind on compliance.

After OpenAI met with the European Commission last summer, it offered feedback in the form of four suggestions in a white paper, which Time obtained and published. The suggestions largely amounted to requests to clarify what the law actually meant.

The AI Act also designates certain products as “high risk” but offers companies ways out of that designation if they put safeguards in place. One provision, however, seemed to incentivize companies to blind themselves to dangers in order to avoid the “high risk” label. That didn’t seem to be the intent, so OpenAI suggested clarifying the language.

The bottom line is that regulators and the companies that build foundation models are really not that far apart. The bigger stumbling block is how to craft regulations that allow competition in the industry to flourish. If only the big companies have the resources to comply, they’ll have de facto monopolies in the AI industry.

Room for Disagreement

Former EU lawmaker Marietje Schaake argues in the Financial Times that corporations should have little role in the regulatory process. “Lawmakers must remember that businesspeople are principally concerned with profit rather than societal impacts. Policymakers must not let tech CEOs shape and control the narrative, let alone the process,” she wrote.

Notable

  • Stanford researchers went through the more than 100 pages of AI Act draft legislation so you don’t have to. Their report is a comprehensive list of the ways the law would apply to foundation models like the one behind OpenAI’s ChatGPT. It compares different models and explains what each would need to do to come into compliance with the law in its current form.

The View From Europe

One of the big conundrums for Europeans regulating AI is that they may inadvertently create moats for the market’s biggest players so far. None of those companies is based in Europe, even though the region has an abundance of AI talent.

One idea is to loosen restrictions on open-source AI companies. That would create a more competitive landscape, but it might also increase risks, since open-source algorithms are difficult to control once released.
