White House could force cloud companies to disclose AI customers

Sep 22, 2023, 1:59pm EDT

The Scoop

The White House is considering requiring cloud computing firms to report some information about their customers to the U.S. government, according to people familiar with an upcoming executive order on artificial intelligence.

The provision would direct the Commerce Department to write rules forcing cloud companies like Microsoft, Google, and Amazon to disclose when a customer purchases computing resources beyond a certain threshold. The order hasn’t been finalized, and its specifics could still change.

Similar “know-your-customer” policies already exist in the banking sector to prevent money laundering and other illegal activity, such as the rule requiring firms to report cash transactions exceeding $10,000.


In this case, the rules are intended to create a system that would allow the U.S. government to identify potential AI threats ahead of time, particularly those coming from entities in foreign countries. If a company in the Middle East began building a powerful large language model using Amazon Web Services, for example, the reporting requirement would theoretically give American authorities an early warning about it.
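To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of threshold check such a rule implies. The metric (GPU-hours), the cutoff value, and all names here are hypothetical illustrations, since the order’s actual reporting rules have not been written.

```python
from dataclasses import dataclass

# Hypothetical threshold: the real metric and cutoff would be set by
# Commerce Department rules, which do not yet exist.
REPORTING_THRESHOLD_GPU_HOURS = 1_000_000

@dataclass
class ComputePurchase:
    customer_id: str       # hypothetical customer-record fields
    customer_country: str
    gpu_hours: int

def requires_disclosure(purchase: ComputePurchase) -> bool:
    """Flag any purchase at or above the (hypothetical) reporting threshold."""
    return purchase.gpu_hours >= REPORTING_THRESHOLD_GPU_HOURS

# Example: a large foreign training run trips the early-warning check.
order = ComputePurchase("example-lab", "AE", 2_500_000)
if requires_disclosure(order):
    print(f"Report {order.customer_id} ({order.customer_country}) to regulators")
```

A real implementation would hinge on exactly the details the article goes on to discuss: what metric to count, where to set the cutoff, and how often to revise it.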

The policy proposal represents a potential step toward treating computing power — or the technical capacity AI systems need to perform tasks — like a national resource. Mining Bitcoin, developing video games, and running AI models like ChatGPT all require large amounts of compute.

If the measure is finalized, it would be a win for organizations like OpenAI and the RAND Corporation think tank, which have been advocating for similar know-your-customer mechanisms in recent months. Others argue it could amount to a surveillance program if not implemented carefully.


“The details are really going to matter here,” said Klon Kitchen, a nonresident senior fellow at the American Enterprise Institute, where he focuses on national security and emerging technology. “I understand why the administration is trying to get at this issue. We’re going to need a strategic understanding of adversarial development of these models.”

The White House declined to comment. The Department of Commerce directed questions to the White House.


Louise’s view

One major challenge for this approach: the amount of computing power it takes to build powerful models like ChatGPT is falling rapidly, thanks to improvements in the algorithms used to train them. By the time the Commerce Department settles on a reporting threshold, it could already be out of date, and keeping it current would be like chasing a moving target.


Instead, Commerce could find other, more qualitative indicators to determine whether an organization’s computing usage is cause for alarm. But that would require cloud firms to extensively spy on their customers, with whom they often have conflicts of interest.

Microsoft, for example, is a major investor in OpenAI. If a promising startup began buying computing resources from Azure to build a ChatGPT competitor, Microsoft would have to report that activity to U.S. authorities under this provision.

Sayash Kapoor, a researcher at Princeton University who studies the societal impacts of AI, noted that the policy would effectively apply to only one kind of technology: large language models. Other AI tools that have been used for harmful purposes, such as facial recognition algorithms, require far less compute to build and run, meaning they likely wouldn’t meet the threshold. “If we’re looking at it from a harms perspective, I think this is very shortsighted,” Kapoor said.


Room for Disagreement

Microsoft, OpenAI, and the RAND Corporation all argue that a know-your-customer provision would help prevent bad actors from building and using AI for nefarious purposes. Microsoft proposed the idea in a recent policy document on governing AI, in which it acknowledged that the specifics around “who should be responsible for collecting and maintaining specific customer data” still needed to be worked out.

OpenAI CEO Sam Altman and two of the company’s top executives proposed creating an international body to monitor AI projects, akin to the International Atomic Energy Agency’s oversight of nuclear power. The new entity would be responsible for evaluating and placing restrictions on any AI effort “above a certain capability.”


The View From the U.K.

Think tanks and non-profit organizations concerned about the potential “existential risks” posed by artificial intelligence are gaining influence with British government leaders, Politico reported earlier this month. Connor Leahy, CEO of the AI startup Conjecture, said he met with the House of Lords earlier this week to discuss the issue. One of his policy recommendations: putting a cap on the amount of computing power AI companies can use.


Notable

  • President Biden’s administration is concerned about the security risks posed by Chinese cloud computing giants and is exploring what steps it can take to mitigate them, the New York Times reported in June.
  • If we want AI applications to be fair and equitable, academia and civil society need access to computing resources on par with industry, two researchers from Stanford University argued in Semafor earlier this year.