THE SCOOP

The White House is considering requiring cloud computing firms to report some information about their customers to the U.S. government, according to people familiar with an upcoming executive order on artificial intelligence.

The provision would direct the Commerce Department to write rules forcing cloud companies like Microsoft, Google, and Amazon to disclose when a customer purchases computing resources beyond a certain threshold. The order hasn’t been finalized, and its specifics could still change.

Similar “know-your-customer” policies already exist in the banking sector to prevent money laundering and other illegal activity, such as the rule requiring firms to report cash transactions exceeding $10,000. In this case, the rules are intended to create a system that would let the U.S. government identify potential AI threats ahead of time, particularly those coming from entities in foreign countries. If a company in the Middle East began building a powerful large language model using Amazon Web Services, for example, the reporting requirement would theoretically give American authorities an early warning.

The policy proposal represents a potential step toward treating computing power — or the technical capacity AI systems need to perform tasks — like a national resource. Mining Bitcoin, developing video games, and running AI models like ChatGPT all require large amounts of compute.

If the measure is finalized, it would be a win for organizations like OpenAI and the RAND Corporation think tank, which have been advocating for similar know-your-customer mechanisms in recent months. Others argue it could amount to a surveillance program if not implemented carefully.

“The details are really going to matter here,” said Klon Kitchen, a nonresident senior fellow at the American Enterprise Institute, where he focuses on national security and emerging technology. “I understand why the administration is trying to get at this issue.
We’re going to need a strategic understanding of adversarial development of these models.”

The White House declined to comment. The Department of Commerce directed questions to the White House.

LOUISE’S VIEW

One major challenge for this approach: the amount of computing power it takes to build powerful models like ChatGPT is falling rapidly, thanks to improvements in the algorithms used to train them. By the time the Commerce Department settles on a reporting threshold, it could already be out of date, and keeping it current would mean chasing a moving target.

Instead, Commerce could look for other, more qualitative indicators of whether an organization’s computing usage is cause for alarm. But that would require cloud firms to surveil their customers extensively — customers with whom they often have conflicts of interest. Microsoft, for example, is a major investor in OpenAI. If a promising startup began buying computing resources from Azure to build a ChatGPT competitor, Microsoft would have to report that activity to U.S. authorities under this provision.

Sayash Kapoor, a researcher at Princeton University who studies the societal impacts of AI, noted that the policy would effectively apply to only one kind of technology: large language models. Other AI tools that have been used for harmful purposes, such as facial recognition algorithms, require far less compute to build and run, meaning they likely wouldn’t meet the threshold. “If we’re looking at it from a harms perspective, I think this is very shortsighted,” Kapoor said.