THE SCENE

The concept of “AI safety” has been expanding to include everything from the threat of human extinction to algorithmic bias to concerns that AI image generators aren’t diverse. Researcher Eliezer Yudkowsky, who has been warning about the risks of AI for more than a decade on podcasts and elsewhere, believes that lumping all of those concerns into one bucket is a bad idea.

“You want different names for the project of ‘having AIs not kill everyone’ and ‘have AIs used by banks make fair loans,’” he said. “Broadening definitions is usually foolish, because it is usually wiser to think about different problems differently, and AI extinction risk and AI bias risk are different risks.”

Yudkowsky, an influential but controversial figure for his alarmist views on AI (he wrote in Time about the potential necessity of ordering airstrikes on AI data centers), isn’t alone. Others in the AI industry worry that “safety” in AI, the concept underpinning the guardrails companies are now implementing, may become politicized as it grows to include hot-button social issues like bias and diversity. That could erode its meaning and power, even as the field receives huge public and private investment and unprecedented attention.

“It’s better to just be more precise about the concerns that you have,” said Anthony Aguirre, executive director of the Future of Life Institute, which has long focused on existential risks posed by AI. “If you’re talking about deepfakes, talk about deepfakes. And if you’re talking about the risk of open-source foundation models, talk about that. My guess is we’re maybe at maximum safety coverage before we start using other adjectives.”

What, exactly, should be included under the AI safety umbrella was the subject of a panel discussion Tuesday at the National Artificial Intelligence Advisory Committee, which is made up of business executives, academics, and others who advise the White House. One attendee told Semafor that the meeting reinforced the industry’s growing emphasis on making sure the term encompasses a broader array of harms than just physical threats. But the attendee said one worry with the expanded definition is that it lumps inherently political concepts, like content moderation, in with non-political issues, like mitigating the risk of bioweapons. The risk is that AI safety becomes synonymous with what conservatives view as “woke AI.”

For most of the past decade, the term “AI safety” was used colloquially by a small group of people concerned about the largely theoretical, catastrophic risks of artificial intelligence. Until recently, people focused on issues like algorithmic bias and surveillance viewed themselves as working in entirely separate fields.

Now, as generative AI products like ChatGPT have put the technology at the forefront of every industry, AI safety is becoming an umbrella term that lumps nearly every potential downside of software automation into a single linguistic bucket. Even decisions like which ethnicities AI image generators should depict are considered part of AI safety.

The newly created AI Safety Institute, a government body, includes in its mandate everything from nuclear weapons to privacy to workforce skills. And Google recently folded 90% of an AI ethics group, called the Responsible Innovation Team, into its Trust and Safety team, a company spokesman told Wired.
Mike Solana, a vice president at Founders Fund and author of the newsletter Pirate Wires, is a frequent critic of content moderation policies at big tech companies. “The purposeful confusion of terms here is totally absurd,” he said. “Obviously there is a difference between mitigating existential risk for human civilization and the question of whether or not there are enough Muslim women in headscarves turning up when you try to generate a picture of an auto mechanic. But that confusion of terms benefits people obsessed with the DEI stuff, and they’re very good at navigating bureaucracies, so here we are.”