March 8, 2024
Semafor Technology
 
Reed Albergotti

Hi, and welcome back to Semafor Tech.

Since ChatGPT launched in November 2022, we’ve been covering the AI safety movement pretty closely. We followed the “pause” letter from the Future of Life Institute, which called for a six-month break in AI development before it was too late to stop a threat to humanity.

I interviewed Skype founder Jaan Tallinn about his failure to rein in what he viewed as a dangerous march toward superintelligence.

In other words, AI safety was about AI not killing us. But lately, I’ve found myself using the term to describe a lot more than safety in the literal sense of the word. And that shift in vocabulary has been on my mind.

What I didn’t realize is that this is no accident. There’s a debate happening in the AI industry and among those who hope to corral it in various ways. The momentum is to include more issues in the “safety” bucket, using the term as a catch-all for any possible impact of AI.

But pushback is brewing. Some AI issues, such as content moderation, are inherently political. Just look at what happened to Google’s Gemini chatbot and image generator in recent weeks. There’s a worry that politics might distract from or erode non-political AI issues. Read below for more details.

And I’m on my way to SXSW and will be keeping an eye on all things AI while I’m there. Let me know if you have any tips.

Move Fast/Break Things

➚ MOVE FAST: Two can play. China is raising $27 billion to funnel into its chips industry, a fund that will exceed its last one. As tech competition intensifies, the U.S. is finally doling out the $50 billion it set aside to boost domestic semiconductor manufacturing. But China is still ahead in government investment; this latest effort marks its third such fund.

➘ BREAK THINGS: Play the market. Reddit will kick off its roadshow next week before its IPO later this month. The price range reported by the FT puts its valuation at about $6.5 billion, compared to $10 billion in 2021. Overall, the IPO markets have been choppy and Reddit’s own troubles, from boycotts to content controversies, mean it’s making a big bet.

Reed Albergotti

What should ‘AI safety’ mean?

THE SCENE

The concept of “AI safety” has been expanding to include everything from the threat of human extinction to algorithmic bias to concerns that AI image generators aren’t diverse.

Researcher Eliezer Yudkowsky, who’s been warning about the risks of AI for more than a decade on podcasts and elsewhere, believes that lumping all of those concerns into one bucket is a bad idea. “You want different names for the project of ‘having AIs not kill everyone’ and ‘have AIs used by banks make fair loans,’” he said. “Broadening definitions is usually foolish, because it is usually wiser to think about different problems differently, and AI extinction risk and AI bias risk are different risks.”

Yudkowsky, an influential but controversial figure for his alarmist views on AI (he wrote in Time about the potential necessity of ordering airstrikes on AI data centers), isn’t alone in his views. Others in the AI industry worry that “safety” in AI, which has come to underpin guardrails that companies are implementing, may become politicized as it grows to include hot button social issues like bias and diversity. That could erode its meaning and power, even as it receives huge public and private investment, and unprecedented attention.

“It’s better to just be more precise about the concerns that you have,” said Anthony Aguirre, executive director of the Future of Life Institute, which has long focused on existential risks posed by AI. “If you’re talking about deepfakes, talk about deepfakes. And if you’re talking about the risk of open-source foundation models, talk about that. My guess is we’re maybe at maximum safety coverage before we start using other adjectives.”


What, exactly, should be included under the AI safety umbrella was the subject of a panel discussion Tuesday at the National Artificial Intelligence Advisory Committee, which is made up of business executives, academics and others who advise the White House.

One attendee told Semafor that the meeting reinforced the growing emphasis in the industry on making sure the term encompasses a broader array of harms than just physical threats.

But the attendee said one worry with the expanded definition is that it lumps inherently political concepts like content moderation in with non-political issues like mitigating the risk of bioweapons. The risk is that AI safety becomes synonymous with what conservatives view as “woke AI.”

For most of the past decade, the term “AI safety” was used colloquially by a small group of people concerned about the largely theoretical, catastrophic risks of artificial intelligence. And until recently, people working in other fields focused on issues like algorithmic bias and surveillance viewed themselves as working in entirely separate fields.

Now, as generative AI products like ChatGPT have put the technology on the forefront of every industry, AI safety is becoming an umbrella term that lumps nearly every potential downside of software automation into a single linguistic bucket. And decisions like which ethnicities AI image generators should include are considered part of AI safety.

The newly created government agency, the AI Safety Institute, for example, includes in its mandate everything from nuclear weapons to privacy to workforce skills. And Google recently folded 90% of an AI ethics group, called the Responsible Innovation Team, into its Trust and Safety team, a company spokesman told Wired.

Mike Solana, a vice president at Founders Fund and author of the newsletter Pirate Wires, is a frequent critic of content moderation policies at big tech companies.

“The purposeful confusion of terms here is totally absurd,” he said. “Obviously there is a difference between mitigating existential risk for human civilization and the question of whether or not there are enough Muslim women in headscarves turning up when you try to generate a picture of an auto mechanic. But that confusion of terms benefits people obsessed with the DEI stuff, and they’re very good at navigating bureaucracies, so here we are.”


Live Journalism

Lael Brainard, Director of the White House National Economic Council; Xavier Becerra, U.S. Secretary of Health and Human Services; Julie Sweet, CEO of Accenture; and David Zapolsky, SVP of Global Public Policy & General Counsel at Amazon, have joined the world-class line-up of global economic leaders for the 2024 World Economy Summit, taking place in Washington, D.C. on April 17-18. See all speakers, sessions & RSVP here.

Semafor Stat

The amount of money Chinese online retailer Temu spent on advertising on Meta’s platforms last year, according to the Wall Street Journal. Temu, which became popular in the U.S. by offering goods at a cheap price, has mounted a massive marketing campaign in America. It advertised during the recent Super Bowl and was also a top five spender at Google.

Friends of Semafor

The future is here – stay ahead of it with Gizmodo. Founded in 2002 as one of the internet’s very first independent tech news sites, Gizmodo’s free newsletter brings you comprehensive coverage on the biggest and most important businesses driving tech innovation such as Nvidia, Tesla, and Microsoft. Add Gizmodo to your media diet – subscribe for free here.

Watchdogs

TikTok gained a strange ally in its fight in Washington. Through its app, the firm urged users to contact their members of Congress to voice opposition to a plan that would force parent company ByteDance to divest TikTok or see the service banned from app stores.

Backers of the bill called it an intimidation campaign after representatives’ offices were flooded with calls. The plan ended up passing out of the House Energy and Commerce Committee in a rare 50-0 vote. Now it’s on a fast track to the full U.S. House for a vote next week.

Enter Donald Trump. The former president also tried to ban TikTok and failed. But last night, he said if Congress makes a similar move, Facebook and “Zuckerschmuck” will double their business. But our Principals colleagues are reporting that Republicans are moving forward, defying their presumptive presidential nominee.


What We’re Tracking

After years of Silicon Valley proudly spurning the U.S. military, the defense tech revolution has arrived in full force. In the Middle East, AI is being used to identify targets for air strikes, the Pentagon has launched the Replicator initiative in a bid to procure thousands of cheap, autonomous drones, and over 800 AI projects are underway within the U.S. military.

Defense tech will also have a big presence at SXSW, with dozens of defense officials pitching startups on working with their departments.

“So many senior decision-makers have said it’s important for us to have people on the ground talking to venture capitalists, talking to startups,” said Lieutenant Gray Chynoweth, deputy director for strategy and engagement at the Navy’s innovation hub, NavalX. “We don’t just want to be here to eat barbecue.”


The Navy is bringing its biggest-ever presence to the festival, Chynoweth told Semafor, and will be hosting senior officials from the CIA-backed venture capital firm In-Q-Tel, the Pentagon’s R&D agency DARPA, and the Defense Innovation Unit.

The Navy’s wishlist at SXSW includes capabilities for a maritime internet of things, unmanned surface and underwater vehicles, and AI software to crunch the torrent of data the Navy is collecting. “We’re able to present these problems to all these brilliant startups that are looking for a way to do what they love doing, which is solve problems for users,” Chynoweth said.

“What you’re going to find uniformly across the innovation ecosystem in the DoD is that we’re excited about the companies,” Chynoweth said. “That’s what’s going to wake us up early, that’s what’s going to keep us up late.”

— Mathias Hammer
