

Despite the AI safety hype, a new study finds little research on the topic

Apr 3, 2024, 12:33pm EDT
tech

The Scoop

In public policy conversations about artificial intelligence, “safety research” is one of the biggest topics that has helped drive new regulations around the world.

But according to a new study, there appears to be more talk about safety than hard data.

AI safety accounts for only 2% of overall AI research, according to the study, which was conducted by Georgetown University’s Emerging Technology Observatory and shared exclusively with Semafor.

Georgetown found that American scholarly institutions and companies are the biggest contributors to AI safety research, but that output pales in comparison to the overall volume of AI studies, raising questions about public and private sector priorities.


Of the 172,621 AI research papers published by American authors between 2017 and 2021, only 5% were on safety. For China, the gap was even starker, with only 1% of published research focusing on AI safety.

Nevertheless, studies on the topic are on the rise globally, with AI safety research papers more than quadrupling between 2017 and 2022.


Know More

Georgetown’s Emerging Technology Observatory is part of the Center for Security and Emerging Technology (CSET), which has received over $100 million in funding from Open Philanthropy, the charity backed by Facebook co-founder Dustin Moskovitz, a major advocate of AI safety.

Moskovitz and the field of AI safety in general are tied to the Effective Altruism movement, which hopes to curb existential risks to humanity, such as runaway AI systems.


AI safety is a hot-button issue in the tech industry and has spawned a countermovement, Effective Accelerationism, whose adherents believe that focusing on the technology’s risks does more harm than good by hindering critical progress.

Recently, the definition of AI safety has expanded beyond existential risks to include concerns such as bias and labor issues. That trend has drawn criticism from some in the AI safety field and praise from others.

The Georgetown researchers who conducted the study used this broader definition of AI safety research, rather than limiting it to existential risks.


The researchers relied on metadata from a database of 260 million scholarly articles maintained by the Emerging Technology Observatory and CSET. The study defined an AI safety article as one that “subject matter experts would consider highly relevant to the topic,” a definition that requires some judgment calls on the part of the researchers.


Reed’s view

As the researchers note, not all safety research comes in the form of a public research paper. Tech companies would argue that AI safety is built into the work they do. There is also the counterintuitive argument that researchers have to build advanced AI in order to understand how to protect against it.

In a recent interview with Lex Fridman, OpenAI CEO Sam Altman said that at some point in the future, AI safety will be “mostly what we think about” at his firm. “More and more of the company thinks about those issues all the time,” he said. Still, OpenAI did not show up as a major contributor to AI safety research in the Georgetown study.

The Effective Accelerationist argument is that the risks of AI are overblown. And 30,000 AI safety papers over five years sounds significant, considering how nascent this technology is. How many papers on automobile safety were written before the Model T was invented and sold?

What makes less sense is proposing stringent AI regulations without also advocating for a massive increase in grant money for AI research, including funding the compute power academics need to study massive new AI models.

President Joe Biden’s executive order on AI does include provisions for AI safety research. The Commerce Department’s new AI Safety Institute is one example. And the National Artificial Intelligence Research Resource pilot program aims to add more compute power for researchers.

But these measures don’t even begin to keep up with the advances being made in industry.

Big technology companies are currently constructing supercomputers so enormous they would have been difficult to contemplate a few years ago. They will soon find out what happens when AI models are scaled to unfathomable levels, and they will likely keep those trade secrets close to the vest.

To get their hands on that kind of compute power, AI safety researchers will have to work for those companies.

As the CSET study points out, Google and Microsoft are among the biggest contributors to published AI safety research.

But much of that research came out of an era before ChatGPT. Consumer interest in generative AI has changed the commercial landscape, and we’re now seeing fewer research papers come out of big technology companies, which are mostly keeping their breakthroughs behind closed doors.

If elected officials really care about AI safety going forward, they would likely accomplish more by allocating taxpayer dollars to basic AI research than by passing a comprehensive AI bill while we still know so little about how this technology will change society even five years from now.


Room for Disagreement

One argument is that AI safety research is a futile endeavor, and the only way to ensure AI is safe is to pause its development. Tamlyn Hunt argued in Scientific American: “Imagining we can understand AGI/ASI [Artificial General Intelligence and Artificial Super Intelligence], let alone control it, is like thinking a strand of a spider’s web could restrain Godzilla. Any solutions we can develop will be only probabilistic, not airtight. With AGI likely fooming into superintelligence essentially overnight, we can’t accept probabilistic solutions because AI will be so smart it will exploit any tiny hole, no matter how small. (Has the “foom” already happened? Suggestive reports about “Q*” in the wake of the bizarre drama at Open AI in November suggest that foom may be real already.)”


Notable

  • AI is potentially so transformative — and destructive — that it is often compared to nuclear weapons. In that analogy, it would be as if the U.S. government had allowed the private sector to be entirely responsible for creating the nuclear bomb. In a Salon article, Jacy Reese Anthis argues that we need a Manhattan Project for AI.