Updated Sep 15, 2023, 11:53am EDT
tech

The Princeton researchers calling out ‘AI snake oil’

Semafor/Al Lucca

The Scene

In July, a new study about ChatGPT went viral on social media, seeming to validate growing suspicions that the chatbot had gotten “dumber” over time. As often happens in these circumstances, Arvind Narayanan and Sayash Kapoor stepped in as the voices of reason.

The Princeton computer science professor and Ph.D. candidate, respectively, are the authors of the popular newsletter and forthcoming book AI Snake Oil, which exists to “dispel hype, remove misconceptions, and clarify the limits of AI.”

The pair quickly put out a newsletter explaining how the paper’s findings had been grossly oversimplified. It was part of a series of similar articles Narayanan and Kapoor have published, filled with balanced criticism of AI and ideas for how to mitigate its harms. But don’t call them technophobes: One of the most charming things Narayanan has written about is how he built a ChatGPT voice interface for his toddler.
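
Narayanan hasn’t shared that project’s code here, so the following is only a rough sketch of what such a toy voice interface could look like, assuming the SpeechRecognition, pyttsx3, and openai Python packages and an OPENAI_API_KEY set in the environment; the model name and prompt are placeholders, not details from his actual setup.

```python
# Illustrative sketch only, not Narayanan's project: a bare-bones voice
# loop around a chat model (assumes SpeechRecognition, pyttsx3, openai).
import speech_recognition as sr
import pyttsx3
from openai import OpenAI

client = OpenAI()        # reads OPENAI_API_KEY from the environment
tts = pyttsx3.init()     # offline text-to-speech engine
recognizer = sr.Recognizer()

SYSTEM_PROMPT = ("You are a friendly assistant talking to a small child. "
                 "Keep answers short and simple.")
history = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    with sr.Microphone() as mic:          # capture one spoken question
        audio = recognizer.listen(mic)
    try:
        question = recognizer.recognize_google(audio)   # speech -> text
    except sr.UnknownValueError:
        continue                          # couldn't parse; listen again

    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",              # placeholder model name
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})

    tts.say(reply)                        # text -> speech
    tts.runAndWait()
```

The loop simply transcribes one spoken question at a time, sends the running conversation to the chat API, and speaks the reply aloud.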

In the edited conversation below, we talked to Narayanan and Kapoor about transparency reporting, disinformation, and why they are confident AI doesn’t pose an existential risk to humanity.

The View From Arvind Narayanan and Sayash Kapoor

Q: How can consumers quickly evaluate whether a new AI company is selling snake oil or actually offering a reasonable application of this technology?

Narayanan: The key distinction that we make is between predictive AI and generative AI. In our view, most of the snake oil is concentrated in predictive AI. When we say snake oil, we mean AI that doesn’t work at all, not just AI that doesn’t live up to its hype, though there’s certainly some of that going on in generative AI.

You have AI hiring tools, for instance, which screen people based on questions like, “Do you keep your desk clean?” or by analyzing their facial expressions and voice. There’s no basis to believe that kind of prediction has any statistical validity at all. There have been zero studies of these tools, because researchers don’t have access and companies are not publishing their data.

We very strongly suspect that there are entire sectors like this that are just selling snake oil. And it’s not just companies; there’s a lot of snake oil in academia as well. There was this paper that claimed to predict whether a psychology study will replicate or not using machine learning. That paper has basically all the pitfalls we could think of, and I would very much call it snake oil. It’s claiming that you can predict the future using AI, and that’s the thing that grinds our gears the most.
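
Narayanan doesn’t list the pitfalls here, but one that he and Kapoor have documented at length in ML-based science is data leakage, where information from the test data sneaks into training and inflates reported accuracy. The sketch below is purely illustrative and is not drawn from the replication-prediction paper: selecting features on the full dataset before the train/test split lets a model appear to predict labels that are, by construction, random noise. It assumes NumPy and scikit-learn.

```python
# Illustrative sketch of one common pitfall (data leakage): feature
# selection is fit on ALL the data, so spurious correlations leak into
# the held-out test set and inflate accuracy on an unpredictable task.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2000))   # pure noise features
y = rng.integers(0, 2, size=200)   # random labels: nothing is predictable

# Leaky protocol: pick "informative" features using every label first.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)
leaky_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# Correct protocol: keep selection inside the training fold via a pipeline.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(SelectKBest(f_classif, k=20),
                      LogisticRegression(max_iter=1000))
clean_acc = model.fit(X_tr, y_tr).score(X_te, y_te)

print(f"leaky accuracy:   {leaky_acc:.2f}")   # often well above chance
print(f"correct accuracy: {clean_acc:.2f}")   # hovers around 0.5
```

Run end to end, the leaky protocol typically scores well above 50 percent on what should be an unpredictable task, while the pipeline version stays near chance.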

Q: One of the big worries about generative AI is that it will create a flood of disinformation. But you argue that some of the solutions being proposed, such as adding watermarks to AI-generated images, won’t work. Why?

Kapoor: First, when we focus on AI’s role in disinformation, it takes the focus away from where solutions actually work, for instance, information integrity efforts on social media. If you recall the Pentagon hoax, the photo was of a fake Pentagon bombing, and the entire reason it was successful to some extent was that it was spread by a verified Twitter account. The image had clear visual artifacts of AI, the fences were blending into each other; it was just a really shoddy job. If we focus on watermarking and the role of AI in spreading disinformation, I think we lose sight of this bigger picture.

The other part is that AI genuinely does lead to new types of harms, which are, in our view, much more impactful than disinformation. One example is non-consensual deepfakes. This is an area where you don’t need information to spread virally for it to cause harm. You can have a targeted campaign that attacks just one individual, and it will cause immense psychological, emotional, and financial damage. It’s a problem that we feel is relatively unaddressed compared to all of the attention that disinformation is getting.

Q: You argue that AI companies should start publishing regular transparency reports, the same way social media giants like YouTube and Meta do. Why is that a good idea?

Kapoor: I don’t think there’s going to be one set of transparency standards that applies to all language models and then we’re done. I think the process has to be iterative, it has to take into account people’s feedback, and it has to improve over time. With that said, I think one reason why social media is a useful model is that it can provide us with an initial set of things that companies should report. As we pointed out recently, the entire debate about the harms of AI is happening in a data vacuum. We don’t know how often people are using ChatGPT for medical advice, legal advice, or financial advice. We don’t know how often it outputs hate speech, how often it defames people, and so on.

The first step towards understanding that is publishing user reports. This might seem like it’s technically infeasible — how do you understand how more than, say, 200 million people are using your platform? But again, in social media, this has already been done. Facebook releases these quarterly reports, which outline how much hate speech there is on the platform, how many people reported comments, and how many of those comments were taken down. I think that can be a great model as a starting point for foundation model providers.
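
There is no standard schema for what such a report from a model provider would contain, so the sketch below is a hypothetical illustration of the kind of aggregate it could publish: counts of flagged conversations per category and the actions taken in a quarter. The category names and fields are invented for this example, not drawn from Kapoor’s proposal or any platform’s actual reports.

```python
# Hypothetical sketch of the aggregates a transparency report could
# disclose. All category and field names are invented for illustration.
from collections import Counter
from dataclasses import dataclass

@dataclass
class FlaggedOutput:
    conversation_id: str
    category: str          # e.g. "medical_advice", "hate_speech", "defamation"
    action_taken: str      # e.g. "none", "refused", "removed"

def quarterly_report(flags: list[FlaggedOutput]) -> dict:
    """Roll flagged outputs up into the totals a report would publish."""
    return {
        "total_flagged": len(flags),
        "by_category": dict(Counter(f.category for f in flags)),
        "actions_taken": dict(Counter(f.action_taken for f in flags)),
    }

if __name__ == "__main__":
    sample = [
        FlaggedOutput("c1", "medical_advice", "none"),
        FlaggedOutput("c2", "hate_speech", "removed"),
        FlaggedOutput("c3", "hate_speech", "refused"),
    ]
    print(quarterly_report(sample))
```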

Q: People often learn about new AI research by reading preprint studies on the open-access archive arXiv.org. But the studies haven’t been peer-reviewed, and sometimes low-quality ones go viral or good ones are misinterpreted. Should there be better controls on arXiv.org?

Kapoor: I do think there are genuine concerns about arXiv papers being misinterpreted, and about the overall pace of research and how it affects research communities. Many people have said that maybe we should dismiss anything that hasn’t been peer-reviewed; I wholeheartedly disagree with that. We do need to improve our scientific processes and we need to improve peer review, but at the same time, it’s not as though everything that’s peer-reviewed is free of errors.

In our own research on studies that use machine learning, we found that even papers published in top journals tended to suffer from reproducibility issues. So while peer review is still important, I think an overreliance on it can also be harmful. ArXiv.org helps reduce gatekeeping in academia. Peer review tends to favor research that fits within established norms; arXiv can help level the playing field for people doing research that’s outside the box.

Q: Wealthy donors are pouring millions of dollars into organizations promoting the idea that artificial intelligence presents an existential risk to humanity. Is that risk real?

Narayanan: There are just so many fundamental flaws in the argument that x-risk [existential risk] is so serious that we need urgent action on it. We’re calling it a “tower of fallacies.” I think there are fallacies at every level. One is this idea that AGI is coming at us really fast, and a lot of that has been based on naive extrapolations of trends in the scaling up of these models. But if you look at the technical reality, scaling has already basically stopped yielding dividends. A lot of the arguments that this is imminent just don’t really make sense.

Another is that AI is going to go rogue, it’s going to have its own agency, it’s going to do all these things. Those arguments are being offered without any evidence by extrapolating based on [purely theoretical] examples. Whatever risks there are from very powerful AI, they will be realized earlier from people directing AI to do bad things, rather than from AI going against its programming and developing agency on its own.

So the basic question is, how are you defending against hacking or tricking these AI models? It’s horrifying to me that companies are ignoring those security vulnerabilities that exist today and instead smoking their pipes and speculating about a future rogue AI. That has been really depressing.

And the third really problematic thing about this is that all of the interventions that are being proposed will only increase every possible risk, including existential risks. The solution they propose is to concentrate power in the hands of a few AI companies.

Q: Is x-risk actually a big concern in the AI research community? Are you fielding questions about it from new students?

Narayanan: I think the median AI researcher is still interested in doing cool technical things and publishing stuff. I don’t think they are dramatically shifting their research because they’re worried about existential risk. A lot of researchers consider it intellectually interesting to work on alignment, but even among them, I don’t necessarily know that the majority think that x-risk is an imminent problem. So in that sense, what you’re seeing in the media exaggerates what’s actually going on in the AI research community.

Kapoor: I definitely agree that the median AI researcher is far from the position that x-risk is imminent. That said, I do think there are some selection effects. For instance, a lot of effective altruism organizations have made AI x-risk their top cause in the last few years. That means a lot of the people who are getting funding to do AI research are naturally inclined toward, but also have been specifically selected for, an interest in reducing AI x-risk.

I’m an international student here, and one of the main sources of fellowships is Open Philanthropy. Over the last five years or so, they have spent over $200 million on AI x-risk specifically. When that kind of shift happens, I think there’s also a distortion that happens. So even if we have a large number of people working on AI x-risk, it does not really mean that this interest arose organically. It has been very strategically funded by organizations that make x-risk a top area of focus.
