Princeton computer science professor Arvind Narayanan and Ph.D. candidate Sayash Kapoor are the authors of the popular newsletter and forthcoming book AI Snake Oil.

Q: How can consumers quickly evaluate whether a new AI company is selling snake oil or actually offering a reasonable application of this technology?

Narayanan: The key distinction we make is between predictive AI and generative AI. In our view, most of the snake oil is concentrated in predictive AI. When we say snake oil, we mean AI that doesn't work at all, not just AI that doesn't live up to its hype; there's certainly some of that going on in generative AI.

You have AI hiring tools, for instance, which screen people based on questions like, "Do you keep your desk clean?" or by analyzing their facial expressions and voice. There's no basis to believe that kind of prediction has any statistical validity at all. There have been zero studies of these tools, because researchers don't have access and companies are not publishing their data. We very strongly suspect that there are entire sectors like this that are just selling snake oil.

And it's not just companies; there's a lot of snake oil in academia as well. There was a paper claiming to predict whether a psychology study will replicate using machine learning. That paper has basically all the pitfalls we could think of, and I would very much call it snake oil. It's claiming that you can predict the future using AI, and that's the thing that grinds our gears the most.

Q: Wealthy donors are pouring millions of dollars into organizations promoting the idea that artificial intelligence presents an existential risk to humanity. Is that true?

Narayanan: There are so many fundamental flaws in the argument that x-risk [existential risk] is so serious that we need urgent action on it. We're calling it a "tower of fallacies"; I think there are fallacies on every level.

One is the idea that AGI is coming at us really fast, and a lot of that has been based on naive extrapolations of trends in the scaling up of these models. But if you look at the technical reality, scaling has already basically stopped yielding dividends. A lot of the arguments that this is imminent just don't make sense.

Another is that AI is going to go rogue, have its own agency, and do all these things. Those arguments are being offered without any evidence, by extrapolating from [purely theoretical] examples. Whatever risks there are from very powerful AI, they will be realized earlier from people directing AI to do bad things, rather than from AI going against its programming and developing agency on its own. So the basic question is, how are you defending against hacking or tricking these AI models? It's horrifying to me that companies are ignoring the security vulnerabilities that exist today and instead smoking their pipes and speculating about a future rogue AI. That has been really depressing.

And the third really problematic thing is that all of the interventions being proposed would only increase every possible risk, including existential risk. The solution they propose is to concentrate power in the hands of a few AI companies.

Q: Is x-risk actually a big concern in the AI research community? Are you fielding questions about it from new students?

Narayanan: I think the median AI researcher is still interested in doing cool technical things and publishing stuff. I don't think they are dramatically shifting their research because they're worried about existential risk. A lot of researchers consider it intellectually interesting to work on alignment, but even among them, I don't know that the majority think x-risk is an imminent problem. So in that sense, what you're seeing in the media exaggerates what's actually going on in the AI research community.

Kapoor: I definitely agree that the median AI researcher is far from the position that x-risk is imminent. That said, I do think there are some selection effects. For instance, a lot of effective altruism organizations have made AI x-risk their top cause in the last few years. That means a lot of the people who are getting funding to do AI research are not only naturally inclined toward reducing AI x-risk but have also been specifically selected for that interest. I'm an international student here, and one of the main sources of fellowships is Open Philanthropy. Over the last five years or so, they have spent over $200 million on AI x-risk specifically.

When that kind of shift happens, I think it also creates a distortion. So even if we have a large number of people working on AI x-risk, it does not really mean that this interest arose organically. It has been very strategically funded by organizations that make x-risk a top area of focus.

For the rest of the conversation, read here.