September 15, 2023
Technology

Louise Matsakis

Hi, and welcome back to Semafor Tech. The AI industry is moving at a breakneck pace, and it’s often difficult to figure out what’s happening and what it means. When I sort through the deluge of new information each week, Arvind Narayanan and Sayash Kapoor frequently stand out as voices of reason.

The Princeton computer science professor and Ph.D. candidate, respectively, are the authors of the popular newsletter and forthcoming book AI Snake Oil, which exists to “dispel hype, remove misconceptions, and clarify the limits of AI.”

Their takes on some of the most controversial issues in the sector, from disinformation to existential risk, are refreshing and thoughtful. But what makes their work especially compelling is that neither Narayanan nor Kapoor is a doomsayer about AI. They acknowledge the technology’s benefits, but also believe that constructive criticism is a crucial part of human progress. Read my interview with them below, in which we discuss where AI research funding comes from, why OpenAI should publish transparency reports, and the pitfalls of peer review.

And to stay on top of all the climate news next week from the UN General Assembly meeting and New York Climate Week, Semafor Net Zero will be publishing a special daily newsletter for the action-packed gatherings. Sign up here.

Move Fast/Break Things

➚ MOVE FAST: Hold ‘em. Investors are betting big on chip designer Arm, whose products power iPhones and other advanced consumer electronics. Shares soared 25% when the company went public Thursday, prompting Instacart to raise its IPO price range.

➘ BREAK THINGS: Fold ‘em. A pair of major cyberattacks hit casino giants MGM Resorts and Caesars. The house lost at the latter, to the tune of a $15 million ransom.

Artificial Flavor

Diffusion models are best known as the AI technique behind image-generation tools like Stable Diffusion and DALL-E 2. Microsoft researchers published a paper detailing their work using such models to generate new proteins. It isn’t the first study to apply diffusion models to AI drug discovery, but it’s notable that it came out of Microsoft. Lately, it seems every company is doing some work in AI and biotech.
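For readers curious about the mechanics, here is a minimal, hypothetical sketch of the core idea behind diffusion models: generating a sample by iteratively denoising random noise. The predict_noise function and step count below are illustrative stand-ins, not Microsoft’s actual method, which relies on a neural network trained to predict the noise at each step.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
T = 1000  # number of denoising steps

def predict_noise(x, t):
    # Hypothetical stand-in for a trained denoiser. In a real diffusion
    # model, a neural network learns to predict the noise present in x
    # at timestep t, whether x encodes pixels or a protein structure.
    return x * (t / T)

x = rng.standard_normal(64)  # start from pure Gaussian noise
for t in range(T, 0, -1):
    x -= predict_noise(x, t) / T  # peel away a little predicted noise each step
# x now holds the "generated" sample
```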

Q&A

Princeton computer science professor Arvind Narayanan and Ph.D. candidate Sayash Kapoor are the authors of the popular newsletter and forthcoming book AI Snake Oil.

Q: How can consumers quickly evaluate whether a new AI company is selling snake oil or actually offering a reasonable application of this technology?

Narayanan: The key distinction we make is between predictive AI and generative AI. In our view, most of the snake oil is concentrated in predictive AI. When we say snake oil, we mean AI that doesn’t work at all, not just AI that doesn’t live up to its hype; there’s certainly some of the latter going on in generative AI.

You have AI hiring tools, for instance, which screen people based on questions like, “Do you keep your desk clean?” or by analyzing their facial expressions and voice. There’s no basis to believe that kind of prediction has any statistical validity at all. There have been zero studies of these tools, because researchers don’t have access and companies are not publishing their data.

We very strongly suspect that there are entire sectors like this that are just selling snake oil. And it’s not just companies; there’s a lot of snake oil in academia as well. There was a paper that claimed to predict whether a psychology study will replicate using machine learning. That paper has basically all the pitfalls we could think of, and I would very much call it snake oil. It’s claiming that you can predict the future using AI, and that’s the thing that grinds our gears the most.

Q: Wealthy donors are pouring millions of dollars into organizations promoting the idea that artificial intelligence presents an existential risk to humanity. Do you buy that argument?

Narayanan: There are just so many fundamental flaws in the argument that x-risk [existential risk] is so serious that we need urgent action on it. We’re calling it a “tower of fallacies.” I think there’s just fallacies on every level. One is this idea that AGI is coming at us really fast, and a lot of that has been based on naive extrapolations of trends in the scaling up of these models. But if you look at the technical reality, scaling has already basically stopped yielding dividends. A lot of the arguments that this is imminent just don’t really make sense.

Another is that AI is going to go rogue, it’s going to have its own agency, it’s going to do all these things. Those arguments are being offered without any evidence by extrapolating based on [purely theoretical] examples. Whatever risks there are from very powerful AI, they will be realized earlier from people directing AI to do bad things, rather than from AI going against its programming and developing agency on its own.

So the basic question is, how are you defending against hacking or tricking these AI models? It’s horrifying to me that companies are ignoring those security vulnerabilities that exist today and instead smoking their pipes and speculating about a future rogue AI. That has been really depressing.

And the third really problematic thing about this is that all of the interventions that are being proposed will only increase every possible risk, including existential risks. The solution they propose is to concentrate power in the hands of a few AI companies.

Q: Is x-risk actually a big concern in the AI research community? Are you fielding questions about it from new students?

Narayanan: I think the median AI researcher is still interested in doing cool technical things and publishing stuff. I don’t think they are dramatically shifting their research because they’re worried about existential risk. A lot of researchers consider it intellectually interesting to work on alignment, but even among them, I don’t necessarily know that the majority think that x-risk is an imminent problem. So in that sense, what you’re seeing in the media exaggerates what’s actually going on in the AI research community.

Kapoor: I definitely agree that the median AI researcher is far from the position that x-risk is imminent. That said, I do think there are some selection effects. For instance, a lot of effective altruism organizations have made AI x-risk their top cause in the last few years. That means many of the people getting funding to do AI research are naturally inclined toward, and have also been specifically selected for, an interest in reducing AI x-risk.

I’m an international student here, and one of the main sources of fellowships is Open Philanthropy. Over the last five years or so, they have spent over $200 million on AI x-risk specifically. When that kind of shift happens, I think there’s also a distortion that happens. So even if we have a large number of people working on AI x-risk, it does not really mean that this interest arose organically. It has been very strategically funded by organizations that make x-risk a top area of focus.

For the rest of the conversation, read here.

Semafor Stat

Decrease in the number of average daily posts and comments on most major subreddits in 2023 compared to last year, according to an analysis by the tech newsletter Garbage Day. The data indicates that recent protests over Reddit’s API changes, which led many of the site’s top forums to temporarily go dark, may have had a larger impact on user behavior than previously known.

China Window

It’s been 12 months since Chinese e-commerce giant Pinduoduo set up shop in the United States, where its international subsidiary, Temu, has become widely popular. The app has been downloaded about 100 million times in the U.S. since last September and often occupies the number one spot in local app stores, while its website was visited roughly 286 million times in the last month, according to data from Sensor Tower and Similarweb.

Even USPS workers say they are sick of hauling around Temu’s signature bright orange packages. “I’m tired of this Temu shit, y’all killing me,” one mailman says in a TikTok video that has been liked two million times. “Every day it’s Temu, Temu, Temu — I’m Temu tired.”

Behind Temu’s overnight success is a relentless, expensive marketing campaign, which included a Super Bowl TV ad. In the first month after Temu launched, it spent a whopping $140 million on advertising and promotions, according to one estimate. Millions of people are buying stuff from the company, but they don’t necessarily like or trust it. If you search for Temu on TikTok, for instance, some of the most popular related terms are “scamming explained” and “dangerous.”

U.S. shoppers are typically more skeptical than their Chinese counterparts about websites that sell products for what look like impossibly low prices. And Temu’s corporate secrecy has arguably made that issue worse. The company usually doesn’t respond to inquiries from journalists, and shares few details about itself online. Influencers have stepped into the information vacuum, accusing Temu and its products of being malicious or fraudulent. If Temu wants people to keep buying after its marketing budget runs out, it will likely need to become more transparent.

Hot On Semafor
  • Why these three countries in the Congo Basin pose the highest risk for Africa’s next coup.
  • Dealmaking is dead, and despite rosy forecasts from bankers whose bonuses depend on it, conditions aren’t great for a rebound.
  • In books, Biden is an energetic leader. Too bad nobody reads them.