Juan M. Lavista Ferres is the corporate vice president and chief data scientist of the AI for Good Lab at Microsoft. His new book, co-written with colleague William Weeks, is AI for Good.

Q: What are some ways people are using [AI] that show the benefits?

A: Remember that 4 billion people in the world do not have access to doctors. That's half of the world. Currently, the only solution we have as a society is to make the doctors we do have more productive, so they can reach more people.

There's one initiative we're working on now in Mexico and Colombia focusing on retinopathy of prematurity, one of the leading causes of blindness in children. It affects babies who are born prematurely. There are only 200,000 ophthalmologists in the world and millions of babies born prematurely, so it's physically impossible to diagnose the disease for every baby. We've got an AI model that runs on a phone and can diagnose the disease as well as an ophthalmologist.

Q: The AI landscape is moving so fast. Do you think that within a year, all of these examples in your book will look old?

A: Not so much. We have to remember that AI is not new. We've been working with some of these algorithms for the last 20 or 30 years. Even deep learning and neural nets have been around for over 30 years. What has dramatically changed is our ability to train these very big large language models.

When I started working in AI 20 years ago, I realized that natural language processing is even more difficult than working with images. Text is a really difficult problem. With large language models, it's been a step function. If you had asked me 10 years ago, I would not have thought we'd get here this fast.

There are a lot of huge problems that we're revisiting because before we couldn't solve them, and now we can. But there are still a lot of problems that haven't changed, and that we can still solve with the techniques we were using five years ago.
So we'll still be running classification models in 15 or 20 years.

Q: Five years from now, do you see superintelligent, general-purpose models (some people might call it AGI) helping to solve these problems around the world?

A: I usually try to stay away from the AGI conversation in general. I think these models will continue to become better. The reason I shy away from the AGI conversation is that the way people define AGI is through tests. We used to have the Turing test, and of course a lot of LLMs will pass the Turing test. One of the newer tests is the IKEA test: a [robotic AI] agent goes to your house, opens a box, and assembles the furniture. I would not pass that test.

I focus on: We have this technology. It can be used to solve problems. The discussion should be about that.

These models like GPT are much more general now. We used to train very specific models. Now we have zero-shot learning, where we don't need a training set. Just put the information there, and the models are able to solve problems that, before, we would have needed to train models for. That's already showing value from a general-purpose perspective.

I'm still focusing on: We have a problem, we have a solution, we have a tool to solve the problem. Models will keep improving. Clearly, what we saw in November 2022 was a step function. But I don't expect another big step function.

Check out the rest of the conversation, including whether Lavista Ferres used AI to write the book.
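The zero-shot pattern Lavista Ferres describes, where candidate labels are supplied at inference time instead of being learned from a labeled training set, can be sketched with a toy example. The `zero_shot_classify` function below is a hypothetical stand-in: a real system would call an LLM or an embedding model, but the shape of the call (no training step, labels arrive with the request) is the point.

```python
# Toy sketch of zero-shot classification. The "model" here is a crude
# word-overlap score standing in for the semantic matching an LLM would
# perform; the key property is that no training set is involved.

def zero_shot_classify(text: str, labels: list[str]) -> str:
    """Return the candidate label whose words overlap most with the text."""
    tokens = set(text.lower().split())
    # Score each label supplied at inference time -- contrast this with a
    # classic classifier, which could only emit labels it was trained on.
    scores = {label: len(tokens & set(label.lower().split())) for label in labels}
    return max(scores, key=scores.get)

# No training step: the labels are simply part of the request.
print(zero_shot_classify(
    "the retina scan shows abnormal vessel growth in the premature infant",
    ["healthy retina", "retinopathy of prematurity in premature infant"],
))
```

Swapping in new labels requires no retraining, which is the practical difference from the task-specific models the interview contrasts it with.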