I just got back from a few days at SXSW. Well-adjusted people go there to listen to music, watch movies, and eat some good barbecue. I spent most of my time there talking with people about AI.

On Monday, I interviewed Shane Legg, co-founder and chief AGI scientist at Google DeepMind. That venture’s other two co-founders, Demis Hassabis and Mustafa Suleyman, have bigger public profiles, but Legg stands out in another way. When DeepMind was founded in 2010, the people who studied existential risk in AI usually weren’t the scientists actually building the technology. Legg was an exception: he was one of the few AI researchers willing to question the safety of the entire endeavor. Now, visions of AGI are beginning to materialize, and people like Legg believe it’s about to drastically change our world.

On the road to general intelligence, DeepMind’s greatest accomplishments have been in narrow areas. In a proof-of-concept demonstration, it defeated the world champion of the board game Go. It revolutionized biotech with its protein-folding breakthrough. It used AI to get us closer to nuclear fusion, and it upended the science of weather prediction.

So my question to Legg at the event: If building “general” intelligence is potentially dangerous, why not just keep building these narrow applications, which have already led to so many world-changing results?

One response to that is obvious: Somebody’s going to build it, so it might as well be us. But Legg said something else that I thought was interesting. As AI models get bigger, researchers are noticing a principle of crossover skills. Learning one language, for instance, allows an AI model to pick up entirely different languages more quickly. You might call it a kind of artificial wisdom. One vision of how AGI might play out, Legg said, is that we’ll have superintelligent models that, when tasked with a narrow problem, will create software tools to solve it. That’s what humans do today.
What Legg said was bouncing around in my head on Tuesday, when Google DeepMind briefed reporters on a new research project called Sima, an AI agent that learned the fundamentals of playing video games and can now play new games it’s never seen before, without any additional training. Some of the big breakthroughs in AI came from playing chess, Go, and video games, but those AI models were designed to win a particular game, not actually to understand it. Sima doesn’t know how to win video games, but there may be power in its ability to generally understand them. Winning could come later.

Legg told me he came to believe in 2001 that we would have a 50/50 chance of reaching AGI by 2028. He says that number is still about right. If that’s true, and all these recent breakthroughs are the final leaps toward truly intelligent machines, we are in for a massive disruption in our way of life. “This is a deep, deep change,” Legg said. “To be honest, it’s hard to wrap my head around.”