Yet another AI researcher went on Dwarkesh Patel's podcast and proclaimed the age of scaling is essentially over. This time it was Ilya Sutskever, the OpenAI co-founder who now runs Safe Superintelligence, a well-funded and very secretive AI firm.

Sutskever gave the world a hint of what he's working on at SSI. One concept he threw out is that some human skills could be more hard-coded by evolution than we often admit, which explains why we can understand the visual world, or do things like deftly grabbing a fragile object, with little training, but computers can't.

Another concept is that humans have mysterious "value functions," essentially incentives that motivate our actions. He mentioned research in which a person with a stroke lost the ability to feel any emotion. And for some reason, that person became bad at making any decisions, suggesting that emotions act as some kind of important value function, allowing the brain to work reliably.

How, exactly, Sutskever plans to recreate these concepts in computers, he isn't saying. But here's what he said: "It's a question I have a lot of opinions about. But unfortunately, we live in a world where not all machine learning ideas are discussed freely, and this is one of them. There's probably a way to do it. I think it can be done. The fact that people are like that, I think, is itself a proof that it can be done."