THE SCENE

Jaan Tallinn used the fortune he made selling Skype in 2009 to invest in AI companies like Anthropic and DeepMind, not because he was excited about the future of artificial intelligence, but because he believed the technology was a threat. By funneling more than $100 million into more than 100 startups, the billionaire hoped he could steer its development toward human safety.

“My philosophy has been that I want to displace money that doesn’t care,” he said in an interview, describing a strategy he now believes was doomed. “Plan A failed. There is a dissonance between privately being concerned and then publicly trying to avoid any steps that would address the issue.”

Tallinn, a 51-year-old computer programmer who lives in Tallinn, Estonia, said in the interview, conducted via Skype, that he was disappointed that Anthropic and other AI labs he has funded didn’t sign a recent open letter imploring the artificial intelligence industry to take a six-month pause on new research. The letter was organized by the Future of Life Institute, which Tallinn co-founded, and included prominent signatories like Elon Musk. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” it said.

Anthropic co-founder Jack Clark said the company, which recently received a $300 million investment from Google, does not sign petitions as a matter of policy. “We think it’s helpful that people are beginning to debate different approaches to increasing the safety of AI development and deployment,” the company said in a statement.

Still, of all the firms at the forefront of AI development, Tallinn believes Anthropic is the most safety-conscious, creating breakthrough guardrails such as “Constitutional AI,” which constrains AI models with strict operating instructions. Tallinn said Anthropic could have released its chatbot, Claude, much earlier but decided to wait to address safety concerns. Anthropic has also supported the idea of government oversight of the AI industry.

But it and other major players like OpenAI are advancing the technology so quickly that Tallinn believes even conscientious companies have lost the ability to keep AI from spiraling out of control.

Read more on his thoughts here.

REED’S VIEW

It’s hardly a guarantee that the large language models being developed by OpenAI, Anthropic, and others will lead to world-killing superintelligence. But there’s pretty good evidence that, even if that were the case, we’d be unable to stop it with any kind of government regulation. Generating killer robots may, in the near future, require nothing more than a laptop. How are you going to stop that?

AI would be easier to control if the U.S. government were at the forefront of its development. To do that, Uncle Sam would need to hire top AI talent to work at national labs. Those are muscles the government lost when the Cold War ended, but there’s no reason it couldn’t get them back.

The golden age of more responsible technological innovation in the U.S. came after World War II, when very smart people in government worked hand in hand with very smart people in the private sector. It lasted for half a century. That’s a secret sauce worth recreating, and because the government does not have a profit motive, it might be more effective at controlling AI than politicians trying to pass laws.
ROOM FOR DISAGREEMENT

Alondra Nelson, who helped author the Blueprint for an AI Bill of Rights, argues here that there is much the government has already done, and could do in the future, to address the risks associated with AI: “It will require asking for real accountability from companies, including transparency into data collection methods, training data, and model parameters. And it will also require more public participation — not simply as consumers being brought in as experimental subjects for new generative AI releases but, rather, creating pathways for meaningful engagement in the development process prerelease,” she wrote.

NOTABLE

- In February 2020, tensions at OpenAI were spilling out into the open, as this profile details.