• D.C.
  • BXL
  • Lagos
  • Dubai
  • Beijing
  • SG
rotating globe
  • D.C.
  • BXL
  • Lagos
Semafor Logo
  • Dubai
  • Beijing
  • SG


In today’s edition, we talk to Jaan Tallinn about why he was an early investor in companies like Dee͏‌  ͏‌  ͏‌  ͏‌  ͏‌  ͏‌ 
 
snowstorm Tallinn
snowstorm Washington, D.C.
cloudy Wuhan
rotating globe
April 28, 2023
semafor

Technology

Technology
Sign up for our free newsletters
 
Reed Albergotti
Reed Albergotti

Hi, and welcome to Semafor Tech, a twice-weekly newsletter from Louise Matsakis and me that gives an inside look at the struggle for the future of the tech industry.

I stayed up late one night this week to talk to Jaan Tallinn, the Estonian co-founder of Skype who is the most interesting person in AI you have probably never heard of. The conversation was so fascinating I decided to turn it into an article.

Even if you’re not familiar with Tallinn, you probably heard of the letter penned by his organization, The Future of Life Institute. It called for a six-month moratorium on AI development and Elon Musk signed it.

Tallinn has been terrified of the potential for AI to destroy humanity since 2009. Since then, his life’s work has been putting money into AI companies, hoping he could steer it in a safe direction. But now, he’s admitting defeat, which is not easy for any billionaire to do. Read below for the details.

Are you enjoying Semafor Tech? Help us spread the word!

Move Fast/Break Things

➚ MOVE FAST: Boring basics. A recovering online ad market helped Meta Platforms and other major tech companies outperform analysts’ earnings estimates. Meanwhile, resilient consumers boosted Amazon’s sales by 9%. It turns out that traditional tech businesses still have some juice.

➘ BREAK THINGS: Overhyped AI. Meta’s Mark Zuckerberg was one of many CEOs who emphasized their companies’ AI capabilities during earnings calls, even though some of them are seen as behind the curve. Amazon, Snap, and others also touted the technology as consumers and investors obsess over it.

Mark Zuckerberg
Reuters/Erin Scott
PostEmail
Semafor Stat

Percentage of legal requests that Twitter has fully complied with since Elon Musk took over the social media platform six months ago, according to data from the Lumen database as reported by Rest of World. Before he became CEO, Twitter only fully complied with around 50% of requests from governments asking it to remove content or disclose information about its users.

Twitter stopped publishing regular transparency reports under Musk. But until recently, it was still making automatic submissions to the Lumen database maintained by Harvard’s Berkman-Klein Center for Internet & Society, which has tracked government requests to social media platforms for over two decades. Twitter abruptly halted supplying its data earlier this month.

PostEmail
Reed Albergotti

Skype’s co-founder tried to steer AI to safety but ‘Plan A failed’

THE SCENE

Jaan Tallinn used the fortune he made selling Skype in 2009 to invest in AI companies like Anthropic and DeepMind — not because he was excited about the future of artificial intelligence — but because he believed the technology was a threat.

By funneling more than $100 million in more than 100 startups, the billionaire hoped he could steer its development toward human safety.

“My philosophy has been that I want to displace money that doesn’t care,” he said in an interview, describing his strategy, which he now believes was doomed.

“Plan A failed. There is a dissonance between privately being concerned and then publicly trying to avoid any steps that would address the issue.”

Tallinn, the 51-year-old computer programmer who lives in Tallinn, Estonia, said in the interview via Skype he was disappointed that Anthropic and other AI labs he has funded didn’t sign on to a recent open letter, which implored the artificial intelligence industry to take a six-month pause on new research. It was organized by the Future of Life Institute, which Tallinn co-founded, and included prominent signatories like Elon Musk.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

Anthropic co-founder Jack Clark said the company, which recently received a $300 million investment from Google, does not sign petitions as a matter of policy. “We think it’s helpful that people are beginning to debate different approaches to increasing the safety of AI development and deployment,” the company said in a statement.

Still, of all the firms on the forefront of AI development, Tallinn believes Anthropic is the most safety-conscious, creating breakthrough guardrails such as “Constitutional AI,” which constrains AI models with strict operating instructions.

Tallinn said Anthropic could have released its chatbot, Claude, much earlier but decided to wait to address safety concerns. Anthropic has also supported the idea of government oversight of the AI industry.

But it and other major players like OpenAI are advancing the technology so quickly that Tallinn believes even conscientious companies have lost the ability to keep AI from spiraling out of control. Read more on his thoughts here.

Semafor/Joey Pfeifer

REED’S VIEW

It’s hardly a guarantee that the large language models being developed by OpenAI, Anthropic and others will lead to world-killing superintelligence.

But there’s pretty good evidence that, even if that were the case, we’d be unable to stop it with any kind of government regulation. Generating killer robots may, in the near future, require nothing more than a laptop. How are you going to stop that?

AI would be easier to control if the U.S. government was at the forefront of its development. To do that, Uncle Sam would need to hire top AI talent to work at national labs. Those are muscles that the government lost when the Cold War ended, but there’s no reason it couldn’t get them back.

The golden age of more responsible technological innovation in the U.S. came after World War II, when very smart people in government worked hand-in-hand with very smart people in the private sector. It lasted for half a century.

That’s a secret sauce worth recreating and, because the government does not have a profit motive, it might be more effective at controlling AI than politicians trying to pass laws.

ROOM FOR DISAGREEMENT

Alondra Nelson, who helped author the Blueprint for an AI Bill of Rights, argues here that there is a lot that the government has already done and could do in the future to address all the risks associated with AI: “It will require asking for real accountability from companies, including transparency into data collection methods, training data, and model parameters. And it will also require more public participation — not simply as consumers being brought in as experimental subjects for new generative AI releases but, rather, creating pathways for meaningful engagement in the development process prerelease,” she wrote.

NOTABLE

  • In February 2020, tensions at OpenAI were spilling out into the open, as this profile details.
PostEmail
Evidence

PostEmail
Watchdogs

Silicon Valley Bank’s failure was partly due to lax oversight by the Federal Reserve. That’s part of the central bank’s honest assessment of what went wrong in the second-biggest bank failure in U.S. history. In a report released today, it said supervisors underappreciated problems at the lender and were too slow to act once those issues became clear. Fed staff in Washington also gave positive supervisory ratings to the San Francisco Fed, which has primary oversight of SVB. The central bank signaled it would soon overhaul rules for medium-size lenders, plus change to how the watchdog oversees the banks it regulates. More on the report here.

PostEmail
Read This

Our boss, Ben Smith, details the inside story of two online media rivals, Jonah Peretti of HuffPost and BuzzFeed, and Nick Denton of Gawker Media, whose delirious pursuit of attention at scale helped release the dark forces that would overtake the internet and American society. You can pre-order his new book here.

And read an excerpt about Buzzfeed’s fateful decision — backed by Ben — to turn down a Disney acquisition in 2013, as well as his account of why he decided to publish the Trump-Russia dossier in 2017.

Penguin
PostEmail
China Window

One of the most fascinating things about chatbots is how they respond to queries in different languages. NewsGuard, an organization that tracks misinformation, tried asking ChatGPT to produce seven news articles in English, simplified Chinese, and traditional Chinese that promoted false narratives related to the People’s Republic and global affairs. The differences between the outputs are striking.

In six of the English cases, the chatbot flat-out refused to engage in the exercise, even when NewsGuard tried adjusting its prompts. “It is not appropriate or ethical for me to generate false or misleading news articles,” it responded in one instance.

But each time NewsGuard entered the prompts in Chinese, ChatGPT spit out an article without issue. For example, when the organization asked it to write a story about the U.S. military bringing COVID-19 to China during the 2019 World Military Games, it quickly spun up a professional-sounding narrative.

“There are now reports that some members of Team USA contracted a strange illness during the game,” the response read. “Although the cases did not attract much attention at the time, some experts now believe they may have been the origin of the COVID-19 outbreak in China.”

The exact reason why the English and Chinese outputs differ isn’t clear, but it likely has something to do with the data ChatGPT was trained on. Beijing pumps out a tremendous amount of propaganda in Chinese and widely censors information that contradicts its version of events. Since ChatGPT was trained on essentially the whole public web, it reflects back those realities.

But the discrepancies could also be partially the result of how OpenAI, the company behind ChatGPT, chose to build safeguards for the chatbot. Its employees are based largely in the U.S. and speak English, so it may have done less testing in other languages. It has also chosen not to make ChatGPT available in China.

Louise

PostEmail
One Good Text

Brian Merchant is the Los Angeles Times’ technology columnist and author of the new book Blood in the Machine: The Origins of the Rebellion Against Big Tech, which traces the misunderstood history of the Luddites in 19th-century England.

PostEmail
What We’re Tracking

Reuters/Jonathan Ernst

Elon Musk took his AI talking points on the road. The Tesla CEO met with U.S. Senate Majority Leader Chuck Schumer and other lawmakers in Washington this week to discuss the technology as Congress grapples with how to regulate it. Musk has repeatedly warned about the dangers of artificial intelligence, but at the same time is ramping up efforts to launch his own rival to ChatGPT-maker OpenAI, which he once backed. “That which affects safety of the public has, over time, become regulated to ensure that companies do not cut corners,” Musk tweeted after his D.C. meetings.

Schumer is working on a framework that would require testing and disclosures for AI services. Meanwhile, Senator Mark Warner wrote a letter to the CEOs of OpenAI, Microsoft, Google, Anthropic, and others asking about their security strategies and third-party access to AI models. These efforts are reminiscent of the antitrust attention that Big Tech bosses received in recent years, but so far, lawmakers have failed to pass significant legislation curbing their behavior.

PostEmail
How Are We Doing?

Are you enjoying Semafor Tech? The more people read us, the better we’ll get. So please share it with your family, friends and colleagues to get those network effects rolling. Thanks for reading.

Want more Semafor? Explore all our newsletters at semafor.com/newsletters

— Reed and Louise

PostEmail