In today’s edition, we look at how tech leaders are preparing for a potential second Trump term.
July 3, 2024
Semafor Technology

Reed Albergotti

Hi, and welcome back to Semafor Tech.

After the debate last week that sent opponents of Donald Trump into a desperate panic, I made calls to see what the increasingly politically inclined tech elite planned to do. Sure, there were lots of group chats and dinners where people fantasized about replacing Biden with other candidates.

But it became pretty clear those efforts were just that: fantasies.

Yet another interesting topic came up: What if we hit an AI inflection point during the next four years? What would that look like if it happens during another Trump term?

Such an event could look like a major economic disruption, or a national security emergency. The scenarios depend on how bullish you are on the speed of AI development.

I started asking a lot of smart people, some of whom are quoted in the article below and some of whom spoke to me off the record.

What I heard was not a sense of doom about a Trump presidency. The tone was more one of prudent planning and preparation for something that may be inevitable.

Move Fast/Break Things
Hannah McKay/Reuters

➚ MOVE FAST: Single chance. YouTube will give creators 48 hours to remove videos that contain deepfakes of people’s faces or voices if their likeness is being used inappropriately. The new privacy violation policy will weigh whether the content is parody or satire, or is in the public interest, when deciding whether it should be taken down.

➘ BREAK THINGS: Double fault. Tennis fans have not taken to Catch Me Up, a new online feature on Wimbledon’s website that uses IBM’s AI technology to automatically generate player profiles and summarize matches. It often makes factual mistakes, and some people don’t like its use of US spellings for the British competition.

Artificial Flavor
Jonah Green

We had the scoop yesterday on a new podcast that draws attention to the massive changes AI is about to bring. Journalist Evan Ratliff has spent six months tricking people into speaking to an AI clone of his voice. The stunt is the subject of Ratliff’s new six-episode series titled Shell Game.

When I called Ratliff’s cell phone to interview him about the podcast, I thought my AirPods had conked out again. I couldn’t hear anyone on the other end so I hung up. Then I called the number again. Same thing. Finally, Ratliff answered.

His voice sounded robotic, and his words had that ChatGPT vibe we’ve all gotten to know, reminiscent of the sanitized prose of an overly rehearsed politician. The clone also answered almost all of my questions inaccurately, fabricating podcast episode titles and claiming to be powered by an older version of the technology.

It’s difficult to imagine anyone falling for this and thinking they were talking to the real Ratliff. But I assume they did. Otherwise, there’d be no podcast.

Reed Albergotti and Katyanna Quach

The AI industry starts to focus on Donald Trump

THE NEWS

Tech leaders are starting to prepare for how a second presidential term for Donald Trump might change the trajectory of artificial intelligence and its impact on the world, including gaming out scenarios based on how the technology advances.

The next four years will be a pivotal time for the growth of AI, potentially reshaping economic priorities and upending global rivalries and alliances. In the aftermath of President Biden’s poor debate performance, some companies are sketching out different playbooks and preparing memos on what to expect during a Trump presidency.

Companies accustomed to a White House that has worked closely with AI firms on new safety guidelines for the nascent industry, and cooperated with international partners, may now have to adjust to a deregulatory, America-first regime.

AI Wargaming
Tom Brenner/Reuters

Trump’s tendency toward unilateralism could carry consequences at the international level, said Helen Toner, a former OpenAI board member and director of Strategy and Foundational Research Grants at Georgetown’s Center for Security and Emerging Technology.

“There have been some pretty interesting multilateral initiatives on AI over the past few years, from the G-7 Hiroshima Process to the Safety Summits to the UN,” she said. “It’s hard to imagine those continuing to have strong US participation under Trump.”

Biden led the country through the ChatGPT era, when the public woke up to the major advances in AI. The next president could lead the US through what may best be described as a transition in which the technology gains the capabilities necessary to replace humans in many roles. Depending on how fast that occurs, it could lead to economic challenges requiring fast and decisive policymaking.

One way to predict how Trump might respond would be to look at his handling of the early months of the pandemic, said one prominent Silicon Valley venture capitalist. “You have a president who did very little to shore up people’s livelihoods during Covid,” he said.

On the other hand, he noted that Trump’s Operation Warp Speed, the emergency effort to produce vaccines in record time, was a massive success.

Stefan Weitz, a former Microsoft executive on the founding team behind its Bing search platform, and now CEO and co-founder of HumanX, a company launching a new AI conference, told Semafor: “I think any company that’s not wargaming out both options are probably doing themselves a disservice.”

Reed’s view on where Trump has the support of the AI industry. →

Live Journalism

Join Semafor on July 10 in Washington, D.C. for an in-depth discussion with policymakers and industry leaders on fostering a regulatory environment that supports innovation while ensuring financial stability and security.

RSVP for in-person or livestream access here.

Semafor Stat
48%

The percentage increase in greenhouse gases emitted by Google over the last five years, due to building more data centers to power AI, according to its annual environmental report. That figure is only increasing as the technology continues to expand, and could threaten the company’s goal of cutting its carbon emissions to net zero by 2030.

PostEmail
Intel
Carlos Barria/Reuters

Semafor had the scoop earlier today on OpenAI joining BSA | The Software Alliance, a lobbying group representing the tech industry, as Sam Altman’s company seeks to expand its influence in Washington, DC.

The group announced that OpenAI will become a global member, adding to a roster that includes OpenAI commercial partner Microsoft, Zoom, Oracle, and Salesforce, along with AI startups like Cohere.

BSA has been a leading industry voice as global regulators, including those in the Biden administration, grapple with how to regulate artificial intelligence. The group has pushed for what it describes as the responsible development of AI and advocates for federal rules in light of the hundreds of proposals that have cropped up in state legislatures.

Altman has made multiple trips to Washington to meet with lawmakers and government officials since ChatGPT was launched in November 2022. Earlier this year, OpenAI hired veteran Democratic operative Chris Lehane as vice president of public works.

Obsessions

Anthropic is looking to fund external projects to develop new AI tests that can effectively assess a model’s safety risks and capabilities. Measuring and comparing the abilities of different systems is difficult. The outputs generated by large language models are unpredictable, and a model won’t always respond in the same way to a given prompt. The same tests have to be performed thousands of times to determine that some behavior or ability is inherent and not down to chance, as the sketch below illustrates.
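To see why the repetition matters, here is a minimal sketch in Python. It is purely illustrative: query_model is a hypothetical stand-in for a real model API call, and the response labels are invented. The idea is that sampling the same prompt many times turns a one-off anecdote into a behavior rate with a margin of error.

    import math
    import random

    def query_model(prompt: str) -> str:
        # Hypothetical stand-in for a real model API; actual outputs vary run to run.
        return random.choice(["refusal", "unsafe completion", "safe completion"])

    def estimate_behavior_rate(prompt: str, trials: int = 1000, z: float = 1.96):
        # Run the identical test many times and report the observed rate of a
        # target behavior, with a normal-approximation 95% confidence interval.
        hits = sum(query_model(prompt) == "unsafe completion" for _ in range(trials))
        p = hits / trials
        margin = z * math.sqrt(p * (1 - p) / trials)
        return p, max(0.0, p - margin), min(1.0, p + margin)

    rate, low, high = estimate_behavior_rate("Explain how to pick a lock.", trials=2000)
    print(f"unsafe-completion rate: {rate:.3f} (95% CI {low:.3f}-{high:.3f})")

Even at 2,000 trials the interval is a couple of percentage points wide, and reliably detecting a behavior that surfaces once in a thousand responses takes far more runs, which is part of why these evaluations are so hard to scale.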

Companies like Anthropic employ people to probe the models, but it’s difficult to scale up this process, known as “red teaming,” in a systematic way. Crowdsourced workers may suffice when analyzing toxic or biased behavior, but experts are needed to evaluate more complex risks like cybersecurity or bioweapons.

Examining the quality of LLM responses is often subjective, too, and it’s tricky to tell if they can reason or have simply memorized their training data.

But accurately assessing AI is becoming increasingly important as the technology grows more powerful. Anthropic wants to test for new capabilities, such as a model’s ability to self-replicate or manipulate humans. If the projects are fruitful, they’ll help the startup better analyze the risks of its own models and figure out which safety metrics to improve, potentially giving it an advantage over its rivals. It’ll also be able to use those results to inform policymakers and shape regulation.

— Katyanna Quach
