Intelligence for the New World Economy

October 29, 2025

Technology

Tech Today
  1. Nvidia defends China talent
  2. AI agents get a DC voice
  3. HUMAIN’s AI playbook
  4. OpenAI breaks free
  5. Inside AWS’ Project Rainier
  6. Grokipedia’s misfires

Why autonomous weapons should be taken seriously, and Pat Gelsinger’s AI-ordained efforts to convert Silicon Valley to Christianity.

First Word
How far is too far?

Is it time for society to start having a serious discussion about autonomous weapons? With Presidents Donald Trump and Xi Jinping set to meet on Thursday, and Taiwan sure to be a topic of discussion, it’s clear that the US and China are both investing serious resources into building drones, missiles, and other munitions capable of finding, following, and destroying targets — and killing humans.

I’ve been thinking about this since talking to John Dulin, co-founder of defense tech startup Modern Intelligence, who persuaded me that this debate is stuck in the past.

I pointed out to Dulin that my Chinese-made robot vacuum cleaner has practically enough technology in it to be used in an autonomous weapon (though not a very good one). Militaries have gone much further. The remaining question is where we place the guardrails — and at what point we give software the power to kill.

Most people’s reflexive answer to that question is “never,” and conventional wisdom holds that killer systems require a “human in the loop” — an Air Force officer or civilian, in a building in Virginia or Kyiv, pushing the fatal button.

The problem is that autonomous weapons exist on a continuum. For instance: Drones can be flown remotely by a human and then switched to autonomous mode once a target is identified, in order to evade jamming.

Going from today’s level of autonomy to scenarios where the AI finds the target on its own is mainly a software challenge, and one that is being tackled for civilian use in the private sector.

The hypotheticals are complicated: What about attacks moving too fast for humans to respond? What if an adversary severs the connection between the human and the weapon? Hard questions like the standard for accuracy are worth a public conversation. Deploying a weapon with a 99.9999% chance of hitting the intended target is a lot different than one with a 90% chance: Across 10,000 strikes, that is the difference between roughly 1,000 errant hits and a fraction of one. Where do you draw the line?

1

Nvidia’s CEO jumps in talent war debate

While the tug of war between the US and China over chips has been well covered, Nvidia CEO Jensen Huang pointed to another group caught in the crossfire that has received less attention in Washington: Chinese tech talent. At the company’s annual developers confab, held in the US capital for the first time this week, Huang mentioned several times that half of the world’s AI researchers come from China, which used to be a key recruitment pool for Silicon Valley.

A chart showing the countries of origin of top AI researchers at US institutions.

“Is it possible that the United States falls behind China? The answer is absolutely yes,” Huang said, citing the statistic as the reason why. “It is extremely important that the United States continue to be the country by which immigrants like myself want to come here to do our education, to stay and build our career and build our life.” It’s a view Huang has expressed before, but it’s one of the few areas where he differs with the White House. And it seemed like a pointed remark as he sat on stage next to Energy Secretary Chris Wright, who stressed that the Trump administration’s problems are with the Chinese government, not the Chinese people.

As Nvidia’s CEO prepared to travel to South Korea, where Trump and Xi were scheduled to meet Thursday and possibly reach a trade deal, there was no easy solution in sight for attracting global talent, with parochial rhetoric rising in the US and other countries boosting incentives to keep their brightest brains at home. The number of Chinese students enrolled at US universities has fallen by more than 25% from its peak in the 2019-2020 school year, and reversing that trend will take more than just scrapping an export control blacklist.

Semafor Exclusive
2

AI agents get new lobby group

Reed Albergotti (L) and Imbue’s Matt Boulos. Semafor/YouTube.

A new AI industry group that includes startups like Anthropic and established players like Intuit aims to educate lawmakers about one of the AI world’s buzziest topics: agents.

Despite 2025 earning the nickname “the year of AI agents,” members of the newly formed Agentic Futures Initiative believe lawmakers and policy officials need to better understand the technology to ensure new products remain interoperable, secure, and private. How that plays out will have major ramifications for how open the ecosystem is to competitors and upstarts.

“We just saw it as a massive void,” Ryan Dattilo, a partner at Aquia Group, the lobbying firm organizing the effort, told Semafor. “We need to bring everyone together in a more organized fashion.”

Agentic AI essentially describes AI that can take autonomous action on a user’s behalf, but it’s often not so simple. In order for an AI agent to work, it needs to interface with different software. For instance, a simple task like booking an airline ticket could mean navigating a web browser and accessing multiple APIs. All of those interconnections create security and privacy vulnerabilities. And they also create opportunities for companies to erect anticompetitive “walled gardens,” such as app stores and operating systems that can lock in users and keep out competitors.
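
To make that fan-out concrete, here is a minimal, hypothetical sketch of what a single “book a flight” request can imply for an agent. The services, URLs, and field names below are invented for illustration and do not describe any product mentioned in this story; the point is simply how many separate systems, and credentials, one routine task touches.

```python
# Hypothetical sketch only: the services and URLs below are invented to
# illustrate how one agent task fans out across several systems, each with
# its own credentials and data exposure.
import requests

def book_flight(origin: str, destination: str, date: str, payment_token: str) -> dict:
    # 1. Query an airline search API for candidate flights.
    flights = requests.get(
        "https://airline.example.com/v1/flights",
        params={"from": origin, "to": destination, "date": date},
        timeout=10,
    ).json()

    # 2. Check a separate calendar service so the agent doesn't double-book.
    busy_slots = requests.get(
        "https://calendar.example.com/v1/busy",
        params={"date": date},
        timeout=10,
    ).json()
    options = [f for f in flights if f["departure"] not in busy_slots]

    # 3. Charge the user through yet another service, a payments API that now
    #    also holds a credential the agent can act on.
    return requests.post(
        "https://payments.example.com/v1/charge",
        json={"flight_id": options[0]["id"], "token": payment_token},
        timeout=10,
    ).json()
```

Each of those hops is both a potential leak point and a place where a platform owner can decide which agents get access, which is the interoperability question the new group wants lawmakers to understand.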

Matt Boulos, head of policy and safety for Imbue, a founding member of the organization, hopes to see agentic AI develop common protocols, much like the early web, that enable anyone — from tiny startups to massive tech companies — to innovate and add value to the ecosystem.

Listen to or watch the interview with Boulos on our YouTube channel.

3

HUMAIN takes aim at G42

The scale of Saudi Arabia’s AI ambitions came into sharp focus at the Future Investment Initiative in Riyadh this week. Public Investment Fund-backed AI company HUMAIN, which is less than six months old, is striking deals with some of the biggest names in energy, finance, and technology — part of a sweeping plan to position the kingdom as the third-largest AI infrastructure provider after the US and China.

A chart showing the estimated share of GDP from AI in 2030 for select countries.

Launched seven years behind Abu Dhabi’s AI holding company G42, HUMAIN is racing to catch up — announcing in months what took G42 years to do, buoyed by the booming AI ecosystem. The speed also captures the rivalry between the two Gulf countries, each with abundant energy and capital to dedicate to becoming the region’s AI kingpin.

Aramco will become a “significant” minority shareholder in HUMAIN, and the world’s biggest oil exporter will contribute its “AI assets, capabilities and talent into HUMAIN,” the company said in a statement. The PIF company also signed a $3 billion agreement with AirTrunk, backed by Blackstone, to build a large-scale data center campus in Saudi Arabia. And California’s Qualcomm, which plans to manufacture an AI chip to rival market-leader Nvidia, chose HUMAIN as its first customer.

For more details on the deals and takeaways from the conference, sign up for Semafor Gulf. →

4

OpenAI’s long haul to restructuring

Sam Altman.
Ken Cedeno/Reuters

OpenAI finally got Microsoft (and California’s attorney general) to go along with its restructuring. Now Microsoft really does own a big chunk of the company, instead of just holding a revenue-sharing agreement. And OpenAI can go out and sell equity in a company that could actually one day go public.

This whole thing, which stems from OpenAI’s roots as a nonprofit research organization, has been awkward and a huge headache for OpenAI. And there are a lot of interesting wrinkles, like a vaguely defined committee that will decide whether OpenAI has developed a vaguely defined concept of “AGI.”

Before this transition, those who held “equity” in OpenAI were capped on how much they could profit from the company. Now that upside is endless.

Semafor Exclusive
5

The power behind AWS’ Project Rainier

A night view of AWS’ Project Rainier data center. Courtesy of AWS.

Anthropic’s Claude AI model is now running on 1 million of Amazon’s custom Trainium 2 AI processors, the company said Wednesday. Those processors are part of Amazon Web Services’ massive Project Rainier AI data center. Unlike most massive AI clusters, which are in one building or close together, Rainier is spread out across three states — Pennsylvania, Indiana, and Mississippi — according to a person familiar with the matter. The location in Pennsylvania has not previously been reported. The majority of chips are used for inference, and Anthropic uses a portion of the chips in evening hours for training runs. Those runs are paused during the day, when inference workloads increase.

It’s difficult to compare Rainier with other massive compute clusters like xAI’s Colossus because they use different processors with varying strengths and weaknesses. Rainier may nonetheless be the world’s most powerful AI cluster, and it is an engineering feat either way, requiring roughly a gigawatt of power and delivering a potential 1,300 exaflops of compute (an exaflop is a measure of a supercomputer’s speed), according to Amazon distinguished engineer Ron Diamant, who spoke with Semafor earlier this week.
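
As a rough sanity check, dividing the cited totals through gives the implied per-chip numbers. This is a back-of-the-envelope sketch based only on the figures reported above, not an AWS specification.

```python
# Back-of-the-envelope check using only the figures reported above;
# these are implied estimates, not official AWS specifications.
chips = 1_000_000              # reported Trainium 2 processors in Project Rainier
aggregate_exaflops = 1_300     # reported potential compute for the cluster
cluster_watts = 1_000_000_000  # roughly a gigawatt of power

petaflops_per_chip = aggregate_exaflops * 1_000 / chips  # 1 exaflop = 1,000 petaflops
watts_per_chip = cluster_watts / chips

print(f"Implied throughput per chip: {petaflops_per_chip:.1f} petaflops")
print(f"Implied power budget per chip (incl. facility overhead): {watts_per_chip:.0f} W")
# -> about 1.3 petaflops per chip, roughly in line with publicly cited
#    low-precision (FP8) Trainium 2 figures, and about 1,000 W of facility
#    power per chip.
```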

6

Musk attempts ‘anti-woke’ encyclopedia

Elon Musk.
Daniel Cole/Reuters

Elon Musk this week launched his “anti-woke” version of Wikipedia, called Grokipedia, which he promised would be a “massive improvement” over the incumbent. It comes after years of criticism from Musk that Wikipedia is part of the “woke mind virus,” unfairly conveying liberal viewpoints from legacy media sites. The new online encyclopedia, which suffered technical difficulties upon launch, uses xAI’s Grok large language model to pull information and supposedly prioritize objectivity. For example, Wikipedia’s introduction to its page on George Floyd focuses on his murder and the protests that followed. Grokipedia’s, meanwhile, is framed around Floyd’s criminal record and “riots causing billions in property damage.”

Other entries, as The Verge pointed out, are nearly identical to their Wikipedia counterparts, with some explicitly stating the content is adapted from the site under a Creative Commons license — raising questions about how much of Wikipedia’s existing framework informed the creation of Grokipedia.

It also doesn’t eradicate legacy media citations, one of Musk’s key criticisms of Wikipedia: Grokipedia’s page about Semafor includes three references to The New York Times, which Musk has often railed against as left-leaning. While the page about Semafor is largely accurate, it includes a handful of typos, lists two founding dates, and mischaracterizes some employees’ positions.

Still, searches for “Grokipedia” were trending on Google Tuesday, with more than 100,000 searches since its launch, according to Google Trends.

Plug

More than 200,000 execs, founders, and innovators rely on Mindstream to cut through the AI noise. They deliver powerful insights, real-world applications, and breakthrough updates — all in a quick, daily read. No fluff. No cost. Just your unfair advantage in the AI era. Don’t fall behind — read Mindstream.

Artificial Flavor
Pat Gelsinger.
Ann Wang/Reuters

The Christianization of Silicon Valley. Former Intel CEO Patrick Gelsinger has worked most of this year as executive chairman and head of technology at Gloo, which is building chatbots and AI assistants for religious organizations. The initiative is part of a larger pursuit by Gelsinger to inject Christian values into Silicon Valley and Washington, DC: “My life mission has been [to] work on a piece of technology that would improve the quality of life of every human on the planet and hasten the coming of Christ’s return,” he told The Guardian.

With $164 million in total funding, Gloo develops AI products helping faith-based organizations connect with their congregations, reach potential new members, and automate administrative work. According to its website, it serves more than 300,000 churches, and 150,000 parachurch ministries and faith-based nonprofits. The technology has gained traction in conservative political circles, with some lawmakers expressing interest in using the company’s products at their churches, The Guardian reported. Its popularity comes alongside a rightward shift in Silicon Valley and the White House’s embrace of Christian nationalism.
