Semafor Technology
December 10, 2025

In this edition: why the US should be more focused on expanding AI domestically than slowing down China.

Tech Today
  1. Put a ring on it
  2. Campaigning off China
  3. Waymo’s trust-building exercise
  4. Who should rule AI?
  5. AI privacy in focus

What exporting Nvidia’s H200s means for the future of American AI, and an AI tool that could save cows from deadly respiratory disease.

First Word
If it ain’t broke, don’t fix it.

I’ve had many conversations this week with people who are angry about the Trump administration’s decision to send Nvidia H200s to China, which we scooped on Monday. If I had to sum their arguments up in a few words, it would be: If it ain’t broke, don’t fix it.

China is on the heels of the US in terms of AI capability, but US companies are gobbling up market share all over the world, building “AI factories” — as they like to call the massive compute clusters — to enable a new technology wave.

By withholding US chips from China, the US is forfeiting the opportunity for its technology to become the standard stack there. But nobody believes that’s even a remote possibility anyway.

Overall, things are going pretty well for American AI, and export controls have undoubtedly helped, even if you believe they pushed China to accelerate its inevitable technological independence.

It appears that part of President Donald Trump’s motivation to change the status quo is that he just doesn’t like the idea of US chip companies — which have been incredibly important for the country’s economy and national security — being held back from a lucrative market.

And people working for the president, as far as I can tell, genuinely believe there’s more upside than downside in sending H200s to China.

They wouldn’t say it this way, but I think the worst-case scenario is that the move doesn’t actually do much to keep Chinese companies on the Nvidia stack, and even helps China in some way. But that worst case is not catastrophic. The technology race with China is a long one, and it’s more about the US moving faster than about making China move slower.

But there is an array of thoughtful views on this — including from Ben Thompson at Stratechery, Tim Culpan at Culpium, Dmitri Alperovitch on ChinaTalk’s podcast, and The Information.

1

A new AI ring that is finally simple, clean hardware

 
Reed Albergotti
 
The Pebble Index ring.
Courtesy of Pebble

A lot of the silly AI hardware products that have launched and failed tried to do too much. Finally, someone has built a product that breaks new ground on the concept of simplicity. I haven’t tried the Pebble Index, a ring that can record audio snippets, but I want to. It costs $75 to preorder and $99 once it’s officially out, and you never have to charge it. When the battery runs out after “years,” you send it back to the company. That might sound annoying to some people, but I really don’t want another thing I have to charge. It’s a big reason I don’t wear an Apple Watch.

When you press the button on the ring, the audio is sent to the Pebble app, where local AI models can parse it and perform functions like setting a reminder on your to-do list. That feature alone might save my marriage. There’s no cloud involved, no subscription, and no personal messages going anywhere but your phone, so it doesn’t have the creepiness factor that “always on” AI listening devices suffer from.

Eric Migicovsky, who sold Pebble to Fitbit a decade ago and recently bought back the trademark, has opened the whole thing up so that new features can be hacked by tinkerers. There are a lot of possibilities.
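To make the flow concrete, here is a minimal sketch of what a tinkerer-built snippet-to-reminder feature might look like. The transcribe_snippet stub, the regex parsing, and the reminder format are all hypothetical stand-ins, not Pebble’s actual software.

```python
# Illustrative only: a local snippet-to-reminder flow in the spirit of the ring.
# transcribe_snippet() is a hypothetical stand-in for an on-device speech model;
# nothing here is Pebble's actual API.
import re
from datetime import datetime


def transcribe_snippet(audio_bytes: bytes) -> str:
    """Pretend on-device transcription; a real app would call a local model here."""
    return "remind me to renew the car registration tomorrow"


def parse_reminder(transcript: str) -> dict | None:
    """Pull a bare-bones reminder out of the transcript, entirely on-device."""
    match = re.search(r"remind me to (.+)", transcript, flags=re.IGNORECASE)
    if not match:
        return None
    return {"task": match.group(1), "captured_at": datetime.now().isoformat()}


if __name__ == "__main__":
    transcript = transcribe_snippet(b"\x00\x01")  # fake audio snippet
    reminder = parse_reminder(transcript)
    if reminder:
        print(f"Added to to-do list: {reminder['task']}")
```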

It’s a fair comment that this is a feature you could add to your phone. But if you’re like me, you just don’t. Apple’s walled garden makes it difficult to do much of anything without unlocking the phone and opening up another app. Sometimes, even that smallest amount of friction means your fleeting thought of adding a reminder is lost forever.

Semafor Exclusive
2

China’s AI threat overtakes politics

Josh Shapiro at a Semafor event.
Kris Tripplaar/Semafor

The global race to rapidly expand digital infrastructure, particularly against China, has become a focal point for some politicians heading into the midterms and the 2028 presidential election. When asked at Semafor’s Powering America’s Future event Tuesday how to contend with the unpopularity of data centers, Pennsylvania Gov. Josh Shapiro, a Democrat who has been floated as a presidential candidate, said: “AI is being developed. The real question is: Is it going to be developed here in the United States or in communist China? All of us should want it developed in the United States.”

The China threat is the same talking point Trump used in recent months to justify the rapid investment in and buildout of data centers. It suggests that arguments for slowed development — or outright NIMBYism — won’t be taken seriously by those at the top of politics, even as such stances gain traction with voters. “This is an issue that threatens our freedom if we allow China to overtake us on AI development,” Shapiro said, adding a caveat that the buildout should happen with environmental protections and public health in mind.

For more news and analysis on how America’s policymakers are dealing with AI, subscribe to Semafor’s DC briefing. →

3

Waymo’s revealing safety claim

Waymo cars lined up.
Peter DaSilva/Reuters

Waymo has tried to prove its autonomous drivers are statistically safer than human drivers, but a few recent deadly incidents involving pets have prompted it to attempt another trust-building strategy. In a blog post Tuesday, Waymo revealed a little more of its secret sauce for safety by describing its model architecture, including how it uses both fast- and slow-thinking AI to predict the behavior of “other road users” (ahem). It also detailed how various components of Waymo’s AI work together to make decisions it describes as “demonstrably safe.”
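As an illustration of the general fast/slow pattern (not Waymo’s published architecture), a cheap reflexive check can override a heavier planner whenever it spots an imminent hazard; every name and threshold below is hypothetical.

```python
# Illustration of a fast/slow control pattern, not Waymo's implementation.
# The fast path applies cheap, reflexive safety checks; the slow path is a
# stand-in for heavier behavior prediction and planning.
from dataclasses import dataclass


@dataclass
class Observation:
    nearest_obstacle_m: float   # distance to the closest detected road user
    planned_speed_mps: float    # speed the planner would like to hold


def fast_path(obs: Observation) -> str | None:
    """Reflexive check: brake immediately if something is too close."""
    if obs.nearest_obstacle_m < 5.0:
        return "brake"
    return None


def slow_path(obs: Observation) -> str:
    """Stand-in for richer prediction and planning over nearby road users."""
    return "proceed" if obs.planned_speed_mps <= 15.0 else "slow_down"


def decide(obs: Observation) -> str:
    # The fast path can always override the slower planner.
    return fast_path(obs) or slow_path(obs)


print(decide(Observation(nearest_obstacle_m=3.2, planned_speed_mps=12.0)))  # brake
```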

A Waymo spokesperson said the release wasn’t a response to bad press; instead, the company wanted to give the AI community a deep dive into how its technology differs from that of its competitors. The disclosure also serves to convince the public that safety is engineered into the core of Waymo’s product, not just its marketing.

A chart showing the number of crash injuries reported in cars per million miles driven.

Unlike other AI technologies, Waymo is in the business of both predicting the next best move for its user and anticipating the next move of everything surrounding it — an incredible technical feat. There are acceptable mistakes and unacceptable ones — Waymo has largely figured out how to tell them apart.

 Rachyl Jones

4

Trump turns up heat on Congress to regulate AI

President Donald Trump applauds at the “Winning the AI Race” Summit in Washington.
Kent Nishimura/Reuters

Trump’s tease of an executive order to ban state AI regulations is a clear message to Congress: if it won’t make policy, he will. And it’s a win for the tech companies that have lobbied for federal regulation in place of disparate state laws.

The AI moratorium divides Republicans — that’s why Trump didn’t get it in the defense bill — but the president’s unilateral move could prompt action. His stance is hugely conciliatory to tech leaders: a draft of the order threatened funding cuts to states with restrictive laws and would allow the Justice Department to sue states over their rules, according to Bloomberg. Meanwhile, “the reason why the states want to [legislate] is because Congress is slow,” Sen. Mike Rounds, R-S.D., told Semafor. State laws were always going to be supplanted by federal ones, and the pressure tactic is intended to get there faster, before companies spend time and resources complying with each state’s rules as they roll in.

What is likely to get lost here are safety concerns surrounding more ambiguous ways that AI is being applied, like insurance companies using it to process claims and chatbots offering mental health counseling — both can carry risks, but Congress is unlikely to reach consensus on these topics in the near term.

5

How private is your AI, really?

The chat is off the record. A handful of startups are betting that encrypted AI models will attract business customers skittish about the security risks of chats, even as large AI companies offer private models. One of those startups, NEAR AI, recently launched a platform that hosts popular open-weight frameworks and encrypts all prompts and responses, preventing model providers from viewing those chats, training on the material, turning them over in a court proceeding, or accidentally leaking them.

“Some people do not feel very comfortable connecting their email or giving computer access directly to the AI,” said Illia Polosukhin, NEAR co-founder and a co-author of Google’s foundational Attention Is All You Need paper. “We’re limiting how much we can use AI because of a lack of privacy.”

A chart showing Irish people’s concerns with privacy-related issues, based on a survey.

WhatsApp offers end-to-end encryption for some AI functions like writing assistance and message summaries. Apple’s AI runs on-device when tasks are small enough, and for larger computations, data processed through its servers isn’t stored or viewed by the company, it says. NEAR’s software encrypts a user’s prompt locally before it is sent, similar to how Signal or Threema work. Inside a secure environment, the Nvidia or Intel hardware that NEAR uses automatically decrypts the query, the model runs inference on it, and the response is sent back in encrypted form. It’s an interesting alternative that could attract users who want in on AI but don’t trust AI companies, as well as the blockchain and crypto firms that love all things encryption.
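A minimal sketch of that encrypt-before-inference flow, using the Python cryptography library’s Fernet as a stand-in for the hardware-backed key exchange a real confidential-computing setup would use; run_inference and the shared-key handling are hypothetical, not NEAR’s actual API.

```python
# Simplified sketch of encrypted inference: the client encrypts its prompt, only
# the trusted environment holding the key can decrypt it, run the model, and
# return an encrypted response. Real systems use hardware-backed attestation and
# key exchange rather than a pre-shared Fernet key (assumption for brevity).
from cryptography.fernet import Fernet

# Assume this key exists only on the client and inside the secure enclave.
shared_key = Fernet.generate_key()
client_box = Fernet(shared_key)
enclave_box = Fernet(shared_key)


def run_inference(prompt: str) -> str:
    """Hypothetical stand-in for the model call inside the secure environment."""
    return f"echo: {prompt}"


# Client side: encrypt the prompt before it ever leaves the device.
encrypted_prompt = client_box.encrypt(b"Summarize my quarterly numbers.")

# Inside the secure environment: decrypt, run the model, re-encrypt the answer.
plaintext_prompt = enclave_box.decrypt(encrypted_prompt).decode()
encrypted_response = enclave_box.encrypt(run_inference(plaintext_prompt).encode())

# Back on the client: only the holder of the key can read the response.
print(client_box.decrypt(encrypted_response).decode())
```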

But it’s a hard sell for mainstream companies when other, more familiar privacy options are available, like running smaller models locally. Ultimately, even encrypted AI requires user trust. And in NEAR’s case, users must still trust that Nvidia’s and Intel’s computing technology is designed to maintain confidentiality as advertised.

 Rachyl Jones

Artificial Flavor
A cow nuzzles its calf in a field near the village of Lukyanovka, Ukraine.
Ilya Naymushin/Reuters

Moo-ving in the right direction. University researchers are collaborating to build an AI-powered system intended to detect early signs of respiratory disease in cows, a leading cause of death that costs the cattle industry more than $1 billion per year. The system — which will be developed by Pennsylvania State University, the University of Kentucky, and the University of Delaware — consists of wearable sensors and robotic smart feeders that will observe how the cows breathe and eat and track their activity levels, Penn State said in a release. Deep learning AI will then track how those behaviors interact and determine which changes could help detect the disease early.
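Purely as an illustration of the idea (the researchers’ actual system will rely on deep learning, and these thresholds and field names are made up), combining a few sensor signals into an early-warning flag might look something like this:

```python
# Purely illustrative: a simple multi-signal early-warning check over the kinds
# of readings wearables and smart feeders could collect. Thresholds and field
# names are invented; this is not the researchers' method.
from dataclasses import dataclass


@dataclass
class DailyReading:
    breaths_per_min: float
    feed_intake_kg: float
    activity_index: float  # arbitrary 0-1 movement score


def flag_for_checkup(baseline: DailyReading, today: DailyReading) -> bool:
    """Flag a cow whose breathing rises while eating or movement drops."""
    breathing_up = today.breaths_per_min > baseline.breaths_per_min * 1.2
    eating_down = today.feed_intake_kg < baseline.feed_intake_kg * 0.8
    moving_less = today.activity_index < baseline.activity_index * 0.8
    return breathing_up and (eating_down or moving_less)


baseline = DailyReading(breaths_per_min=30, feed_intake_kg=22, activity_index=0.6)
today = DailyReading(breaths_per_min=40, feed_intake_kg=16, activity_index=0.5)
print(flag_for_checkup(baseline, today))  # True -> worth an early vet check
```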

Funded by the National Science Foundation, the project is an ambitious example of how AI, wearables, and robotics could be applied to improve an essential industry like farming. The researchers will need buy-in from farmers, who are increasingly tech-savvy when it comes to protecting their herd.

Semafor Spotlight

The News: The real estate developer brought more experience in corporate boardrooms than formal diplomacy to his role, but he’s got one big advantage: a connection to Trump. →
