November 20, 2024

Semafor Technology
 
Reed Albergotti

Hi, and welcome back to Semafor Tech.

It’s looking more and more likely that the US will build a “Manhattan Project” for AI, an idea we floated several times in this newsletter well before the election. Whether it happens will largely depend on the incoming administration.

The latest evidence: In a report published Tuesday, the United States–China Economic and Security Review Commission, a congressional advisory body, recommended lawmakers create a “Manhattan Project-like program” aimed at “racing to and acquiring an Artificial General Intelligence (AGI) capability.”

This is, essentially, a suggestion. But it’s not the only example. Over the summer, the America First Policy Institute, headed by former Trump administration official Larry Kudlow, floated the idea, and similar concepts have popped up within Project 2025. And former OpenAI employee Leopold Aschenbrenner wrote in a widely read paper in June that “by 27/28 we’ll get some form of government AGI project. No startup can handle superintelligence.”

I’ve also had conversations with executives at AI companies who have suggested some kind of “nationalization” is possible.

What isn’t spelled out in these proposals is how, exactly, the US government is going to become the world’s leading developer of AI. Right now, achieving “AGI” (the definition of which is fuzzy) is the stated goal of companies including Meta, OpenAI, and Anthropic.

A public-private scenario would put all eyes on xAI, the company founded by Elon Musk. Trump traveled to Texas Tuesday to watch a rocket launch by Musk-founded SpaceX.

Musk got a slow start in the AI race and is trailing OpenAI, Anthropic, Google and others. But he has built what may be the largest cluster of AI chips in the world at a data center in Memphis.

If the incoming president decides to build a Manhattan Project for AI, the odds are pretty good that Musk is going to be centrally involved.

The government could also go it alone and try to recruit the best researchers to work on this project. But eventually, these efforts would find their way to the private sector in some form, with perhaps the most advanced AI remaining classified. Regardless, the next four years will see government play a bigger role in the tech industry than it has in the past.

For more on this and other important topics, read my interview with Palo Alto Networks CEO Nikesh Arora, who sees the industry from every angle.

Move Fast/Break Things
Sundar Pichai, CEO of Alphabet and Google, speaking at the World Economic Forum Annual Meeting 2020
World Economic Forum/Flickr

➚ MOVE FAST: Google. The company’s investment in Anthropic was cleared by UK antitrust regulators, who made a similar decision for Amazon’s partnership with the AI startup. Google’s ties to Anthropic didn’t qualify for review under UK merger control rules.

➘ BREAK THINGS: (Also) Google. The search giant hasn’t been so lucky in its home country. The Justice Department is expected on Wednesday to ask a judge to order that Google divest Chrome or its Android operating system, unless it curbs its dominance in online search.

Artificial Flavor
A Pokemon store at Tokyo Station
Wikimedia Commons

Companies sitting on mountains of user data essentially hold the gold that AI models need for development. Niantic, the company behind Pokémon Go, is creating an AI model that can navigate the real world, using the geospatial data it has collected from millions of Pokémon hunters, it said in a blog post last week.

What it calls a “Large Geospatial Model” will predict what physical structures like churches and statues look like based on what it knows about existing ones, similar to how a large language model can produce text after digesting mounds of written material. The company said the model will aid in the development and use of AR glasses, robotics, wearables, and autonomous technologies, though others have theorized about darker use cases.

The coming model — arguably a smart use of Niantic’s data reserves — is also a reminder that free is never free in tech. It echoes the likes of Meta and Microsoft, which have pulled from their own data stockpiles to train their generative models. Play on, or don’t.

Q&A
Palo Alto Networks’ CEO Nikesh Arora
Courtesy of Palo Alto Networks

Nikesh Arora is CEO of cybersecurity firm Palo Alto Networks, and a former executive at Google and SoftBank.

Reed Albergotti: A lot of tech people see [Trump’s reelection] as an opportunity to reform government. What should tech’s role be in this administration?

Nikesh Arora: We have the FedRAMP process, which is designed to make sure things are tested and the tires are kicked. But over time, these things take longer and longer, and you know how quickly technology moves nowadays. Two years ago, everyone talked about ChatGPT, and today we’re talking about $300 million AI clusters.

If we apply the traditional FedRAMP process to AI clusters, it won’t get approved for another three years. So the question is: Does that mean we don’t deploy AI across the government to make things efficient and faster? I think we do. How can we adapt the processes while keeping the principles alive of making sure it’s secure, making sure it’s manageable, yet deliver the benefits of technology to the government? I think you’re going to see a lot more of that.

AI isn’t a huge problem in offensive hacking, at least until capabilities improve. But do you think we’ll hit a point where AI just becomes too dangerous to make widely available?

Right now, we’re just creating a smarter and smarter brain. Today, it speaks every language. It understands every language. It knows all the data in the world out there, but it’s still developing a sense of right versus wrong and good versus bad, because the internet is sans judgment. You can ask a question, and all these guardrails are being put in by model developers, but AI itself doesn’t have its own guardrails. Now, we’re all waiting for the big reveal of when it goes from just being able to tell you what it knows to being able to be smart enough to infer things it doesn’t know. Can AI be Albert Einstein? Not yet. Can it be Marie Curie? Not yet. But the moment AI starts building curiosity, that’s the next step.

Then the question is, who’s going to put the guardrails on this brain, and who’s going to have access to the brain? That’s also more worrisome than where we are today, but not as scary as it could be. Now, let’s take the next step.

In the case of [self-driving car company] Waymo, we let the scary brain take control. That’s the biggest fear. If you let AI take control, how do you know it’ll always come to the right thing? How do you know that Waymo won’t lock the doors and keep driving and take you to Sacramento just because I commanded it to? Those are the things we have to think hard about. How do you make sure that when you get this superintelligence, it is only used for good, and who has access to it? Then the question is, when do we give control to that superintelligence, and do we have the ability to manage it in such a way that we can at least sometimes have guidance control?

I know security is a cat and mouse game. But what if you had an AI model that could think or reason, and it could write code, basically like Stuxnet [the malicious worm that targeted Iran’s nuclear program] with the ability to adapt and think once it’s in the system. How would you combat that?

We’re already going in that direction. We’re trying to build models of what is normal behavior because all the Stuxnets of the world come and think they’re going to have to do something out of the norm to be able to breach us.

And typically, there’s some abnormal behavior that happens. Like, when Nikesh logged in this morning, he tried to download five terabytes of data onto his personal server. That doesn’t sound normal. The problem is, today, we don’t have a good sense of what is normal, what is abnormal, and what should I do when it’s abnormal. There’s so much noise in the system that nobody actually has a clear sense of what is noise and what is signal.

I’ll give you an example. A few years ago we had the SolarWinds incident.

SolarWinds was a hack where a nation state decided, why bother hacking one company at a time? Let’s go hack a piece of hardware [and] everybody who has it will be fair game. Now, this piece of hardware technically sits in most companies. But we discovered through our user behavior analysis that this thing never talks to anything outside. But today it’s trying to, so we stopped it.

We stopped a zero-day attack. And then we looked and said, ‘Wow, what’s going on here?’ So we actually called the vendor and said, ‘Guys, what happened here?’ They replied, ‘Nothing’s wrong, [it] must be in the infrastructure.’

We had to hire a third party to come in and do an investigation, and eventually found out that they had been hacked. That’s an example of how once you have clear signal, you can separate the noise from signal. With good signal, you can put remediation into place.
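The behavioral baselining Arora describes, learning what “normal” looks like for each user or device and flagging sharp deviations, can be illustrated with a toy sketch. This is purely hypothetical code, not Palo Alto Networks’ actual system; the function, sample values, and threshold are all invented for illustration:

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag an observation that deviates sharply from a user's baseline.

    history: past daily download volumes (GB) for one user.
    observed: today's volume. A z-score above `threshold` marks the
    event as abnormal, separating signal from background noise.
    """
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# A user who normally downloads ~2 GB a day suddenly pulls 5,000 GB
# (five terabytes), as in Arora's example.
baseline = [1.8, 2.1, 2.0, 1.9, 2.2, 2.0, 1.7]
print(is_anomalous(baseline, 5000))   # the 5 TB download is flagged
print(is_anomalous(baseline, 2.1))    # an ordinary day is not
```

Real systems model far richer behavior (destinations, times, protocols), but the principle is the same: a clear baseline is what turns noise into signal.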

Read on to find out why Arora thinks agentic AI is “the most fascinating thing ever.” →

Semafor Stat
$61 billion

The valuation Databricks is looking for in its latest funding round, a 42% increase from last year. The cloud company is trying to raise between $7 billion and $9 billion to cash out employees’ stock grants and cover the related taxes, The Information reports. If successful, it would surpass this year’s funding raised by Stripe, and approach that of OpenAI.
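The reported 42% jump implies a prior valuation of roughly $43 billion, which a quick back-of-the-envelope check confirms (figures are the reported ones; the calculation is ours):

```python
target = 61e9   # reported target valuation, in dollars
growth = 0.42   # reported increase from last year

# Prior valuation implied by a 42% increase to $61 billion
prior = target / (1 + growth)
print(f"Implied prior valuation: ${prior / 1e9:.0f} billion")
```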

Quotable
“If Europe doesn’t do this right, it will become a very small continent abandoned for a few generations.”

— Tech investor Xavier Niel, in an interview with the FT, on the dangers for the region if it misfires in AI innovation. He has backed French startup Mistral, among other investments.

Ahem
A chart showing the change in share prices of Intuit and H&R Block over the past day

Elon Musk’s early push to streamline the government is already hitting Intuit and H&R Block, whose shares dropped on chatter of a free tax-filing app, floated by Musk’s new Department of Government Efficiency. Intuit’s shares fell 5% while H&R Block’s stock dropped 8% Tuesday, marking its worst performance since 2020.

“Crazy idea: let’s simplify the tax code,” Musk posted on X (I flagged that the IRS was an obvious target last week but, ahem, it got cut).

If launched, the app would come on the heels of the Biden administration’s work to make filing more accessible. Earlier this year, the IRS rolled out free filing software to residents of 12 states as part of a broader plan to extend the program nationwide by the 2025 tax season.

Easier filings would be a worthwhile investment in the eyes of this newsletter, since the IRS largely leaves individuals to figure out how tax laws apply for themselves (and many other wealthy countries don’t face the same filing burden that Americans do). But it might also handicap firms like TurboTax owner Intuit, which spent decades lobbying against government attempts to offer free and easy filings.

Semafor Spotlight
US President-elect Donald Trump
Bryan Snyder/Reuters

Donald Trump may be interested in using steep new tariffs to pay for tax cuts next year, but Republican lawmakers are far from sold, Semafor’s Burgess Everett and Kadia Goba reported.

“I don’t like tariffs, Number One. I think the consumer pays them. So they’re regressive. They’re a sales tax, basically,” Kentucky Sen. Rand Paul told Semafor.

For more on the Trump transition, subscribe to Semafor’s daily Principals newsletter. →
