• D.C.
  • BXL
  • Lagos
  • Riyadh
  • Beijing
  • SG
  • D.C.
  • BXL
  • Lagos
Semafor Logo
  • Riyadh
  • Beijing
  • SG


DeepSeek’s hidden security risks foreshadow AI’s future

Feb 7, 2025, 12:46pm EST
techNorth America
Facebook Chief Security Officer Alex Stamos talks about the Internet Defense Prize as he gives a keynote address during the Black Hat information security conference in Las Vegas, Nevada, US July 26, 2017.
Steve Marcus/Reuters
PostEmailWhatsapp
Title icon

The Scene

The hype around DeepSeek overshadowed a bigger debate over Chinese AI that is only just beginning: Could models made in China make the US more vulnerable to cybersecurity attacks?

When DeepSeek released its most advanced AI model, called R1, last month, it was hailed for its advanced capabilities relative to its size, and was cheaper to run than many rivals made by big tech companies. And it’s open source, making it free for anyone to use.

At the same time, security experts were probing the model and finding fewer built-in protections meant to prevent the software from being misused. Palo Alto Networks said it found that R1 was particularly susceptible to three techniques aimed at “jailbreaking” attacks that essentially render AI models defenseless against anyone trying to control them.

AD

Much of the concern has centered on DeepSeek’s mobile app, which shot to the top of the Apple App Store rankings and remained there for over a week. And while the app notably gathers large amounts of data about users and sends it to China, the risks it poses are more or less understood and out in the open.

What’s less known is whether the open-source model weights could pose risks even outside of the mobile app. Anyone can download it and, with a reasonably powerful consumer computer, run it locally. What they do with it after that determines the potential risks. If the model ever connects to the internet or the outside world, the risk is very low. If it’s granted access to data and given expanded capabilities, the possibility for problems increases.

Alex Stamos, the chief information security officer for SentinelOne — who held a similar post at Facebook — said the risks posed by the model weights are more theoretical, and would arise if companies eventually chose to expand the uses of AI models, giving them more power to control computer systems — what’s known as “agentic” AI.

AD

“There will be future risks, for sure, because the way these models work are going to have to change for what people want,” he said.

Title icon

Step Back

Open-source AI models have become favorite tools of many companies because of their relatively low cost and the ability to run them on servers — either in the cloud or situated locally — controlled by the firms using them.

Platforms like Meta’s Llama family or those made by French company Mistral have become ubiquitous in corporate America, powering company chatbots that draw on proprietary data to give employees and customers new insights and answers. With the widespread popularity of R1, it’s possible DeepSeek will join those ranks.

Within days of R1’s release, cloud providers began making it available to customers who might want to use it to power AI tools.

Title icon

Know More

On future risks when AI agents become the norm, Stamos cited OpenAI’s new Deep Research tool that can access the open internet, run autonomously for relatively long periods of time and compile information to form the basis of thorough research reports.

AD

Eventually, there could be a DeepSeek equivalent. “So in the future, yes, it could be more dangerous,” he said. “Right now all this thing can do is export output text, but that’s not going to be the future. The future will be that you’ll be able to download some kind of file format that has all of these built-in agentic capabilities to read your screen, to listen to your microphone, to fetch things from the web. And in that model, clearly it is going to be much more concerning.”

One possible vulnerability in AI models is a prompt injection attack, where a model is given a prompt that causes it to do something totally unexpected or against its own rules. There are other creative ideas for how hackers might accomplish these attacks. For instance, if an LLM is able to retrieve information from a database, an attacker could try to inject malicious prompts into the data. It could be as simple as sending an email to someone with a string of text that gets ingested into an LLM and causes it to start divulging confidential information.

While those kinds of attacks have not yet become a real-world problem, they could be around the corner, and if open-source Chinese AI models gain widespread adoption in corporate America, it could lead to concern among lawmakers and national security officials.

Similar concerns about Huawei’s 5G network equipment led to it being banned in the US and other countries.

Already, lawmakers like Senator Josh Hawley have called for legislation that would penalize Americans with prison time if they use open-source AI models from China.

From a security perspective, the coming AI landscape could create a Wild West situation reminiscent of the 1990s dot-com era or the early mobile web.

“People would say in the 1990s, ‘I have built a secure web application.’ In 1999, that was an impossible statement to make,” said Stamos, who was part of early hacker culture and was associated with the famed group Cult of the Dead Cow. “People would say in 2008, ‘I built a secure mobile app.’ That was an impossible statement to make.”

Stamos, a computer science lecturer at Stanford University, says the current AI moment reminds him of the early internet days, when the cybersecurity landscape was a blank canvas. “I tell my students it is an incredibly exciting time for you to be alive, because we have no idea what the vulnerabilities are in these systems. Anybody who says they have built a totally secure AI system is lying to you.”

Title icon

Reed’s view

It’s likely US companies will be wary of employing DeepSeek AI models at scale. There’s no evidence that there are backdoors built into the model weights — if such a thing is really even possible — but the risks outweigh the rewards. DeepSeek’s models aren’t so far ahead that companies are missing out by not using them.

But if China’s open-source models surge ahead of US models, it will create a very tricky situation. While it might be possible to ban mobile apps, the idea of banning open-source software seems ludicrous if not impossible.

Even today, open-source AI models are becoming an amalgamation of parts, distilled from one source and built atop another. Even assigning a nationality may soon become difficult.

There’s only one way to stop China’s open-source models (and all the accompanying security risks) from gaining a foothold in the US and the global market. That’s by making better open-source software in the US.

In that regard, Meta is a US open-source AI national champion of sorts. But who knows how long it can afford to give away software that requires increasing billions to train.

The real answer is in academia. There are brilliant researchers inside computer science departments who are starved of resources.

Earlier this week, Stanford researchers announced they had created a model that approaches the capabilities of one of OpenAI’s foundation models. The cost to train it? About $50. Imagine what they could do with more GPUs.

Title icon

Room for Disagreement

Hawley, in introducing legislation effectively banning DeepSeek and Chinese AI models, believes a complete decoupling from China on AI is possible. No AI technology could flow between the two countries.

In a statement, Hawley said: “Every dollar and gig of data that flows into Chinese AI are dollars and data that will ultimately be used against the United States. “America cannot afford to empower our greatest adversary at the expense of our own strength. Ensuring American economic superiority means cutting China off from American ingenuity and halting the subsidization of CCP innovation.”

Title icon

Notable

  • This prescient Center for Strategic and International Studies paper from August makes a compelling comparison between open-source AI and the defense contracting space, arguing that a flourishing open-source ecosystem creates competition and diversification that will help the defense industry in AI procurement.
AD
AD