Nikesh Arora is CEO of cybersecurity firm Palo Alto Networks and a former executive at Google and SoftBank.

Reed Albergotti: A lot of tech people see [Trump's reelection] as an opportunity to reform government. What should tech's role be in this administration?

Nikesh Arora: We have the FedRAMP process, which is designed to make sure things are tested and the tires are kicked. But over time, these things take longer and longer, and you know how quickly technology moves nowadays. Two years ago, everyone talked about ChatGPT, and today we're talking about $300 million AI clusters. If we apply the traditional FedRAMP process to AI clusters, they won't get approved for another three years. So the question is: Does that mean we don't deploy AI across the government to make things efficient and faster? I think we do. How can we adapt the processes while keeping the principles alive, making sure it's secure and manageable, yet still deliver the benefits of technology to the government? I think you're going to see a lot more of that.

Albergotti: AI isn't a huge problem in offensive hacking, at least until capabilities improve. But do you think we hit a point where AI just becomes too dangerous to make widely available?

Arora: Right now, we're just creating a smarter and smarter brain. Today, it speaks every language. It understands every language. It knows all the data in the world out there, but it's still developing a sense of right versus wrong and good versus bad, because the internet is sans judgment. You can ask a question, and all these guardrails are being put in by model developers, but AI itself doesn't have its own guardrails. Now, we're all waiting for the big reveal: when it goes from just being able to tell you what it knows to being smart enough to infer things it doesn't know. Can AI be Albert Einstein? Not yet. Can it be Marie Curie? Not yet. But the moment AI starts building curiosity, that's the next step. Then the question is, who's going to put the guardrails on this brain, and who's going to have access to the brain? That's more worrisome than where we are today, but not as scary as it could be.

Now, let's take the next step. In the case of [self-driving car company] Waymo, we let the scary brain take control. That's the biggest fear. If you let AI take control, how do you know it'll always do the right thing? How do you know that Waymo won't lock the doors, keep driving, and take you to Sacramento just because someone commanded it to? Those are the things we have to think hard about. How do you make sure, when you get this super intelligence, that it is only used for good, and who has access to it? And when do we give control to that super intelligence, and do we have the ability to manage it in such a way that we at least retain some guiding control?

Albergotti: I know security is a cat and mouse game. But what if you had an AI model that could think or reason and write code, basically like Stuxnet [the malicious worm that targeted Iran's nuclear program], but with the ability to adapt and think once it's in the system? How would you combat that?

Arora: We're already going in that direction. We're trying to build models of what is normal behavior, because all the Stuxnets of the world eventually have to do something out of the norm to breach us. And typically, there's some abnormal behavior that happens.
Like, when Nikesh logged in this morning, he tried to download five terabytes of data onto his personal server. That doesn't sound normal. The problem is, today, we don't have a good sense of what is normal, what is abnormal, and what we should do when it's abnormal. There's so much noise in the system that nobody actually has a clear sense of what is noise and what is signal.

I'll give you an example. A few years ago we had the SolarWinds incident. SolarWinds was a hack where a nation state decided, why bother hacking one company at a time? Let's go hack a piece of hardware [and] everybody who has it will be fair game. Now, this piece of hardware technically sits in most companies. But through our user behavior analysis we knew this thing never talks to anything outside, and suddenly it was trying to, so we stopped it. We stopped a zero-day attack. And then we looked and said, 'Wow, what's going on here?' So we actually called the vendor and said, 'Guys, what happened here?' They replied, 'Nothing's wrong, [it] must be in the infrastructure.' We had to hire a third party to come in and do an investigation, and eventually found out that they had been hacked.

That's an example of how, once you have clear signal, you can separate the noise from the signal. With good signal, you can put remediation into place.
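The behavioral-baseline idea Arora describes can be made concrete with a small sketch. The function, threshold, and byte counts below are hypothetical and chosen purely for illustration; this is not Palo Alto Networks' detection logic, just the general pattern of scoring today's activity against a user's own history.

```python
# A minimal sketch of baseline-vs-anomaly scoring, assuming a hypothetical
# feed of per-user daily outbound transfer volumes (in bytes).
from statistics import mean, stdev


def is_anomalous(history_bytes: list[int], today_bytes: int, z_threshold: float = 3.0) -> bool:
    """Flag today's transfer volume if it sits far outside the user's own baseline."""
    if len(history_bytes) < 7:
        return False  # too little history to build a meaningful baseline
    mu = mean(history_bytes)
    sigma = stdev(history_bytes)
    if sigma == 0:
        return today_bytes > mu  # flat baseline: any increase is unusual
    z_score = (today_bytes - mu) / sigma
    return z_score > z_threshold


# Example: a user who normally moves a few hundred megabytes a day.
baseline = [200_000_000, 350_000_000, 180_000_000, 400_000_000,
            220_000_000, 300_000_000, 250_000_000]
print(is_anomalous(baseline, 5_000_000_000_000))  # a 5 TB day -> True (flag it)
print(is_anomalous(baseline, 280_000_000))        # a typical day -> False
```

Real user-behavior analytics systems track many signals beyond transfer volume (destinations, times of day, process activity), but the core move is the same: establish what normal looks like per user or device, then surface the deviations.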