The Scoop
Customers have asked to run OpenAI models on non-Microsoft cloud services or on their own local servers, but OpenAI has no immediate plans to offer such options, according to people familiar with the matter.
That means there’s one area where rivals of the ChatGPT creator have an edge: flexibility.
To use OpenAI’s technology, paying customers have two choices: They can go directly through OpenAI or through investment partner Microsoft, which has inked a deal to be OpenAI’s exclusive cloud provider.
Microsoft will not allow OpenAI’s models to be available on other cloud providers, according to a person briefed on the matter. Some companies that exclusively use rivals, such as Amazon Web Services, Google Cloud or Oracle, choose to use other AI models rather than switch cloud providers or send data directly to OpenAI.
But Microsoft would allow OpenAI models to be offered “on premises,” with customers running them on servers they build and operate themselves. Creating such an offering would pose challenges, particularly around protecting OpenAI’s intellectual property, but it is technically feasible, this person said.
Spokespeople from OpenAI and Microsoft declined to comment.
Right now, every time customers access OpenAI models, they are sending data to Sam Altman’s company or Microsoft. They are also paying for the cloud computing costs for each query.
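To make that dependency concrete, here is a minimal sketch of what a typical GPT-4 request looks like with OpenAI’s Python SDK as it existed at the time; the API key, prompt, and token figures are placeholders. The point is simply that each query is a network call to infrastructure run by OpenAI or Microsoft, billed per use.

```python
# Minimal sketch: every GPT-4 query is a network call to OpenAI
# (or an Azure OpenAI endpoint); nothing runs on the customer's own hardware.
import openai

openai.api_key = "sk-..."  # placeholder; normally loaded from the environment

response = openai.ChatCompletion.create(
    model="gpt-4",  # served only from OpenAI/Microsoft infrastructure
    messages=[{"role": "user", "content": "Summarize this internal memo..."}],
)

# The prompt above has left the customer's network, and the tokens in the
# request and response are what the customer pays for.
print(response["choices"][0]["message"]["content"])
```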
For companies with sensitive data, or those barred by regulation from sending data to the cloud, OpenAI isn’t an option. And companies that would rather run their own servers than pay cloud computing costs might opt for alternatives.
It’s unclear how many customers fall into that bucket. Some companies, like Morgan Stanley, send data directly to OpenAI, which does not use it to train its AI models.
Microsoft does offer a “hybrid cloud” option, in which companies can take advantage of the cloud while storing sensitive data on local servers. But even with that setup, companies would still have to send some data to Microsoft to use OpenAI’s models.
Know More
The pressure on OpenAI intensified last week, when Meta released a competing AI model called Llama 2 that is distinct from OpenAI’s offerings in one major way: It’s open source. That means companies, individuals and researchers can use it largely as they wish, including by fine-tuning it for their own purposes, something that is not possible with OpenAI’s GPT-4 model. Llama 2 can also be downsized so it runs on low-power devices, which could cut costs and increase speed.
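As a rough sketch of what that flexibility means in practice, assuming the Hugging Face transformers library and access to Meta’s gated Llama 2 weights (which require accepting Meta’s license), an open-source model can be downloaded once and then run, fine-tuned, or shrunk entirely on hardware the company controls:

```python
# Sketch: running an open-source model locally with Hugging Face transformers.
# Assumes access to the gated "meta-llama/Llama-2-7b-chat-hf" weights has been
# granted to your account; no data leaves the machine at inference time.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize this internal memo..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Because the weights are local, they can also be fine-tuned on proprietary
# data or quantized to run on smaller, cheaper hardware -- neither of which is
# possible with GPT-4, which is reachable only through OpenAI's or Azure's APIs.
```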
OpenAI has released its own open source models in the past and it may release more in the future. But when it comes to its top-tier model, GPT-4, open source is not an option, Altman has said.
OpenAI argues that open-sourcing powerful foundation models would be dangerous because they could be misused. It would also almost certainly lead to a loss of valuable trade secrets for OpenAI, allowing competitors to catch up and potential customers to use the technology for free.
Reed’s view
The AI landscape is increasingly competitive. Claude 2, a competing model developed by Anthropic, a company founded by former OpenAI employees, is available on multiple cloud providers, including AWS, the industry leader. By some measures, Claude 2 is a higher-performing model.
And rivals are becoming widely available. AWS, Google and Oracle are intent on offering customers a plethora of AI options, both open source and closed source.
The ace up OpenAI’s sleeve is that it may be able to race ahead of competitors. GPT-4, the most advanced model available to consumers, was released to the public in March, but it was already in use more than a year ago, a person close to the company said. And GPT-4, this person said, is not the most advanced technology OpenAI has developed.
The idea is that future models from OpenAI will leapfrog competitors, thanks in large part to ever-growing datasets that improve accuracy and sophistication, this person said.
But even assuming OpenAI can keep improving its models and stay ahead of the competition, there’s still a question of how much better they will be than the alternatives.
There are myriad benchmarks that assess the abilities of these foundation models and rank them — such as how well they do on medical and legal exams. But the real question is whether the average consumer can tell the difference.
If they’re all good enough, businesses will gravitate toward the cheapest and most convenient options, and some will go for the most secure or private. OpenAI wouldn’t be the top choice on any of those measures.
But if large language models like GPT are about to see an exponential increase in abilities that’s noticeable to consumers, then OpenAI will still be the industry leader, even if its models are available on only some cloud providers.
Exponential improvements are not a given. Some AI researchers believe that in order to make big advances beyond where we are today, a whole new architecture is needed. There’s no guarantee OpenAI will make such a discovery.
Room for Disagreement
OpenAI can afford to be less flexible because of its first-mover advantage. That, and the moat it’s created by gathering more data from the widespread use of ChatGPT, will keep it ahead of competitors, this article argues. OpenAI has made its name synonymous with advanced AI and chatbots, and competitors will have a hard time doing the same.
The View From Europe
I spoke with a European venture capitalist Tuesday who believes we haven’t even begun to see the competition in the market yet.
From the U.S., Europe looks hindered by the looming threat of ham-handed regulation. But those laws haven’t passed yet, and the fears may be overblown.
European AI companies hope to take advantage of an influx of Chinese AI researchers who want to leave their home country but are scared off by the geopolitical implications of working in the U.S.
Notable
- The gap in performance between OpenAI’s GPT-4 model and free, open-source alternatives is narrowing, The Information reported in March, leaving OpenAI two options: become more flexible or get better.