Matt Wood, vice president of product for Amazon Web Services, is at the tip of the spear of Amazon’s response in the escalating AI battle between the tech giants.

Q: Microsoft and Google are both nipping at your heels by offering these huge AI models. How does AWS view this market?

A: I have not seen this level of excitement and engagement from customers since the very earliest days of AWS. We have over 100,000 customers today who routinely use AWS to drive their machine-learning capabilities and these generative AI systems. One of the interesting differences with these generative models is that they make machine learning easier than ever before to use and apply. We built a capability we call Bedrock, which provides the easiest way for developers to build new experiences using this technology on AWS. You just provide a prompt, select which model you want to use, and we give you the answer.

Where we think of things a little differently is that it doesn’t seem there’s going to be one model to rule them all. As a result, our approach is to take the very best, most promising, most interesting models and operationalize them so customers can really use them in production. Customers can combine models from Amazon and from third parties in ways that are interesting and novel.

Q: How many models are there now?

A: On Bedrock, we have models from Amazon that we call Titan. We provide models from Anthropic and from AI21 Labs, which has great support for different languages. We’ve got models from Stability AI, and we’ll have more coming in the future.

Q: So you’re basically curating the best models out there?

A: Indeed. But there’s an old Amazon adage that these things are usually an “and,” not an “or.” So we’re doing both. It’s so early and so exciting that new models are emerging from industry and academia virtually every single week. But some of them are super early, and we don’t know what they’re good at yet.
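The workflow Wood describes — provide a prompt, pick a model, get an answer — can be sketched with boto3, the AWS SDK for Python. This is a minimal illustration only: the model ID, request-body schema, and response shape below are assumptions for the sake of example, and the actual call requires AWS credentials and Bedrock access, so it is shown but not exercised.

```python
import json

def build_request(prompt: str, model_id: str) -> dict:
    """Assemble an invoke_model request: a prompt plus the chosen model.

    The body schema varies by model provider; "inputText" here follows the
    shape used by Amazon's Titan text models (an assumption, not a contract).
    """
    return {
        "modelId": model_id,
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({"inputText": prompt}),
    }

def invoke(prompt: str, model_id: str = "amazon.titan-text-express-v1") -> str:
    """Send the prompt to a Bedrock model and return the generated text.

    Needs AWS credentials and Bedrock model access; not run in this sketch.
    """
    import boto3  # deferred so the sketch runs without the SDK installed
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(**build_request(prompt, model_id))
    payload = json.loads(response["body"].read())
    return payload["results"][0]["outputText"]

# Build (but do not send) a request, to show the "prompt + model choice" shape.
request = build_request("Summarize this support ticket.", "amazon.titan-text-express-v1")
print(request["modelId"])
```

Swapping in a third-party model is then just a different `modelId` and body schema, which matches the "combine models from Amazon and from third parties" framing above.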
Q: Can you give me any customer examples that stand out?

A: It’s super early and we’re still in limited preview with Bedrock. What has struck me is the diversity and breadth of the use cases we’re seeing. A lot of folks are using these models in the unsexy but very important back end: personalization, ranking, search, and all those sorts of things. We’re seeing a lot of interest in expert systems, so chat and question-answer systems. But we’re also seeing a lot of work in decision-making support: decomposing and solving more complicated problems, then automating the solution using a combination of language models under the hood.

Q: What’s the vision for who will best be able to take advantage of these products? Do you see a possibility that startups could basically just form a company around these APIs on Bedrock?

A: There are going to be waves and waves of startups that have an idea or an automation they want to bring into companies, or an entirely new product idea that’s enabled through this. An interesting area is larger enterprises that are very text heavy. Anywhere there is existing text is fertile soil for building these sorts of systems. And what’s super interesting is that we’re seeing a lot of interest from organizations in regulated fields that traditionally don’t have the best reputation for leaning into cutting-edge technologies: banking, finance, insurance, financial services, healthcare, life sciences, crop sciences. They are so rich in the perfect training data: volumes and volumes of unstructured text, which is really just data represented in natural language. What these models are incredibly capable of is distilling the knowledge represented in that natural language and then exposing it in all of these wonderful new ways that we’re seeing.

For the full conversation, read here.