THE SCENE

Venture capitalist Vinod Khosla made one of the shrewdest bets of his career in 2019, when he put $50 million into OpenAI, now the darling of the generative AI boom. In a sign of just how fast that industry is moving, Khosla recently made another big investment, this time in a company built around the belief that OpenAI has it all wrong.

Symbolica AI, co-founded by former Tesla Autopilot engineer George Morgan, is building an AI-assisted coding tool that it says uses a new method of machine learning, one that works completely differently from the cutting-edge foundation models made by OpenAI, Google, and other major AI companies. With this new approach, Morgan says, Symbolica’s models won’t require the massive, power-hungry compute that companies are now spending tens of billions of dollars to procure for the most advanced AI models.

In an interview with Semafor, Morgan said those investments are based on speculation that, given enough data and enough compute, later versions of these models will become more intelligent. Without mathematical proof showing how these things work, Morgan says, the process is more like alchemy. “That’s exactly what AI models are today,” he said. “You mix a bunch of random stuff together, you test it, you see if it does the thing or not. If it doesn’t, you try something else. What Symbolica is doing is bringing this into the era of chemistry.”

REED’S VIEW

I haven’t seen Symbolica’s technology yet, so I can’t vouch for its capabilities. But tech leaders have told me that the next big breakthrough in AI is likely to come from removing humans from the architecture step of its development. There is a big market for what Symbolica is trying to build. While consumers may want to chat with AIs about any topic, businesses want the opposite: language interfaces that are narrowly focused and totally reliable. Hallucination, in many enterprise contexts, is simply unacceptable.
In the near term, large transformer models will get bigger and more capable. According to people who have used the still-under-wraps GPT-5, the next generation of OpenAI’s technology, it is much closer to reasoning abilities than GPT-4. It still hallucinates and is definitely not AGI, but it sounds like it is good enough to be useful to a wider swath of customers. Ten years from now, we may look back and realize that scaling transformer-based models only got us so far, and that new methods like Symbolica’s represented the path to AI with reasoning capabilities. Even if that’s the case, today’s foundation models will have played a critical role: ChatGPT and the resulting AI craze have inspired a wave of investment and talent pouring into the field.