December 20, 2023
Semafor

Technology

 
Louise Matsakis

Hi, and welcome back to Semafor Tech. Before 2023 wraps up, we have one more interview for you, and it’s a big one. While on vacation in Australia, Reed got the opportunity to speak with Meta Chief Technology Officer Andrew Bosworth, better known as Boz.

In their full conversation, Boz discusses Meta’s philosophy on open source, its strategy for recruiting top AI talent, and why he thinks digital privacy laws passed in Texas and Illinois are “bad.” I was surprised by Boz’s candor, and think the interview is well worth reading, maybe while you enjoy a few holiday cookies or other treats.

Plus, a disturbing data discovery, Google Gemini’s identity crisis, and our executive editor Gina Chua conducts an experiment with chatbots.

Move Fast/Break Things

Reuters/Florence Lo

➚ MOVE FAST: Shake up. Alibaba is making more changes to its management structure as upstarts like Temu threaten the e-commerce giant’s dominance. CEO Eddie Wu is consolidating control over the Chinese firm’s shopping platforms, which could inject some much-needed urgency amid increased competition for Taobao and Tmall.

➘ BREAK THINGS: Shake down. A Pandora’s box of potential patent trolls has been closed, for now. The U.K. Supreme Court today rejected an effort by an American computer scientist to register patents for inventions he says were created by his AI system. Like courts in the U.S., Australia, and the EU, U.K. judges said an inventor must be a human being.

Artificial Flavor
Semafor/Al Lucca

Less is more. There’s no shortage of examples of how the marriage between generative AI and journalism can go horribly wrong. But how can it go right? The trick, writes Semafor’s Gina Chua, may be to make AI know less about the world. Large language models hallucinate because they’re turning out text based on the billions of words and sentences they’ve scooped up from the web, much of which is not particularly accurate. But if you can keep AI from tapping into that wealth of information and focus it only on a core set of documents, the output is a lot more useful.

Chua spent a weekend building a handful of custom bots — one, intended to help journalists tap into a news organization’s accumulated knowledge; another, to bring specialist expertise to resource-strapped newsrooms on deadline; and a third, to experiment with a new, more conversational way of bringing journalism to audiences. The results weren’t perfect — but they weren’t bad. And they point to some paths worth exploring.
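The core idea Chua describes, constraining a model to a supplied set of trusted documents rather than everything it absorbed from the web, can be sketched in a few lines. This is an illustrative sketch, not Semafor's actual implementation; the function name and prompt wording are our own, and the grounded prompt would then be sent to whatever chatbot API a newsroom uses:

```python
# Illustrative sketch of "document-grounded" prompting: rather than letting
# the model answer from its web-scale training data, we hand it a small set
# of trusted documents and instruct it to answer only from those.

def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Assemble a prompt that tells the model to answer strictly from the
    supplied documents, and to admit when they don't contain the answer."""
    context = "\n\n".join(
        f"Document {i + 1}:\n{doc}" for i, doc in enumerate(documents)
    )
    return (
        "Answer the question using ONLY the documents below. "
        "If the documents do not contain the answer, say 'I don't know.'\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

# Example: ground the model in one internal document.
prompt = build_grounded_prompt(
    "When was the archive policy last updated?",
    ["The newsroom archive policy was last updated in March 2022."],
)
print(prompt)
```

The explicit "say 'I don't know'" instruction is doing a lot of the anti-hallucination work here: it gives the model a sanctioned way out instead of inventing an answer when the documents come up empty.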

Check out her experience here. →

Q&A

Andrew Bosworth is Meta’s Chief Technology Officer. He talked to Reed about how the generative AI craze has shifted the Facebook parent’s strategy across product lines, including its mixed-reality efforts, helping to usher in a new generation of Quest headsets and Ray-Ban smart glasses.

Q: You met Mark Zuckerberg when you were teaching at Harvard. Does it feel like everything is coming full circle now, as generative AI reshapes Meta?

A: [Meta Chief Product Officer] Chris Cox and I both studied AI in unusual ways. He was part of the symbolic systems program at Stanford. I was part of this mind-brain and behavior program at Harvard — computational neurobiology. So Chris and I both had this kind of interesting, very interdisciplinary approach to AI as undergraduates. Having said that, when I was teaching AI as an undergraduate [teaching fellow], we taught that neural networks were once promising, but then known to be useless technology. That was 2004. And, of course, neural networks run the world now.

So let’s take it with a grain of salt. I love an origin story as much as the next person, but I don’t want to get too ahead of myself. When I came to Facebook, Mark told his recruiter, ‘I need someone who knows AI’ and she said, ‘I know this one guy.’ So I came in and built probably the first artificial intelligence that we shipped here, which were all heuristic-based systems for ranking News Feed. Chris Cox was my daily partner. I did the back-end and ranking and he did the front-end design work.

So, AI has always been a major component of our thinking. But like all technologies, you’re in these S curves. Before generative AI really hit the scene, you really were on this big flat part of an S curve and there for a long time, just grinding out gains, like a quarter percent or half a percent year-over-year. We’ve known for three or four years that these generative AI, large language models were a big deal, but they still felt like infrastructure. That’s why I think the OpenAI introduction of ChatGPT is so interesting.

I’m not trying to say we were prescient. We had been investing and we believed in this technology. For all of that belief, it still snuck up on us. It was like, ‘Oh my god, it’s here. We’ve got to do things differently. We’ve got to change it up.’

Q: I appreciate the honesty. It does seem like when you take mixed-reality hardware and combine it with generative AI, you get something bigger than the sum of its parts. Do you see it that way?

A: If you go back to Michael Abrash’s talks at Connect — these big “what does the future of augmented reality look like” talks — artificial intelligence was always a part of the vision, but we probably got the sequencing wrong. We thought we would need a lot more very narrow data. So we all had this idea that we were going to have to have a lot of glasses that did some good stuff in the market for a long time. And then you could get to the AI that made it even more useful.

If you think about machine learning before large language models, it was always that way. There was some kind of use case that created value. For instance, Facebook existed before News Feed was ranked. Then you ranked News Feed badly. Over time, you learned how to do it better. All AI systems to date started with some non-AI thing to get a dataset that could allow you to power an AI thing that got better over time. Search worked that way. Initially, it was like Yahoo — just a list of pages.

These large language models are of a very different character. There’s a great deal more generality to them. I don’t have to get the exact, perfect training data anymore. We believe now that we have an AI that’s going to be valuable as the first cornerstone of these [Meta] devices.

Look at the Meta Ray Ban glasses. We believed this was going to be a great product with just camera video, live streaming, great music, good for calls. That’s it. We were pumped. Six months ago, we’re like, ‘We’ve got to get the assistant on it.’ Now, it’s the best feature of the glasses. Do you know how crazy that is to have a piece of hardware whose killer feature changes six months out? That does not happen.

For us, how we’re looking at the entire portfolio now is like, ‘Ok, the sequencing is backwards.’

Reuters/Carlos Barria

Q: But is the data gathered from the glasses and the Quest headset still valuable?

A: The most valuable data that’s coming from the glasses is how people are using it. That’s just a classic hotspot. Where are the opportunities in the product? The truth is, with these devices, we’ve designed them so that the content isn’t coming to us. It’s on your phone. So unless you submit a bug report, or even if you opt in to all the different ways that you can share data with us, it’s mostly telemetry.

Q: How do you think AI is going to come into play with the Quest?

A: The irony is some of the things that you really want to do on Quest actually don’t have great training sets. If you think about text, you have the entire internet. You think about photos, you have these huge libraries on Facebook, on Instagram. But there’s not a great big canonical library of 3D objects, let alone 3D objects that animate in four-dimensional space. That’s what you really want. We’re doing that work to try to make sure that we improve the modalities to include being able to export 3D things. So in some ways, mixed reality and virtual reality are harder, because you actually have this additional dimension that you’re responsible for.

On the flip side, there are obviously huge advantages in mixed reality and VR, where they have sensors that are always on. They’re always scanning and sensing the room. So there is certainly high potential there. Some of the most obvious use cases actually take a lot more work for us. We’re doing the research, we’re seeing some early promising results on 3D and 4D spaces.

For the rest of the conversation, including what Boz thinks of the open-source debate, read here. →

Semafor Stat

3,226

The number of child sexual exploitation images found in a major dataset used by Google, Stability AI, and other companies to train artificial intelligence models. Researchers from Stanford University found that LAION-5B contained 3,226 suspected instances of child sexual abuse, about a third of which they validated externally. The dataset is managed by LAION, a nonprofit organization that creates open-source machine learning tools. It has temporarily taken down the resource in response to the findings.

China Window

Google’s Gemini is having an identity crisis. When some users tried chatting with the new AI tool in Mandarin, it claimed instead to be Ernie, a competing model developed by the Chinese tech giant Baidu. “My founder is Robin Li (李彦宏), the co-founder and CEO of Baidu,” Gemini wrote to one user. “He is a Chinese entrepreneur and computer scientist.”

It’s not clear what might be causing the mixup. Some commentators speculated that Google may have trained Gemini on millions of web pages from the Chinese internet, where sharing Ernie chatlogs has become increasingly common. Earlier this month, Elon Musk’s Grok chatbot similarly told users that it was run by OpenAI, an issue that staff attributed to problems with its training data.

But there’s also the possibility that companies are using their competitors’ technology in more direct ways. TikTok’s Chinese parent ByteDance, for example, reportedly relied on OpenAI’s API to create its own large language model, violating OpenAI’s terms of service, according to The Verge. Regardless of the specific approach, building powerful AI tools requires ingesting much of the open web, and offerings from different firms are converging as they all feast on the same data supply.
