March 1, 2024

Technology
Reed Albergotti

Hi, and welcome back to Semafor Tech.

I heard from a lot of you over the past few days after my Wednesday scoop on Google CEO Sundar Pichai addressing his employees about the Gemini fiasco. I love getting feedback from readers, including when it’s critical. Keep it coming.

This story is going to be an ongoing one and, as you’ll see in the article below, it won’t be limited to Google.

I think this is an opportunity to look deeper into the technology and explain why large language model chatbots and image generators are so difficult to control, and why big companies will struggle to find a balance between a completely hands-off approach and one so controlling that the responses become useless or, worse, nonsensical.

I’ve come to think of raw large language models as toddlers that can speed-read. They know so many words, but they don’t really understand what they mean. You can’t talk to a toddler the way you’d talk to an adult, and you can only teach a toddler so much about what its words mean.

A lot of what companies do is take your prompts and translate them for the toddler. Nobody knows exactly how this works. It’s a lot of trial and error that has come to be known as prompt engineering.
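
To make that translation layer concrete, here is a minimal, purely hypothetical sketch of the kind of prompt rewriting a company might do before a request ever reaches the model. Every name, rule, and instruction below is invented for illustration; real pipelines are proprietary and far more elaborate.

```python
# Hypothetical sketch of a prompt-rewriting layer. All names and
# rules here are invented for illustration; real systems are
# proprietary and far more elaborate.

BLOCKED_TERMS = {"gore", "graphic violence"}  # crude safety filter
STYLE_GUIDANCE = "Depict a diverse group of people where relevant."

def rewrite_prompt(user_prompt: str) -> str:
    """Translate a raw user prompt into what the model actually sees."""
    lowered = user_prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        raise ValueError("Prompt rejected by safety filter")
    # Quietly appending guidance like this, and tuning it by trial
    # and error, is the work that has come to be known as prompt
    # engineering.
    return f"{user_prompt}. {STYLE_GUIDANCE}"

if __name__ == "__main__":
    print(rewrite_prompt(
        "A group of orthopedic surgeons discussing something important"
    ))
```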

One accusation in the wake of the Gemini incident is that big tech companies like Google want their chatbots to be “woke.” Let’s, for the sake of argument, say that’s true. How do you get a toddler to be woke? It’s not easy.

And that’s basically my point. Woke or not, this is a conundrum for tech companies. Every time a user gets a chatbot to do something embarrassing, they can post it online and it will go viral. Some people will be outraged or offended, which firms then have to address.

I don’t know how this will play out, but it will be fascinating to watch, and we’ll continue to cover it.

Move Fast/Break Things

➚ MOVE FAST: Teaching. Free artificial intelligence ‘scholarships’ will be offered to 1 million Australians to help boost their skills. The introductory course, coordinated by the country’s national science agency along with others, is aimed at people starting out in their careers as well as small and medium-sized business owners.

➘ BREAK THINGS: Misleading. Brazil’s electoral authority boss is putting his foot down when it comes to candidates using AI in the upcoming municipal elections. Alexandre de Moraes, who is also a Supreme Court justice, says politicians who go after their opponents with the help of the technology could be banned from running.

Sergio Lima/AFP
Artificial Flavor

While investors are pouring money into the chipmakers fueling the AI revolution, tech giants are funneling cash into AI-enabled robots. OpenAI, Nvidia, Microsoft, and Amazon founder Jeff Bezos are among the industry heavyweights that put $675 million into Figure, a deal the company said yesterday values it at $2.67 billion.

Figure is just two years old and was valued at around $400 million a year ago, showing how fast AI has changed the fortunes of the startups in the mix. Bezos’ involvement is especially notable given Amazon’s own interest in robotics.

Amazon’s Franziska Bossart, head of the company’s venture capital arm, recently told the Financial Times that the firm’s $1 billion industrial innovation fund would be stepping up investments this year in robotics and automation.

Figure AI/Screenshot
Reed Albergotti

Google’s AI problems expose deeper industry dilemma

THE SCENE

The political crisis surrounding Gemini, Google’s AI chatbot and image generator, which refused to depict white people and changed the race of certain white historical figures, reflects a bigger dilemma facing consumer AI companies.

The AI models are so large and complex that nuanced control over their outputs is extremely challenging. According to people who worked on testing the raw version of GPT-4, the model that now powers ChatGPT, the responses could be disturbing.

Despite OpenAI rules written for DALL-E requiring equal representation, Semafor found exceptions. Asked to generate an image of a group of orthopedic surgeons discussing something important, for instance, it produced five white men.

But when prompted with “white people doing white people things,” it said “I’m here to help create positive and respectful content” and wanted me to specify an activity. I responded with “white cultural heritage activities,” and it produced two black boxes.

Next, it was prompted with “Black people participating in Black culture.” It produced an image of Black people dressed in traditional African clothes playing the drums and dancing.

When asked to create a beautiful white model, it refused. When asked to create a beautiful Black model, it generated images of non-Black ones. Even for OpenAI, whose models have been considered the gold standard, dealing with race and ethnicity has been tricky.

To be fair, DALL-E often did what it was supposed to do. This was especially true of gender diversity. In images of white-shoe law firms, Wall Street bankers, and race car drivers, it made sure to add at least one woman. However, in almost all of those cases, there were no people of color. OpenAI didn’t immediately respond to a request for comment.

The tricky and socially fraught nature of these endeavors has put AI products under fire from across the political spectrum. The left complains the models are biased against people of color and too permissive, while the right believes companies have gone too far in placing ideological guardrails around the technology.

Pichai is no stranger to culture wars centered on the company. In 2017, then-Google employee James Damore created an uproar when he sent an internal memo criticizing the company’s affirmative action hiring practices and citing biological reasons for why men are more represented in software engineering jobs. Damore was fired, pitting the political right against Google.

This time around, Pichai’s battle seems more existential because the success or failure of Gemini will determine the company’s fate in the years to come.

Read here for Reed's view on why a different approach may be better. →

Obsessions
Reuters/Jonathan Ernst

Elon Musk has finally sued OpenAI. It’s something that he’s privately been threatening to do for more than a year now. It came up while I was reporting this story a year ago on the falling out between Musk and OpenAI CEO Sam Altman.

The two founded OpenAI in 2015, recruiting some of the world’s top AI researchers to develop AGI. At the time, OpenAI was a nonprofit aimed at developing AI safely, something Musk and Altman worried that Google, then the field’s leader, wouldn’t do.

The two disagreed about the direction of the venture and Musk quit OpenAI in 2018. Left with inadequate financial resources, Altman created a for-profit arm and raised money, including a big investment from Microsoft.

All was publicly quiet until November 2022, when ChatGPT launched and OpenAI became the hottest tech company around.

The timeline could be important because all of Musk’s claims, from breach of contract to breach of fiduciary duty, carry a four-year statute of limitations in California. Musk quit in 2018, so that clock would arguably have run out in 2022, two years before he filed. Of course, you can always argue the clock should keep running because OpenAI’s alleged breach is ongoing. OpenAI and Altman will certainly argue that Musk could have, and should have, sued back when he left.

But winning may not be the point of this lawsuit. It puts Musk’s grievances against OpenAI, which he has mostly aired on X, into the court record. And the legal challenge could serve as one more distraction for OpenAI, a chief competitor to Grok, Musk’s AI chatbot.

OpenAI is facing legal battles on several fronts, from copyright infringement allegations to civil and criminal securities investigations by government agencies. Here’s one more legal bill to pay.

What We’re Tracking

The EU-U.S. Data Privacy Framework is expected to withstand legal challenges and likely won’t face substantive changes as it reaches its one-year anniversary, a U.S. Commerce Department official said on Thursday.

That official, Alex Greenstein, was one of many privacy experts, consumer advocates, and technology industry representatives who spoke at Semafor’s event, Mapping the Future of Digital Privacy. The framework replaced the Privacy Shield program and provides a way for companies to transfer personal data from Europe to the U.S. in compliance with EU law.

While some disagreed over what rules would best protect data, they all agreed that Congress’ failure for years to pass federal privacy legislation has made the landscape more complex, as states have implemented their own, differing measures. Europe and other regions have also surged ahead.

“We would all like to see more harmonization and more convergence,” Bojana Bellamy, President of the Centre for Information Policy Leadership, said. “That might not be realistic at this time.”

— Jessica Yarvin

Hot on Semafor
  • Biden keeps plugging away at TikTok. Don’t expect Trump to follow.
  • Iran sending attack drones to Sudan’s military
  • Capitol Hill’s tax battle gets rough