The News
It’s the most interesting tech scandal in recent memory: Actress Scarlett Johansson accused OpenAI of deliberately — and creepily — copying her voice to make ChatGPT sound like her AI character in the movie Her.
OpenAI denied the accusation and has paused the feature, but the episode has sparked widespread outrage, with the union representing Hollywood actors and other artists urging Congress on Tuesday to pass legislation to protect people like Johansson.
Reed’s view
The bigger truth of this whole ScarJo/OpenAI fight, overshadowed by the drama, is that we've finally identified the central front in the coming political, cultural, and financial war over AI.
As AI goes “multimodal,” ingesting and producing text, video, and audio, imitations of popular artists of all kinds will come pouring out, pitting those artists against some of the biggest tech juggernauts. This was already happening, but the Johansson mishap brings it into focus in a new way.
This resolves a period of public and political confusion that began with the release of ChatGPT in late 2022. That launch prompted a collective freak-out about existential risk: murderous robots, machines killing humans to make paper clips, computers spraying chemical weapons. The more prosaic legal questions about intellectual property stayed in the background, even as big media companies and book authors opened what is likely to be years of litigation.
The Johansson situation, on the other hand, was injected directly into the zeitgeist, and it has shoved aside questions of existential risk. This will be a focusing moment for opinion leaders and regulators, and marks the beginning of the next chapter of AI — in which the industry will face a new, energetic attempt to rein it in.
The episode will breathe life into the more than 420 AI bills being proposed in state legislatures in the US alone. Elected officials will surely use the Johansson case to illustrate their arguments. Some of those bills directly concern the replication of real people using AI. And every photo, video, and piece of audio that AI produces is the result of training on real people's work and likenesses. It may be difficult for AI companies to ensure their outputs aren't copies of someone's work.
These concrete moves lack the drama of the hazy term “AI safety,” whose definition has expanded to include everything from the threat of human extinction to the diversity of AI-generated images.
The funny thing about this mess is that the core issue only tangentially concerns AI. The voice in question was based on a real actress, and the flirty, Her-like voice demonstrated last week was pulled before the public could use it. It seems unlikely there will even be a lawsuit here, though one is always possible. The fight is about a sense of right and wrong, and that kind of grievance always resonates more than litigation.
The facts are as simple as a playground spat. OpenAI asked Johansson for permission and she said no. OpenAI did it anyway. (It denies doing so on purpose.) And if this can happen to the onetime highest-paid actress in the world, what chance does anyone else stand against the juggernaut of AI?
Now, anyone who feels they’ve been wronged by AI in some way has a rallying cry: I am ScarJo.
Room for Disagreement
Google “what do people think about AI” and you will turn up hundreds of news articles about people's fear of AI and its impact on society.
But polls show that, in the end, people are using this technology at work and in other aspects of their lives. The adoption curve is steeper than the internet's was.
ScarJo controversy or not, useful technology finds a way to make inroads, and people will use it.