Semafor Signals

Biden robocalls point to power of AI to undermine elections

Insights from WIRED, Public Citizen, and Rest of World

Jan 24, 2024, 6:08pm EST
Photo: d3sign via Getty Images

The News

New Hampshire’s attorney general is investigating a wave of AI-generated robocalls that mimicked President Joe Biden’s voice, in what the office called an unlawful attempt to disrupt the state’s primary election and suppress voters. The calls discouraged voters from going to the polls, telling them it was “important that you save your vote for the November election.”

The robocalls highlight the growing influence of AI on elections, leaving social media platforms and artificial intelligence firms scrambling to address global disinformation campaigns.


SIGNALS


AI election deepfakes are still poorly regulated

Sources: Public Citizen, Brennan Center for Justice, TIME

Congress has yet to pass a national bill regulating the artificial intelligence industry, leaving it to the states to combat the spread of deepfakes: digital images, video, or audio that convincingly swap in someone else’s face or voice. Almost 30 states have plans to regulate deepfakes in elections, where they could be used to mislead voters, undermine trust in the electoral process, and even alter the outcome of a race, according to one advocacy group. Other states, including Texas and California, have already enacted laws allowing civil action over political deepfakes, while Michigan requires campaigns to disclose the use of AI in political ads. However, as one digital technology researcher told TIME magazine, disclosure rules may be of limited use because “the bad actors who want to abuse generative AI will probably not disclose it.”


Tech companies are failing to address election misinformation

Sources: Foreign Policy, The Guardian, CNN

YouTube stopped removing content that spread false claims about the 2020 U.S. election last June, while Instagram and Facebook began allowing political ads that questioned that vote’s outcome in 2022. Sweeping layoffs of content moderators and trust-and-safety teams at X and Meta have left the platforms “ripe for abuse and vulnerable to bad actors,” a study by the media watchdog Free Press found. The situation is even worse in elections outside Western democracies, where Silicon Valley’s social media giants have “major blind spots in local languages and context, making misinformation and hate speech not only more pervasive but also more dangerous,” Foreign Policy wrote. And although ChatGPT maker OpenAI has banned users from building applications for political campaigns, The Washington Post found that those rules were poorly enforced.

Politicians use AI confusion to “destabilize the concept of truth”

Sources: The Washington Post, Rest of World, WIRED

The spread of artificial intelligence is sowing confusion across the political landscape, offering easy plausible deniability to people in power, one digital misinformation analyst told The Washington Post: genuine footage of murky dealings or a recording of a politician criticizing an opponent can now simply be dismissed as fake. Leaked recordings of alleged nefarious conduct by politicians in Taiwan and India have been met with denials and accusations of machine manipulation, showing how AI can “destabilize the concept of truth itself,” the analyst said. Fact-checkers in Poland and Slovakia told WIRED that it is “very hard to react” quickly to AI-generated disinformation in the final hours before a vote, and that voice cloning is harder to identify than video deepfakes. After a lawmaker in the Indian state of Tamil Nadu denied the veracity of two audio recordings that had sparked controversy there, Rest of World asked AI analysts to examine the clips; they were divided on whether the audio was authentic.
