Semafor Technology
January 19, 2024
 
Reed Albergotti

Hi, and welcome back to Semafor Tech.

I’m writing this on Thursday night in Davos, hunched in the corner of a party with my laptop. It’s been a hectic week of moderating panels and meeting new people. While the World Economic Forum is all about bringing global leaders together, it really felt like a tech conference to me, but with the decidedly un-techy, color-coded badges delineating your WEF social status.

In other words, this is a place of stark contrasts everywhere you look: men wearing expensive business suits and grubby snow boots; everyday people riding the train from Zurich to hit the slopes while wealthy attendees pay no attention to the majestic peaks surrounding them; a panel on building diversity in AI that wouldn't let you in if you had the wrong color badge.

The big topic this year was, of course, artificial intelligence. Specifically, the world's political and business elite wanted to talk about how this new tool will affect the outcome of this year's batch of highly consequential elections around the globe. The issue didn't seem to get resolved, and if it did, I'm not sure the Davos set reached the right conclusions. More on that below.

Move Fast/Break Things


➚ MOVE FAST: Selling chips. The AI boom spurred Taiwan’s TSMC to boost its 2024 revenue forecast by more than 20%. CEO C.C. Wei said customer demand for high-end semiconductors is so strong that the company doesn’t have enough capacity to meet it — comments that lifted the whole sector. But Wei added that the mature node market is still in a lull.

➘ BREAK THINGS: Producing chips. A shortage of skilled workers and uncertainty about U.S. government subsidies are holding up a new TSMC plant in Arizona. Last month, a Korean news outlet reported that a Samsung factory in Texas was also facing delays, though the company said it still plans to be operational by the end of this year.

Artificial Flavor

A Japanese author disclosed she used generative AI to write her award-winning novel, a sign of the technology’s growing impact on culture. Rie Kudan, 33, announced at the Akutagawa Prize ceremony this week that about 5% of her book Tokyo-to Dojo-to (“Sympathy Tower Tokyo”) included verbatim sentences generated by ChatGPT. Last year, a Chinese professor also used AI to write a science fiction novel in just three hours, and went on to win a national competition. Kudan said she turned to AI to help her find “soft and fuzzy words” that embodied the amorphous themes about justice present throughout her novel, which the prize committee called “flawless.”

— Helen Li

Obsessions

On the last day of the World Economic Forum, I have a takeaway on AI: The fear world leaders have about the threat posed by the technology in elections scares me more than AI itself.

The WEF said the most severe short-term risk to the world was misinformation and disinformation amplified by artificial intelligence.

Like any new tool or innovation, AI obviously carries risks. I just don’t think we have the slightest idea yet what those risks are.

It's debatable how much of an impact misinformation and disinformation have on the outcome of elections. And while new generative AI tools can produce such content at higher volumes, volume alone doesn't amplify its reach. Plus, our communication channels are already being stuffed with bad information, so more of it likely makes only a marginal difference.

You might argue that it can’t hurt to be overly cautious about this new technology, and that we need to regulate it now, before it’s too late.

But there are some dangers to overreacting to the threat of AI-powered disinformation.

The censorship machine that ramped up after Donald Trump’s election was well-meaning. But in a lot of ways, it was counterproductive, fueling distrust among conservatives when they were banned from social media platforms for expressing skepticism about the Covid vaccine, for instance.

The fear of AI’s role in the upcoming elections sounds a lot like the censorship machine revving its engines, putting pressure on ill-equipped platforms to again become arbiters of truth.

[Photo: Reuters/Denis Balibouse]

The other danger may be more long term. What happens if AI doesn’t play a big role in the 2024 elections? If the technology doesn’t progress as rapidly as some predict, it may be harder to convince many people that it poses a real threat.

But the development of AI won't stop. It will keep getting better and more powerful until, in several years, it is ubiquitous, woven into everything we use, and has revolutionized every industry.

It’s at that point that the risks of AI will become apparent, and they likely won’t be the same ones we fear today. Will we be less prepared because we cried wolf in 2024 and then took our eye off the ball? I think that’s likely.

Years before Facebook’s Cambridge Analytica scandal, the big criticism of the platform and other social networks was that they violated the privacy of their users. For a moment, there was an uproar about some of Facebook’s data-gathering tactics. But nothing bad seemed to happen to those users, and eventually, the criticism died away.

What most missed was the way ad targeting, combined with misinformation, could exploit the company’s data gathering techniques in an attempt to manipulate people. That was largely ignored until it was too late. We could see a replay when it comes to AI.

Semafor Stat

The number of Nvidia H100 graphics cards that Meta will have by the end of 2024, CEO Mark Zuckerberg said in an Instagram Reels post. It’s part of the company’s massive AI push, which includes research to work toward artificial general intelligence.

Watchdogs

While the U.S. may be ahead in developing AI technology, other parts of the world have made more advancements in regulating it. The latest example is draft guidelines from China's Industry Ministry calling for dozens of national standards by 2026. Meanwhile, MIT Technology Review notes that one thing to watch for this year is whether Beijing follows Brussels in coming up with a comprehensive AI act.

Quotable
“What is the difference between a man who exists and a machine that functions? This is perhaps the greatest question of these times.”

— Friar Paolo Benanti in an interview with the Associated Press. He is an advisor to Pope Francis on AI and tech ethics.

Hot On Semafor
  • Mike Johnson talks like a lawyer. That might be a problem.
  • Kenyan president’s war with judges emboldens his allies to defy courts.
  • Path forward for US aid to Israel grows more complicated.