On the last day of the World Economic Forum, I have a takeaway on AI: The fear world leaders have about the threat the technology poses to elections scares me more than AI itself. The WEF named misinformation and disinformation amplified by artificial intelligence the most severe short-term risk to the world.

Like any new tool or innovation, AI obviously carries risks. I just don’t think we have the slightest idea yet what those risks are. It’s debatable how much impact misinformation and disinformation actually have on the outcome of elections. And while new generative AI tools can churn out such content at higher volumes, volume alone doesn’t amplify its reach. Our communication channels are already stuffed with bad information, so more of it likely makes only a marginal difference.

You might argue that it can’t hurt to be overly cautious about this new technology, and that we need to regulate it now, before it’s too late. But there are dangers to overreacting to the threat of AI-powered disinformation. The censorship machine that ramped up after Donald Trump’s election was well-meaning. But in many ways it was counterproductive, fueling distrust among conservatives who were banned from social media platforms for expressing skepticism about the Covid vaccine, for instance. The fear of AI’s role in the upcoming elections sounds a lot like that censorship machine revving its engines, pressuring ill-equipped platforms to again become arbiters of truth.

The other danger may be longer term. What happens if AI doesn’t play a big role in the 2024 elections? If the technology doesn’t progress as rapidly as some predict, it may be harder to convince people that it poses a real threat. But the development of AI won’t stop. It will continue to get better and more powerful until, in several years, it becomes ubiquitous, woven into everything we use, and has revolutionized every industry.
It’s at that point that the risks of AI will become apparent, and they likely won’t be the same ones we fear today. Will we be less prepared because we cried wolf in 2024 and then took our eye off the ball? I think that’s likely.

Years before Facebook’s Cambridge Analytica scandal, the big criticism of the platform and other social networks was that they violated the privacy of their users. For a moment, there was an uproar over some of Facebook’s data-gathering tactics. But nothing bad seemed to happen to those users, and eventually the criticism died away. What most missed was the way ad targeting, combined with misinformation, could exploit the company’s data-gathering techniques in an attempt to manipulate people. That was largely ignored until it was too late. We could see a replay when it comes to AI.