The News
Microsoft is moving full speed ahead on its AI efforts, with dozens of product announcements planned in coming months, according to people familiar with the matter, despite a backlash against its Bing Chat, whose responses have gotten, at times, a bit… creepy.
The software giant plans to integrate the AI models further into other products, potentially adding chat capabilities to Office 365 programs such as Outlook and Word.
Microsoft has instituted new safeguards and limitations after the chatbot-assisted search engine threatened to hack people, told journalist Ben Thompson that he was a “bad researcher,” and professed its love for Kevin Roose of The New York Times.
These instances are “hallucinations,” the term for when large language models lose the plot and generate fluent but false or nonsensical text, stitched together from patterns in what real people have posted on the internet. The hallucinations sound so human and convincing that they have prompted some people to call for government regulation of the technology and others to describe it as “scary.”
Reed’s view
The wacky hallucinations are a distraction from the real issues. The chatbots could be used for good or evil, and might even require regulation — but not because you can make them write you into your very own science fiction story.
At worst, the new misconceptions about Bing and OpenAI’s ChatGPT are amplifying the mythology, spread by some Silicon Valley technologists, that this advancement is on a path toward sentience, or “Artificial General Intelligence.”
The current AI models are impressive, but the technological breakthrough required to train a computer to think like a human hasn’t happened yet. It may never happen.
We should be having national and global conversations about how to deal with potential abuses of this technology, from its use to emotionally manipulate people to the question of whether it violates intellectual property laws.
The nefarious uses will probably involve more focused, behind-the-scenes efforts. For example, advertising companies and nation-states could more efficiently generate content meant to manipulate online audiences. NordVPN, a security provider, said in a recent report that hackers on the “dark web” have been discussing ways to leverage ChatGPT to craft phishing attacks and write malware.
Yes, people have taken great pains to elicit responses from these services that sound like lines from Fatal Attraction or Terminator (and may well have been drawn from them), but those responses are nothing more than a math program arranging letters based on context clues.
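To make that concrete, here is a minimal sketch of the loop these models run: score candidate next words given the recent context, then sample one. The two-word contexts, the probability table, and the toy vocabulary below are invented for illustration; real models compute these scores with a neural network over tens of thousands of tokens.

```python
import random

# Toy "language model": given the last two words of context, assign a
# probability to each candidate next word. Real models compute these
# scores with a neural network; this hand-written table is purely
# illustrative.
NEXT_WORD_PROBS = {
    ("I", "love"): {"you": 0.6, "pizza": 0.3, "nothing": 0.1},
    ("love", "you"): {".": 0.7, "forever": 0.2, "madly": 0.1},
    ("you", "forever"): {".": 1.0},
    ("you", "madly"): {".": 1.0},
}

def sample_next_word(context):
    """Pick the next word, weighted by the model's probabilities."""
    probs = NEXT_WORD_PROBS.get(tuple(context[-2:]), {".": 1.0})
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(prompt, max_words=10):
    words = prompt.split()
    for _ in range(max_words):
        next_word = sample_next_word(words)
        words.append(next_word)
        if next_word == ".":
            break
    return " ".join(words)

# Prints e.g. "I love you forever ." Fluent-sounding, but nothing is felt.
print(generate("I love"))
```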
These chatbots, despite being trained on much of the internet, can’t distinguish right answers from wrong ones. This widely noted limitation illustrates how far these models are from true “general intelligence.”
While powerful, ChatGPT and Bing’s AI are, at their core, a new way of organizing information on the internet. These chatbots can’t infect your computer with a virus, or publicly discredit you. A human would need to do that.
As guest columnists Russell Wald and Jennifer King argued in Semafor last week, it’s important we put this technology under a microscope to better understand its strengths, weaknesses, and risks.
Right now, a lot of the media coverage about AI chatbots is doing a bad job of framing the issue. That’s in part because of muscle memory developed in the wake of the 2016 election, when misinformation and disinformation became the focus of technology coverage.
In hindsight, the hysteria over that issue was overblown. It’s an even bigger mistake to turn chatbot hallucinations into the latest Big Tech panic.
Know More
The biggest and most world-changing uses for the latest AI models behind products like ChatGPT and Stable Diffusion will be inside businesses — not directly in the consumer market.
OpenAI, for instance, sells access to its models to companies, which use the underlying technology to power products they sell to other businesses.
Jasper, which sells AI tools for the marketing industry, has already taken off. And computer programmers are using AI tools like Replit to build software in record time.
Microsoft has spent billions of dollars building out its Azure servers with custom architecture meant to run AI models, not because it wants to build a better search engine, but because CEO Satya Nadella sees an opportunity to be the go-to place for businesses to spin up AI-enabled services.
When companies use these models, they’ll likely add additional “layers” atop the software. For instance, Walmart could hypothetically use its vast collection of customer data to create a chatbot limited to questions relevant to its business, which may ensure a level of accuracy that’s not possible with chatbots allowed to answer any question.
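As a sketch of what such a layer might look like, here is a hypothetical guardrail that only forwards questions inside an approved domain to the underlying model. The topic list, the call_model stand-in, and the refusal message are all invented; real deployments rely on retrieval over company data, classifiers, and fine-tuning rather than a keyword check.

```python
# Hypothetical guardrail "layer" a retailer might wrap around a
# general-purpose model: forward only on-topic questions, refuse the rest.
ALLOWED_TOPICS = {"order", "shipping", "return", "refund", "store hours"}

def call_model(prompt: str) -> str:
    """Stand-in for a real API call to a large language model."""
    return f"[model answer about: {prompt}]"

def scoped_chatbot(question: str) -> str:
    q = question.lower()
    if any(topic in q for topic in ALLOWED_TOPICS):
        # Constrain the model with instructions alongside the user question.
        return call_model(
            "Answer only from verified store policy. Question: " + question
        )
    return "Sorry, I can only help with orders, shipping, and returns."

print(scoped_chatbot("Where is my shipping update?"))  # forwarded to the model
print(scoped_chatbot("Who will win the election?"))    # refused
```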
General-purpose chatbots may never be accurate or reliable enough to be trusted outright, and users will probably always have to check their responses against search results.
The best consumer use case for AI is likely as a productivity tool. Incorporated into email, word processors, and communication tools, it will work like a search engine for your life. Instead of hunting for an old email, you’ll simply describe it and ask an AI to find it for you. Puzzling over the right spreadsheet formula will be a thing of the past.
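To illustrate the shape of that feature (not any vendor’s actual product), here is a toy version of “describe it and find it”: rank stored messages by how many words they share with a plain-English description. A real assistant would use a language model and semantic embeddings rather than word overlap.

```python
# Toy "describe it and find it" email search: rank messages by word
# overlap with a plain-English description. Real assistants would use
# language-model embeddings; this only shows the shape of the feature.
EMAILS = [
    "Your flight to Denver is confirmed for March 3rd",
    "Quarterly budget spreadsheet attached, please review",
    "Reminder: dentist appointment on Friday at 2pm",
]

def find_email(description: str) -> str:
    query = set(description.lower().split())
    return max(EMAILS, key=lambda e: len(query & set(e.lower().split())))

print(find_email("that message about my flight to denver"))
# -> "Your flight to Denver is confirmed for March 3rd"
```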
These models will probably never be a good conversation partner you can count on for emotional support. That’s another job still reserved for humans.
Room for Disagreement
Computer scientist Timnit Gebru, founder and executive director at the Distributed AI Research Institute, argues AI chatbots should not be used for internet searches.
The promise that an AI chatbot can become so intelligent that it can answer all questions, including medical ones, is “dystopian,” she said.
Instead, the technology should be used for “well-scoped, well-defined products used for a specific thing.”
The View From China
As Louise recently explained, one reason these newer, more advanced AI chatbots were developed first in the U.S. rather than in China, where chatbots have been popular for years, is censorship.
Bing’s chatbot would never have been released in China if there were any chance it might say something it shouldn’t.
By taking that reputational risk, Microsoft has gathered invaluable data on how to control its AI, and it can use that feedback to make further advances.
Notable
- “Whatever you are looking for – whatever you desire – they will provide,” wrote Dr. Terry Sejnowski of the University of California, San Diego, in this New York Times article about why chatbots can say the darndest things.