The News
Google’s Android phones are about to get an AI upgrade.
The company’s flagship AI model, Gemini, will replace Google Assistant as the default service on Android phones in the coming weeks, the company announced Tuesday.
The move marks the beginning of the generative AI era for smartphone functionality, with Gemini gaining the ability to connect to core smartphone services like utilities and the calendar — a feature Google is calling extensions.
For instance, instead of opening the calendar app to make an appointment, users will soon be able to snap a photo of an invitation and ask Gemini to create a calendar invite. Or they can ask YouTube Music to create a playlist by describing the style they want.
In time, the company told Semafor, it plans to roll out extensions to third-party apps and services like Spotify. Gemini will also be available on iPhone through the App Store, albeit with less ability to control the device.
Google also announced Tuesday a new service called Gemini Live, which allows users to have a free-flowing, real-time conversation with the chatbot, as if it were a person.
Sissie Hsiao, general manager for Gemini experiences, told Semafor she’s been using Gemini Live while driving, having conversations about the Olympics and asking it to make her customized music playlists — without having to take her hands off the wheel or eyes off the road.
Hsiao said the new technology is allowing Google to do what its users have been asking Google Assistant to do for years: Understand them in more natural language and manage day-to-day tasks.
“They expect an assistant to be able to take off their plates many of these day-to-day tasks,” she said.
Know More
The way Gemini accomplishes these new tasks on the back end is by writing code, Hsiao said. When users ask Gemini to make changes to their calendar, the AI model will write lines of code that will automatically add appointments and events to their phones.
Using code to complete tasks is a step toward AI models acting as “agents” that have the ability to complete multi-step tasks on their own without human hand-holding along the way.
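The pattern Hsiao describes can be sketched very loosely in code. In the hypothetical example below, the `create_event` helper and its event format are invented for illustration; Google has not published what Gemini actually generates or which internal APIs it calls:

```python
from datetime import datetime

# Illustrative sketch of "the model writes code to complete a task":
# for a request like "add a dentist appointment on Aug 20 at 3pm",
# the model might emit and run a snippet along these lines.
# `create_event` and the event format are assumptions, not Google's API.
def create_event(title: str, start: datetime, duration_minutes: int = 60) -> dict:
    """Build a calendar-event record a device calendar service could ingest."""
    return {
        "title": title,
        "start": start.isoformat(),
        "duration_minutes": duration_minutes,
    }

# The model-generated call for the user's request:
event = create_event("Dentist appointment", datetime(2024, 8, 20, 15, 0))
print(event)
```

The point of generating code rather than a fixed response is that the same mechanism composes: a multi-step request could produce several such calls chained together, which is what makes the "agent" framing apt.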
Unlike Apple's approach, Gemini will operate in the cloud, Google told Semafor, with commands sent from the device to powerful data centers capable of running the most advanced AI models.
Apple plans to run its AI models directly on the iPhone and other devices, which it said better protects user privacy and reduces latency.
Apple hasn't yet said when it will roll out its AI services.
Reed’s view
This move by Google, while not unexpected, is a big deal. We spend a huge chunk of our lives on smartphones, and it’s where AI can have perhaps the biggest impact on the lives of consumers.
While smartphones have become powerful computers in their own right, they are still limited in their utility. Services like Gemini could unlock the full potential of your smartphone.
Let’s say you meet someone at a networking event and get their business card. Instead of saving the card and possibly losing it, your AI assistant can load it into your contacts with a quick photo, categorize how you know them and draft an email following up on the conversation the next day.
Or, let’s say you use behind-the-times expense software (no comment on whether this one is autobiographical). Just take a photo of your receipt and have your AI assistant add it to the proper expense report.
Even something as simple as adding things to your to-do list on the fly would be hugely helpful.
But as Google and Apple roll out this functionality, we should keep in mind the move isn’t without risk. There’s a high chance of embarrassment as these generative AI assistants are bound to screw up at times.
Notable
- Apple announced its own AI initiatives, dubbed Apple Intelligence, in June. In many ways, Apple and Google are in a new smartphone race to see who can best deliver AI features. Here’s a comprehensive list of everything Apple announced, but hasn’t yet launched.