Apple and Google are reportedly in cahoots to integrate features from Google’s Gemini generative AI service into iOS. Bloomberg broke the news, which The New York Times later corroborated. If the deal pans out, it will be a huge collaboration between two tech giants that have long duked it out in hardware and software.
It also raises lots of questions about how Gemini would function on Apple’s devices—and which company would remain in control. Neither Apple nor Google has publicly addressed the news, and neither company responded to requests for comment before this article was published.
There’s also the possibility that the deal could fall through, seeing as how the hype around such a collaboration is drumming up some unwanted attention. “In the past, this leak would have killed the deal,” says Michael Gartenberg, a technology analyst and former director of marketing at Apple. “The first rule of doing a deal with Apple is don’t talk about Apple.”
But in this case, Gartenberg says, it’s highly likely the deal will in fact pan out. For one, Apple needs it to happen. With nearly all of the most breathless tech innovation of the past year and a half centered on AI, Apple needs to prove that it’s in the game, too. Not to mention that Google has announced it will soon bring its on-device AI model, Gemini Nano, to the Pixel 8, a signal that mobile AI is about to take off.
Apple has trailed the other big gen-AI players like OpenAI, Microsoft, and Google. The company has big plans for its own internal large language models, but whatever tools it’s cooking up are not yet ready to be released into the world. That slowness, Gartenberg says, leaves Apple looking as though it has been caught off guard by the broader generative AI movement.
“The competition is fierce,” says Patrick Moorhead, founder and principal analyst of Moor Insights & Strategy. “You’ve got all of Silicon Valley competing for this hardcore talent, and Apple missed this one.”
There’s a ticking clock putting pressure on the company, too. WWDC, Apple’s big developer conference and product showcase, which usually takes place in June, is looming. As it approaches, the simmering expectations about the company’s generative AI strategy will reach a boil.
“An Apple response of just focusing on face computers or adding more widgets is going to feel fairly hollow,” Gartenberg says, because when it comes to AI, “Apple really needs to have something it can show by June 2024. There is a deadline here for people looking at Apple and saying, what is your story?”
Apple clearly feels that pressure. It recently scuttled its self-driving car project to refocus those resources on its internal generative AI efforts. And now it appears ready to partner with Google to bring new AI capabilities to its most popular device.
So, assuming the deal does go through, what might Gemini look like on the iPhone?
First off, Gartenberg says it will likely manifest with a distinctly un-Apple label.
“It would probably be something Apple couldn’t hide under its own brand,” he says. “Perhaps it would be a setting where you could select your assistant, where it could be Siri classic or Siri the sequel. And if I’m Google, I’m going to hold out for some kind of branding on this.”
He points out that Google Search is the default search engine on iOS today, and it isn’t rebranded as an Apple service there. Any AI features powered by Gemini would probably carry similarly prominent Google branding, especially at a time when Google is so motivated to show off its AI chops.
Apple will also likely keep the focus on its own ambitions. Siri, the occasionally helpful and much-maligned voice assistant, has long lagged behind other digital assistants. Don’t call it a glow-up, but Apple will likely look to Gemini-infused AI advancements to breathe new life into its floundering digital helper.
“I think that they will double down on Siri and be like, ‘This is the Siri we had envisioned when we introduced it 10 years ago,’” Moorhead says. “Essentially, it’s going to do the same thing, with a higher degree of value. It’ll be something that actually works.”
This juiced-up Super Siri could become a full-fledged chatbot, with integrated conversational AI that can peer deep into your life. It’s likely to power real-time language translation, however fraught that may prove. Apple could also tap Gemini for advanced photo and video editing, such as swapping out backgrounds, combining multiple shots to get everyone’s face just right, or reworking images more extensively with AI-powered editing tools.
Image generation will probably be on the table too, along the lines of what DALL-E or Midjourney can produce. Moorhead suggests Apple could even build this kind of feature into Siri, letting you use a voice command to ask the digital assistant to “make that background blue” or to “make this picture a sunny day,” then see the results right there in your photo library.
One big feature that Moorhead says is expected on AI-powered phones across the board—not just iPhones, but Android phones too—is enhanced AI snapshots of your life. The idea here is that on-device AI could make a record of everything happening on your phone throughout the day, then compile all that information and keep it at the ready to be recalled later.
“The runaway hit is going to be snapshots,” Moorhead says. “For people like me who don’t remember anything and have to write everything down, this is going to be great.”
These are, of course, all features that companies like Google and Samsung have touted before, or are at least already working on. But Apple is Apple, and while it is often not the first company to bring an innovation to market, it has a way of making its execution of an idea more enticing or easier to use—even when it’s forced to incorporate another company’s technology.
“There’s an opportunity here for Apple to talk about how the new generation of artificial intelligence meets Apple and Siri, and produces something better,” Gartenberg says. “It’s not going to be enough for them to just deliver the basic generative AI stuff. They’ve got to be able to say they’ve taken the Google stuff and are actually going beyond that.”