Apple Chooses Google Gemini for Major Siri Upgrade

The rare alliance will give Siri the brain it’s always needed
Vibrant digital illustration of an iPhone with the Apple logo connecting to a colorful Siri voice waveform and the Google logo, representing the new AI partnership.

It’s official: Google’s Gemini will be the brains behind the more personalized Siri that’s coming this year. While we’ve been hearing rumors of this move since August, the two behemoths released a joint statement today announcing a formal partnership — and it goes far beyond just Siri.

It’s not exactly a secret at this point that Apple has been struggling to make Siri live up to its potential. It’s rather sad, considering the head start that Apple squandered. Siri beat out Amazon’s and Google’s popular voice assistants by about three years, but it seems the company never quite figured out what to do with it after its two biggest champions, Steve Jobs and Scott Forstall, were no longer with the company.

Despite a spark or two of hope in 2019, Siri remained dumber than a bag of rocks, useful only for routine voice operations like setting timers, turning on lights, and checking the weather. Ask it anything more complicated, and you were likely to be sent to a web browser, at best.


When Apple unveiled Apple Intelligence during its 2024 Worldwide Developers Conference (WWDC), it promised that features like Writing Tools and Image Playground would soon be joined by a smarter Siri, making many believe that our long Siri nightmare might soon come to an end. What Apple showed off on stage was quite impressive, but it also turned out to be far from ready for prime time. Industry insiders disagree on whether the tag “vaporware” should apply here, but Bloomberg’s Mark Gurman was told by sources that “the company barely had a functional prototype.”

Although the smarter Siri was expected in iOS 18.4, Apple made a rare public announcement that it would be delayed into early 2026, saying it needed more time to get it right. However, Apple also made some foundational changes that went beyond the AI models themselves, changes that seemingly included shopping around for a little help.

Somewhere along the way, Apple realized that it had outplayed its hand and wasn’t going to be able to pull this off on its own — at least not in time to avoid being left in the dust by competitors. So, it started talking to the other big AI players: Anthropic, OpenAI, and Google. When the dust settled, it seemed to be leaning toward Google’s Gemini, and now it’s made its final decision.

After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users.

Joint statement from Google and Apple

Apple also shared the news with CNBC, but it appears to be nothing more than the same statement that Google published earlier today, and both companies declined to provide any further comment beyond the statement itself, which was also posted to X.

Not everyone is celebrating the union, however. Elon Musk took to X shortly after the announcement, calling the deal an “unreasonable concentration of power,” and noting that Google now effectively has its fingerprints on Android, Chrome, and the iPhone’s core intelligence. Of course, that’s likely just sour grapes; not only is Musk suing Apple over its OpenAI partnership, but from all the reports we’ve heard, Grok was never on Apple’s list — not too surprising as Musk’s creepy little chatbot doesn’t seem to be good at much more than generating nonconsensual deepfakes.

Is Apple Intelligence Sharing iOS Data With Google?

Even when the talks between Apple and Google were purely in the realm of rumor and speculation, one thing seemed clear: This isn’t going to be Gemini on your iPhone. There’s still an app for those who want that, but what Apple and Google are cooking up is more fundamental, and relies entirely on using Google’s AI models on the back-end.

The joint statement makes that clear, since it notes that “the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology” (emphasis ours). Apple’s side of the statement also adds that Google’s AI technology — it notably avoids using the word “Gemini” — will power “Apple Foundation Models.”

In other words, this is about using Google’s large language models and underlying code to build Apple language models. That’s an important distinction that goes beyond branding, since it also means that Apple won’t be compromising on privacy. It’s taking that just as seriously as it did in 2012 when Google wanted user data in exchange for map data, which ultimately led to Apple walking away from the deal and building Apple Maps.

While the partnership is deep, the data wall remains high. Unlike the Gemini app you download from the App Store, this integration uses Google’s models as a foundation for your data — not a destination.

| Data Type | How It’s Handled in This Partnership |
| --- | --- |
| Personal context | Stays on-device. Apple’s chips handle the “who, what, where” of your life. |
| Complex queries | Sent to Apple’s Private Cloud Compute. Google provides the model, but Apple provides the “vault.” |
| Model training | Both companies have confirmed that user data from this partnership will not be used to train future Google Gemini models. |
| Identity | Requests are anonymized; Google sees a prompt, not an Apple ID. |

This is still “Apple Intelligence” as far as Apple is concerned, and it will continue to run in much the same way. Google will provide the models and code, but Apple will retain control over them.

Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards.

Joint statement from Google and Apple

The other subtle point is that these models will extend beyond Apple’s voice assistant. The statement says they’ll be used for “a more personalized Siri coming this year,” but also strongly implies that they’ll power other “future Apple Intelligence features.”

We can only speculate on what those are, and whether Apple is talking about yet-to-be-introduced features or upgrades to existing things like Image Playground. It’s not hard to imagine the Nano Banana models — Google’s specialized, high-efficiency versions of Gemini — giving Apple’s AI image generation a massive upgrade, but it’s likely the two companies have even more ambitious plans. Apple has reportedly been working on an AI health coach for some time, and while there’s no word on how far along that is, there’s little doubt that Google will give it a much-needed shot in the arm, especially after ChatGPT unveiled its own AI health service this month.

It’s unclear when the first Google AI-powered features will begin showing up in Apple Intelligence, but if Apple is still on track to release its big Siri upgrade in iOS 26.4 this spring, we probably won’t have to wait long to see it. However, for now it’s business as usual for Apple Intelligence users, and we suspect that folks won’t notice many obvious changes even after Gemini technology gets fully baked in — beyond smarter and more useful responses, that is.

Frequently Asked Questions: The Apple & Google AI Deal

  1. Will my private Siri requests be used to train Google’s AI? No. Both companies have confirmed that any data processed through this partnership is stateless. This means your queries are anonymized and isolated within Apple’s Private Cloud Compute environment. Google provides the model architecture, but they never “see” the raw user data, and it is explicitly not used to train future versions of Gemini.
  2. When will I actually get the “Gemini-powered” Siri on my iPhone? The major Siri overhaul is currently slated for release with iOS 26.4. While Apple hasn’t given a specific calendar date, this version is widely expected to enter public beta in February 2026, with a final release in March or April 2026.
  3. What exactly is “Nano Banana”? Nano Banana was the viral internal codename for what Google officially calls Gemini 2.5 Flash Image. It is a specialized, high-efficiency model designed for lightning-fast image editing and generation. Apple is expected to use this specific technology to power its Image Playground and Genmoji features to ensure they remain snappy on mobile hardware.
