Apple is betting its AI future on Gemini. Here’s how it can redesign the iPhone for you

One of the biggest announcements in tech – made between two of the biggest companies in the world – was summed up in a joint statement of fewer than a hundred words. Apple announced that Gemini will power the rebirth of the Siri assistant and the framework for AI experiences on iPhones and Macs.

“These models will help support future Apple Intelligence features, including a more personalized Siri coming this year,” the company said. This is a big win for Google, great news for Apple device users, and a tacit admission that Apple hasn’t been able to keep pace in the AI race with Google, Meta, or OpenAI.

The writing has been on the wall for some time. Apple reportedly once tried using Anthropic’s Claude models and OpenAI’s GPT models to power Siri. But the company ultimately chose Google, which is a huge endorsement of Gemini’s capabilities. Let’s break down what’s likely to come next for the millions of iPhone users like you and me.

So, uh, privacy?

There is a major dilemma with AI that is hard to ignore. AI chatbots are penetrating our lives more deeply than social media ever did. They have access to our emails, calendars, photo galleries, files and, of course, our daily thoughts. Experts are already grappling with the growing problem of people forming deep emotional bonds with AI.

And that’s not all. Every time we invoke an AI chatbot, data is sent to a company’s server for processing. In some cases it is retained for model training or security review, and you cannot always disable that. The solution? On-device AI. Gemini Nano, for example, runs entirely on the local chip of your phone or PC.

No data ever leaves your phone, but on-device models are slower and far less capable. For media-heavy or otherwise demanding tasks, cloud processing is unavoidable. So are you comfortable with that now that Google – given its track record with user data – is powering AI experiences on your iPhone and Mac? Well, Apple already has a solution, and it has been explicit about privacy now that Gemini is in the picture.

“Apple Intelligence will continue to run on Apple devices and Private Cloud Compute while maintaining Apple’s industry-leading data protection standards,” the company says. This means your data and AI interactions are only routed through Private Cloud Compute (PCC) servers built on custom Apple chips and the company’s own security-first operating system.

“We believe PCC is the most advanced security architecture ever deployed for large-scale cloud AI computing,” Apple claims. With PCC, data is encrypted the moment it leaves your phone, and once the assigned task is completed, the user’s request and any shared material are deleted from the servers.

No user data is stored, and anything that passes through the cloud servers is inaccessible even to Apple. Gemini simply provides the intelligence that processes your text or voice commands; the actual work happens on Apple’s own hardened servers rather than on Google’s.
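For the technically curious, here is a conceptual sketch of that flow in Swift, using CryptoKit’s real HPKE API. PCC exposes no public client API, so the node key, info string and prompt below are purely illustrative of the documented encrypt-then-process-then-delete design, not actual PCC symbols.

```swift
import CryptoKit
import Foundation

// Conceptual sketch only: this key stands in for the published key of an
// attested PCC node. In reality the device verifies the node's attestation
// before trusting its key.
let nodeKey = P256.KeyAgreement.PrivateKey()

// The device encrypts the request directly to that node's key (HPKE), so the
// payload is opaque to everything in between, including Apple itself.
var sender = try HPKE.Sender(
    recipientKey: nodeKey.publicKey,
    ciphersuite: .P256_SHA256_AES_GCM_256,
    info: Data("pcc-demo-request".utf8)
)
let ciphertext = try sender.seal(Data("Summarize today's calendar".utf8))

// Only the node holding `nodeKey` can open `ciphertext`; per Apple, the
// request and any attached material are deleted once the task completes.
print("Encrypted request: \(ciphertext.count) bytes")
```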

What’s next?

If you’ve ever used Gemini and then asked Siri to do the same tasks (and watched it fail), you’ll know the difference. The recent partnership between Google and Apple closes this gap. More importantly, it gives Apple the opportunity to offer its own unique AI experiences.

Broadly speaking, the underlying Gemini AI framework will improve Siri and Apple Intelligence. How exactly? That’s unclear, because Apple won’t just copy and paste. You probably won’t see any obvious Gemini branding when Apple releases these next-generation AI features on your iPhone.

Apple is just borrowing the brain. The body and behavior will be your usual Apple affair.

But if you compare what Gemini can already do on Android phones – and what Siri can’t – you get a taste of the advances coming to your iPhone, iPad and Mac. You see, Apple isn’t just using Gemini’s underlying AI technology for Apple Intelligence and Siri. It goes much deeper.

Apple will use Google’s AI toolkit for the next generation of Apple Foundation Models. Think of these models as the secret sauce behind Apple Intelligence features like summaries, writing tools, image generation, and even cross-app actions.

Launched in 2024, these models can run either locally on a device (without an internet connection) or on Apple’s cloud servers. A year later, Apple introduced updated versions that were faster, handled media better, understood language more reliably, and supported more languages.

The big advantage is that the Foundation Models framework lets developers tap this on-device intelligence to improve their own apps. Imagine opening Spotify and, instead of doing the manual work, bringing up Siri and giving it a command like “Create a playlist of my most-listened-to songs this month.”

This is not yet possible on iPhones.
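What developers can already do, though, is talk to the on-device model directly. Here is a minimal Swift sketch using Apple’s actual Foundation Models framework (introduced at WWDC 2025); the function name and prompt are invented for illustration, and the model only responds on Apple Intelligence-capable hardware.

```swift
import FoundationModels

// A minimal sketch of Apple's on-device Foundation Models framework.
// Everything runs on the local chip: no prompt or response leaves the device.
func suggestPlaylistName() async throws -> String {
    // Bail out gracefully if this hardware can't run the on-device model.
    guard case .available = SystemLanguageModel.default.availability else {
        return "Apple Intelligence is not available on this device."
    }

    let session = LanguageModelSession(
        instructions: "You are a concise music assistant."
    )
    // Hypothetical prompt: a real app would feed in its own data as context.
    let response = try await session.respond(
        to: "Suggest a name for a playlist of my most-played songs this month."
    )
    return response.content
}
```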

Another weakness is Siri’s own intelligence. Ask anything beyond a basic query and the request is offloaded to ChatGPT. With Gemini on Android devices like the Google Pixel 10 Pro, answers arrive instantly and tasks can be performed seamlessly in other apps.

For example, I can instruct Gemini to “send a text message to Saba asking her about her class status for the day on WhatsApp,” and Gemini will comply by texting my sister in the messaging app. Google’s Workspace apps and services are already well integrated, allowing users to complete tasks in Gmail, Calendar, Drive, and other services using voice commands.

Finding a travel booking in my inbox, looking up the contents of a file, or simply checking my calendar – Gemini does it all. Siri doesn’t come close to this level of convenience. And this is exactly where Gemini comes to Apple’s rescue.

A whole new beginning

Apple clearly states that Gemini will power the “next generation of Apple Foundation Models.” That means Siri should understand natural-language commands far better than in its current robotic state and actually complete tasks on the iPhone. A lot of good can come from this brain transplant.

Universal search on the iPhone and Mac should become smarter and more conversational. Tasks in Apple apps like Notes, Music and Mail could be completed with voice or text commands without ever opening them. And most importantly, the same should extend to third-party apps.

With App Intents, the company already has the framework to get things done across third-party apps. It hasn’t quite caught on yet, probably because developers didn’t consider the available AI models intelligent enough. With Gemini backing on-device AI actions, more developers should feel confident building conversational, AI-powered actions into their apps – a sketch of what that looks like follows below.
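For a sense of the plumbing, here is a hedged sketch of an App Intent, the existing hook Siri and Apple Intelligence use to drive actions inside an app. The intent name and parameter are invented for illustration, not taken from any real app.

```swift
import AppIntents

// Illustrative App Intent: "CreatePlaylistIntent" is a made-up example of
// an action a music app could expose to Siri and Apple Intelligence.
struct CreatePlaylistIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Playlist"

    @Parameter(title: "Playlist Name")
    var name: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would create the playlist in its own library here.
        return .result(dialog: "Created the playlist \"\(name)\".")
    }
}
```

With a smarter model parsing the request, Siri could map something like “make me a workout playlist” onto exactly this kind of intent without the user ever opening the app.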

Imagine Siri working for you across apps without you ever opening them. You can already get a first taste of this on an iPhone today: open the ChatGPT app, enable app connectors, and use natural-language commands to perform tasks in dozens of apps, including Apple Music.

But there is a caveat: you have to link each app (via login) to ChatGPT, which lets OpenAI learn more about you. If the same task were handled at the operating-system level through a built-in framework, the privacy exposure would, in theory, be lower – and the whole workflow would be more seamless.

Apple can mimic Google’s Gemini strategy in many other ways. It just needs to surface Siri across its own apps, less intrusively and more thoughtfully than the scattershot rollouts we’ve seen with Copilot, Alexa+ and, yes, Gemini itself. Apple is good at this part, and I’m very excited to see how the company’s AI vision unfolds later this year.

There’s a lot Apple can learn simply from implementing Gemini on Android and on the web via Google services. And now that it has the Gemini brain at its fingertips, it can modify it and integrate it into its own apps and services – in typical Apple fashion.

The big question is where this leaves ChatGPT, which is currently at the heart of Apple Intelligence. We’ll know more in the coming months – most likely at Apple’s next developer conference in June. But for now, the future of Siri (and of AI on Apple hardware) looks brighter than ever for an average user like you and me.
