I put Google Gemini on my iPhone. Here’s why I’ll never go back to Siri

News Room

The AI frenzy has gripped every smartphone maker in 2025. Unfortunately, not all of it has been as revolutionary as on-stage presentations would have you believe. A few brands, however, have done a fantastic job of executing practical AI features.

Google is one of those names, and it continues to deliver even on iPhones — at the cost of making Siri look like an outdated relic. The latest build of Google’s Gemini app for iPhones puts the AI chatbot everywhere on Apple’s smartphones, from the lock screen to the share sheet.

That’s a deep integration, even though Apple would likely never let me replace Siri with Google’s alternative on my iPhone 16 Pro. At the moment, Siri is in such rough shape that its promised AI overhaul might not arrive until 2027. In the meantime, it has to rely on an awkward offload system with ChatGPT when it can’t handle a complex query.

Enter Google Gemini’s latest avatar on the iPhone. Months after making its way to iPhones and fully replacing Google Assistant, Gemini has finally attained a form where I won’t need to look elsewhere. For example, I can now directly upload files or links from the Lock Screen, or share material from within any app straight into Gemini.

Going far beyond Siri

For me, or anyone with a smartphone in their hands, convenience matters most when it comes to getting tasks done with an AI tool. With the latest app update, I can access Gemini from the Lock Screen in a total of six formats through dedicated widgets.

I can trigger the text chat, launch the Gemini audio mode for mundane tasks such as setting reminders, or pull up the fantastic Gemini Live mode for a free-flowing conversation. These widgets work better than Siri for two reasons.

First, if I ask Siri something like “tell me about the light speed paradox,” the assistant won’t be able to answer, because that’s beyond its capabilities. For such queries, it will ask me whether ChatGPT can take over. That’s needless friction.

ChatGPT controls on iPhone.

Moreover, Siri will only ask me that question if I’ve enabled the ChatGPT integration from within the Settings app. That’s another layer of friction. On top of that, I have to flip two separate toggles in the Apple Intelligence dashboard to enable this Siri-to-ChatGPT answer flow.

Assuming I have the ChatGPT tools ready to go, there’s another fundamental hassle. The answer I get is an extremely condensed version fitted into a small dropdown box.

I can’t see any of the sources the information was pulled from, either. If I want a more detailed answer, I have to launch the ChatGPT app.

ChatGPT providing a response after Siri.

With Gemini, the flow is much easier. I can tap either the chat or the voice mode widget on the Lock Screen and narrate my question. That takes me straight into the Gemini app, where the answer is already being compiled and read out to me (if I picked voice instead of text chat).

In the Gemini mobile app, I can find a far more detailed breakdown of the answer, with clean headings and easy-to-comprehend bullet points. That alone makes for a fulfilling chatbot experience, and the Gemini app offers a bunch of useful controls as well.

Tools available for Gemini chat.

With a single tap, I can find related queries that other users have searched on Google, export the long answer as a Doc file, create a shareable link, drop it into a Gmail draft, or modify the answer in five separate ways.

I like my answers concise, so I often pick that option. As a journalist, research and fact-checking are the most important parts of my job, and the Siri-ChatGPT handoff doesn’t let me verify where answers come from.

Gemini response with web link on iPhone.

But if I launch Gemini from the Lock Screen, a single tap is all I need to check the source of the information offered by the chatbot and land directly on a Google Search results page.

File analysis is a cakewalk

Gemini’s file analysis capabilities are a lifesaver, and if you rely on Google’s Files app, they kick into action automatically. With Siri, I kept running into software walls when all I wanted from an AI assistant was to keep things as simple as possible.

File analysis feature with Gemini.

For example, let’s say I want the AI to break down the core findings of a 50-page research paper. Siri can’t do that. And even with its ChatGPT integration enabled, when you launch the assistant, there is no option to upload a file of any kind.

All you get is a text box or a mic for voice chat. To get the task done, I have to install the ChatGPT app and launch it every time I need its multimodal capabilities to lend a hand.

With Gemini, I simply tap a button on the Lock Screen and land straight in the file picker window. Once the file uploads, I can type my query and have the AI explain or find things for me. If I need to make sense of an image or screenshot, there’s a dedicated “Gemini Image” Lock Screen widget available for me.

Gemini image scan and response.

Finally, there’s a dedicated camera view for Gemini as well. Just like Visual Intelligence, all I need to do is tap the Lock Screen widget to land in the camera preview. A shutter click later, Gemini is ready to process and answer queries based on what the camera captured.

On the iPhone, Visual Intelligence can be launched with a long-press of the new Camera Control button. On older iPhones, I need to customize the Action Button to launch it, or tap on the Visual Intelligence icon after pulling down the Control Center.

Once again, Gemini offers more versatility than Siri or Apple Intelligence. In the Control Center, there are a total of six Gemini shortcuts, the same pool you get for the Lock Screen widgets. I love this flexibility, because if I can’t squeeze all the Gemini widgets on the Lock Screen, I can simply put the rest in the Control Center.

Quick access is not a problem

I recently had a discussion with Digital Trends veteran and Apple expert Jesse Hollington about the possibility of Gemini replacing Siri. He called it a pipe dream, because Apple would never sideline one of its own products, even if it’s worse. In a nutshell, Gemini will never become the default AI assistant on an iPhone.

Gemini quick access options in Control Center.

Thankfully, Apple’s own design choices let you trigger Gemini with the same kind of ease as Siri. A long press on the power button brings up the Siri interface, but thanks to Apple’s recent hardware updates, you can skip the power button route and customize the Action Button to launch Gemini.

It’s a fairly straightforward process. Simply open the Action Button page in the Settings app, swipe left until you find the Shortcuts option, and select Gemini from the app list.

What I like most is that I can choose to launch Gemini in text or voice mode, or simply skip to the uber-chatty Gemini Live mode for an eerily human-like conversation in the AI voice of my choice.

Gemini quick launch controls for Action Button.

With Siri, a long press on the power button summons the voice interaction mode. To open Siri in text mode, you need to long-press the navigation bar at the bottom of the screen. It’s funny that you get a more cohesive and consistent experience with Gemini on an iPhone than with Siri itself.

Of course, talking with a chatbot is more convenient and immersive than typing out a long query. For such conversations, Gemini is noticeably better than Siri, especially in the Gemini Live mode.

Moreover, even for non-paying users, Gemini is now able to save details from previous conversations and recall them contextually in the future. This capability makes chatbot conversations a lot more natural. Think of it like an artificial memory for an artificial brain.

Invoking Siri on iPhone.

Siri doesn’t offer that convenience, and it might take a while before it does. As per Bloomberg, the status quo won’t change until 2026, and the generative AI overhaul for Siri might not arrive until 2027.

The Gemini gems

The biggest advantage of Siri is that it is integrated into iOS at the OS level. Yet nothing too ambitious or fruitful has come out of that deep integration. Siri can’t interact with third-party apps yet, and it is still heavily reliant on ChatGPT for everything one expects from a digital assistant in the generative AI era.

Google Gemini getting work done across two services with a single prompt.

On the other hand, we have Gemini, which can already interact with Google’s suite of apps, and even third-party options such as Spotify. At the heart of this system are extensions, and the list is only going to expand in the near future.

For anyone who works across Google tools such as Docs and Maps, Gemini offers a seamless experience. The ChatGPT app is not exactly a slouch, as it offers its own set of benefits, but none that go above and beyond what Gemini can accomplish.

With a ChatGPT Plus subscription that costs $20 per month, you get higher file upload limits, advanced voice mode, access to the latest AI models, Deep Research, and the ability to create custom GPTs. Gemini offers all those facilities, and then some, for the same price.

Gemini Lock Screen widget for iPhone.

If you get a Gemini Advanced subscription via the Google One AI Premium plan, you get access to a handful of extra perks such as 2TB of cloud storage, the latest Gemini 2.0 series models, NotebookLM Plus access, and rewarding integrations across all Google products.

Deep Research, in particular, is my favorite. With a single click, I can export the entire research material into a Doc file and turn it into an immersive podcast using NotebookLM, without ever leaving the Google ecosystem.

The sum of all these arguments is that Gemini offers not only better value than Siri, both quantitative and qualitative, but also easier access.

For a tool developed with the sole intent of making our lives easier, it’s a fundamental win. It’s just a shame that a Google tool is doing it better on iPhones than Apple’s once-pioneering AI assistant.





