I love Google Gemini, but I’ll take Apple Intelligence any day of the week

By Nirave Gondhia
Published January 20, 2025

If you’re looking for the best AI experience on a phone, chances are two different AI makers come to mind. For the iPhone 16, Apple Intelligence is the answer, while for Pixel 9 series — and the best Android phones — it’s Google’s Gemini. Of course, you can also download Gemini as a standalone app on the iPhone, but Apple Intelligence is the default AI option.

Both companies offer a range of nearly identical features, at least in what they promise to offer, but there are also nuanced differences. Google Gemini is mostly focused on using AI to help you create, edit, and generate content. In contrast, Apple Intelligence focuses more on personal use cases and integration across a range of apps.

I’ve been using both Gemini and Apple Intelligence for months, and each AI service has its pros and cons. Here’s what I’ve found.

Activating either AI platform is fairly intuitive, especially if you’ve used an Android phone or an iPhone before. Gemini replaces Google Assistant as the default assistant on your phone, although you can disable this, which you may want to do if you rely on Google Assistant for your smart home. Gemini is most commonly activated with a swipe from the bottom corner of the screen, although it’s also available via the “Hey Google” hotword.

Similarly, Apple Intelligence is baked into the revamped Siri, which can be activated with the “Hey Siri” hotword or by pressing and holding the side button. When you activate the new Siri, a rainbow-style lighting effect appears around the edge of the display, instead of Siri taking over the entire screen as it did in the previous generation.

Both are simple to activate and use, so this one’s a tie. Gemini arguably has the edge thanks to its multiple activation methods, but conversely, those methods can vary between different Android phones.

Both platforms focus on using AI for three specific purposes: generative features, such as creating and editing images or text, plus productivity features and a voice assistant. The former is the key focus for most AI makers, but I’ve often found that generative AI features can be somewhat of a gimmick. Yes, they’ll create great memes, but they likely won’t change your life.

Both platforms allow you to edit images you’ve already captured to remove unwanted objects. Google has had this built into Magic Editor in Google Photos for several years, while iOS 18 natively brings this feature to the iPhone for the first time in the redesigned Apple Photos app.

Take this image a friend took of me the morning after an intense night out. I asked both phones to remove the menu on the table and the results are fairly interesting.

First, it’s immediately obvious that Apple Intelligence isn’t as good as Gemini: the wood grain it filled in where the menu used to be runs at the same angle as the menu did, not the rest of the tabletop. That aside, Apple Intelligence does a decent job of filling in the grain and keeping continuity between the before and after versions of the photo.

What about Google Gemini? Here’s where Google’s longer history comes into effect: it’s better by a considerable amount. First, it generates four different images for you to choose from. Second, it offers more precision, letting you refine your selection before making an edit. However, it takes more taps to get to the Magic Editor, and the average person who hasn’t used Google Photos before will probably find Apple Photos more intuitive.

Google’s focus with Gemini is mostly on generative features, as well as making it a replacement for Google Assistant. It achieves the former extremely well, while it still needs some work as a true replacement for Google Assistant, especially for smart home controls.

Gemini comes with a range of features that I enjoy using, especially Circle to Search, which debuted last year on the Galaxy S24 series and makes it effortless to search for something on your display. Want to know where to buy shoes you just saw on Instagram? Circle to Search can look that up in seconds.

Meanwhile, Apple Intelligence takes a different approach. It features many of the same generative features, minus a true Circle to Search replacement, but it is also built to be your assistant. Whichever app you’re using, Apple Intelligence can edit, rewrite, or summarize text for you, which makes it particularly useful if you work across a variety of apps.

There is also one other key difference between them: the models that they use.

If you used Siri before the rollout of Apple Intelligence, you’ll know that it was not as good as Google Assistant; it wasn’t even close. With that in mind, it almost always felt inevitable that Apple would turn to another provider for the underlying models that power Apple Intelligence.

Google already pays Apple to be the default search engine on the iPhone, something to the tune of almost $20 billion per year, so it’s somewhat surprising that Apple turned to OpenAI’s ChatGPT to provide the underlying models for Apple Intelligence.

Apple’s ChatGPT integration goes much further than just the models: where the new Siri is unable to help, ChatGPT steps in as the default fallback. This means there are some duplicate features (you can generate images using either Image Playground or ChatGPT, and the same applies to some of the Writing Tools), but it also means you have a vast array of information and data to draw from. If you have a free or paid ChatGPT account, you can access even more features directly within Apple Intelligence.

Comparatively, Google opts for a self-contained approach. Gemini runs on Google’s own models: the Gemini Advanced subscription currently gives access to Gemini 1.5 Pro, the latest non-beta model. If you access Gemini via the web, you can also select the next-generation Gemini 2.0 model.

One of the key differences between these two models is that Gemini 1.5 has a larger context window, while ChatGPT tends to be better at generating human-like text. Both apps allow you to build custom chatbots, but ChatGPT also offers more advanced features, and Plus or Enterprise users can create unlimited chatbots.

One somewhat irritating thing about Apple Intelligence is that it doesn’t use the latest GPT-4 model, which is far more advanced and capable. It’s unclear whether Apple will roll this out at a later date or build it into the next version of Apple Intelligence, but it’s something I’d like to see. GPT-4 works from a much fresher set of data, which highlights how dated the knowledge base Apple Intelligence relies on can be.

For example, I asked both Apple Intelligence and Gemini who won the U.S. election. Apple Intelligence generated an answer about the 2020 election; after I clarified that I meant the 2024 election, it gave me Google Search results. In this case, it was actually better than Gemini, which won’t discuss elections at all, but this is an edge case, and Gemini is generally more accurate at recalling information than Apple Intelligence.

For everything that Gemini is great at, there’s one thing that Apple Intelligence gets right. In deciding how to make AI useful, Apple focused on its ability to improve your personal life, and Apple Intelligence is far better than Gemini at this.

I’ve already written that Notification Summaries are my favorite use of AI right now, but Apple Intelligence extends beyond that. Being able to call up Writing Tools to compose, refine, or edit text in any app is far better than Gemini, which acts like an overlay on top of whatever app you’re using. Similarly, you’ll soon be able to recall information from any app, which should make the new Siri a far better personal assistant.

I’ve been using both platforms for months and answering this question is harder than I first expected. On the one hand, Google Gemini is a far better generative AI solution and has access to a much broader knowledge base than Apple Intelligence. On the other hand, Apple Intelligence is a much better personal assistant and has better integration with Apple devices.

Then there’s the long-term potential for each of these platforms. Google Gemini is the default AI provider underpinning the AI suite on most Android devices, while Apple Intelligence is focused solely on Apple devices but benefits from improvements made by ChatGPT (at least once it’s running the latest models).

All things considered, I’ve found that while Gemini is far more advanced than Apple Intelligence, it’s the latter’s focus on personal features that ensures I use it more often. When I want to search for something or edit a photo, I turn to Gemini, but for daily use, I find Apple Intelligence — and in particular the Notification Summaries — to be far more beneficial for daily life. That said, Gemini is undoubtedly the better AI platform, at least for now.
