Google will make you subscribe to use the Pixel 9's new features

Somehow, against all odds, Google seems to be making its hardware worse as well.

Google rolled out its next generation of Pixel smartphones overnight. Like any new smartphone launch, there's a better processor, improved cameras, and a bunch of software tweaks to help the phone do more.

But the most interesting part of the launch announcement for me was Google's push for Gemini, its AI digital assistant. If you opt for the pricier Pixel 9 Pro or Pixel Fold models, Google's bundling in one year of its Google AI Premium subscription with the purchase.

After that, if you want to keep using many of the AI features Google is promoting for the Pixel 9 and Pixel Fold phones, you're going to need to pony up $32.99 per month.

That subscription cost does include 2TB of cloud storage you can use to back up your phone and your photos. But you could get the same amount of storage from Dropbox for under $19 per month.

So what do you get for that extra $14 per month? From what I've seen, a whole lot of disappointment.

Gemini AI features for the Pixel 9

First off, let's look at the announced AI features that don't require a subscription:

  • Pixel Screenshots: Lets you use AI to keep tabs on your life via screenshots; Google's AI can then draw on data from those shots to converse with you later.
  • AI photo editor: Use text prompts to have Google edit the image using AI. The extra arms it creates will be a feature, not a bug.
  • Add me: Take a photo of your mates, then swap positions with one of them and have them take a photo of you, and Google will stitch the shots together so you have a full group photo. Sounds kind of cool.
  • Pixel Studio: Create illustrations, just like on the Samsung Galaxy foldables. I was hugely underwhelmed by that, so I doubt this will be much better.
  • AI weather summaries: I've never found the weather complicated enough to need a summary, but okay.
  • Call notes: Google will record a phone conversation and then summarise it in private notes later. It notifies everyone on the call, and the recording apparently never leaves the device. I can definitely see this being useful, though I still have privacy concerns.

But what about Gemini itself?

Google's language around Gemini in the Pixel 9 family of phones is that it will help you "Supercharge your ideas" or "Supercharge your productivity". But what does that even mean?

Judging by the AI Premium subscription page, Gemini Advanced gives you access to the AI platform inside Google Docs.

This means you can use it to write documents for you, create slides for you and make up completely random shit for you without having to go to a separate platform.

I am, of course, being a little dramatic. But I want to flag that Gemini, like all other LLMs, has no way of differentiating fact from fiction and will happily hallucinate an answer just to give a response.

Even in Google's announcement of the features of its new phones, it has a footnote around its Gemini features that puts the onus back on the user to "Check responses for accuracy".

This is a service Google is expecting users to pay for.

Overlays and Gemini Live

As part of the Pixel 9 announcement, Google announced that Gemini has been more deeply integrated into the Android operating system. This means it can now sit over the top of any app you're using and engage with it. I personally can't wait to read how the porn industry takes advantage of that.

But deeper than that and more tightly connected to the premium subscription is what Google is calling Gemini Live. In Google's words, "Gemini Live offers a mobile conversational experience that lets you chat with Gemini about whatever’s on your mind."

Gemini Live is only going to be available to Gemini Advanced customers – people paying the subscription. From the early reports, the digital assistant sounds much more lifelike and conversational than earlier digital assistants, but it also won't shut up: it will keep talking until you interrupt it.

It's early days, so maybe over time this will become smarter and more tuned to how real conversations actually work. The idea of striving for an assistant like Iron Man's J.A.R.V.I.S. excites the nerd in me, but I'm all too aware of AI's current failings to be too excited.

Until I have even a baseline of confidence that an AI tool like Gemini can differentiate truth from make-believe, I'll put very little trust in what it tells me or the information it spits out. Almost every time I've used Gemini or ChatGPT, there has been some complete hallucination I've had to correct.

The real problem

Look, I make no secret that I find AI's performance underwhelming for the majority of its use cases, especially given that nothing it says can be trusted.

But my issue with Google's Pixel 9 announcement today isn't that it is charging for AI features. My fear is that Google is trying to turn your phone from a hardware device into a subscription service.

Currently, from what I can see, it's only the integration with your Google Docs and Gemini Live features that require the AI Premium subscription. Things like Circle to Search and editing out your ex from a photo can still be done without a sub.

But with Google pushing so hard to integrate Gemini into the Android operating system, how long until even those core AI functions of your phone become part of the subscription? We know Google has been making its search results worse to maximise profits, which is part of the reason it was found to be an illegal monopoly.

It's also a way for Google to extract revenue from other Android devices. Part of the launch event saw Google showcasing Gemini working on Samsung and Motorola phones... What if you need to pay Google a subscription to unlock core functions on devices from other manufacturers?

That's speculation, but I've been writing about tech for long enough to say that I don't think this is a question of "if", but "when".