Google showed me its AI future for Google Home, and it blew me away

By Joe Maring
Published August 6, 2024

Google’s making a few announcements today ahead of its big Pixel event next Tuesday. In addition to revealing the new Nest Learning Thermostat and the Google TV Streamer, Google is also providing a sneak peek at some big Google Home and Google Assistant changes. And they’re all really impressive.

We’ll start with the Google Assistant. Google has revealed a new voice for the Assistant, and it sounds significantly more natural than the current one. It’s difficult to describe in writing, but the gist is that the Assistant’s voice now sounds more like a human and less like a robot. The Assistant takes natural pauses while speaking and has inflections in its voice.

Additionally, the Google Assistant is getting better and more natural with follow-up questions. In a demo video I saw, someone asks the Google Assistant if Pluto is still a planet. The Assistant explains that it is not and that the International Astronomical Union (the IAU) decided to reclassify Pluto as a dwarf planet. The person then simply asks, “Could they change their minds again?” The Assistant knows that “they” refers to the IAU and that the person is asking whether the organization could change its mind about Pluto being a dwarf planet.

As cool as this all is, the really exciting stuff has to do with Google Home. Google showcased its plans for bringing Gemini into the Google Home experience, and even as someone who hasn’t been particularly impressed with existing Gemini features, I found the stuff it’s adding to Google Home pretty jaw-dropping.

My favorite Gemini feature is how you can use it to create automations. Automations are an important part of any smart home, but they’re also not particularly easy to set up. Having the lights automatically turn on when you get home is great, but setting that up yourself can be easier said than done.

With Gemini, you’ll be able to create automations by simply saying or writing what you want your automation to do. In an example, Google shows someone using Gemini in the Google Home app and saying, “Help the kids remember to put their bikes in the garage when they come home from school.” Using that, Gemini creates an automation that will turn on the garage lights and broadcast a message with a reminder to put away bikes whenever someone arrives home between 3:30 p.m. and 5 p.m. You can then tap a button to see the full automation process and customize it if you want to. Otherwise, you tap another button to save it, and that’s all there is to it.
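To make the example a bit more concrete: an automation like this boils down to a trigger, an optional condition, and a list of actions, which is what Gemini has to assemble from your plain-language request. The sketch below models that structure as simple Python data; the field names and device names are purely illustrative assumptions, not Google’s actual automation schema or API.

```python
# Illustrative sketch only: a plausible trigger/condition/action structure for
# the bike-reminder automation described above. Field names and device names
# are assumptions for illustration, not Google's real Google Home schema.
import json

bike_reminder_automation = {
    "name": "Bike reminder",
    "starter": {
        "type": "presence.arrival",          # fires when someone arrives home
    },
    "condition": {
        "type": "time.between",              # limit to after-school hours
        "after": "15:30",
        "before": "17:00",
    },
    "actions": [
        {"type": "light.turn_on", "target": "Garage lights"},
        {
            "type": "speaker.broadcast",
            "message": "Remember to put your bikes in the garage!",
        },
    ],
}

if __name__ == "__main__":
    # Print the structure so it can be reviewed and tweaked before saving,
    # much like tapping the button to see the full automation in the Home app.
    print(json.dumps(bike_reminder_automation, indent=2))
```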

Gemini is also going to make searching your camera activity a lot easier. Using the same bike example, you could go to the Activity page in Google Home and search, “Did the kids leave their bikes in the driveway?” You then get a clear answer at the top, followed by the video clips Gemini pulled its answer from. It sounds simple when explained this way, but the technical process happening behind the scenes to make it look so seamless is nothing short of amazing.

This is all possible because Gemini will greatly improve the quality and detail of what your smart-home cameras can describe. For example, as it stands today, a Nest camera pointed at your backyard can alert you when it sees a bird on your bird feeder, but it only knows to classify that bird as an animal. With Gemini, however, it could provide a much more in-depth description of the scene, such as:

“A blue jay at a seed-filled feeder. Its blue and white feathers vibrant against a dull, wintry backdrop. There are no people or vehicles, just tranquil natural scenery and the colorful bird.”

While it remains to be seen how all of this works in the real world compared to pre-rendered demos in a press briefing, everything Google is showing here looks incredible. It often feels like Google announces Gemini features without a clear explanation of how they’re supposed to make your life easier, but that’s not the case here. Using Gemini to create automations is ingenious and something I can’t wait to try. The upgraded Google Assistant sounds fantastic. The new AI tools for Nest cameras are like something straight out of the future.

Now, the important question: When can you use all of these features for yourself? Google says it’ll begin rolling everything out to Nest Aware subscribers in a Public Preview phase later this year. The exact timing is unclear, but I certainly hope it’s sooner rather than later. Google is onto something magical here, and I can’t wait to get my hands on all of it.
