Alexa+ wants to run your life, but Amazon must prove its AI can be trusted

By Nadeem Sarwar
Published September 23, 2025

Imagine talking to your Echo speaker and having it respond with a proper memory, drawing on your calendar and inbox. It helps plan and buy presents for the next party, searches through connected Ring cameras for delivered packages, and adjusts the thermostat when you mention how sweltering it is.

All that, and more, without even saying the “Alexa” hotword repeatedly. That’s just how Amazon evolved the old Alexa into the new Alexa+ assistant and made it work somewhat like ChatGPT. Amazon effectively gave it a “generative AI” brain transplant.

I want all that convenience in my life. I’m just a bit nervous about the cost that comes with it.

But how do we make an AI assistant better? You feed it more data. Text, images, audio, video, and everything else that it can ingest. Collecting that data is a hassle because there is only so much material out there you can get your hands on. Legally, that is.

Otherwise, as Amazon-backed Anthropic recently found out, it could cost you $1.5 billion in a settlement with book authors — and that was just one lawsuit. A whole bunch of copyright lawsuits are still entangling giant AI corporations. So, what’s the next best step? Turn your users into willing contributors.

No company commands a stream of original data quite like Amazon. The source behind it all? The millions of Echo devices in homes across the world. Google and Meta are perhaps the other key rivals, but nothing they offer is as intimate as the speaker sitting in your bedroom, or the Fire TV in your living room.

Amazon needs that data, quite desperately, if it aims to make the next-gen Alexa+ assistant truly useful. That’s where things get a little murky for a lot of reasons. In March this year, Amazon announced a controversial policy change that disables local processing of your voice commands. 

In a nutshell, all your voice recordings must be sent to Amazon’s cloud servers. Starting March 28, whatever you speak to Echo speakers and smart displays will be transmitted to Amazon. Or, risk losing access to the Voice ID feature. Why? Here’s Amazon’s email sent to users a few months ago: 

“As we continue to expand Alexa’s capabilities with generative AI features that rely on the processing power of Amazon’s secure cloud, we have decided to no longer support this feature.”

I’ll get to the sordid history in a bit. But let’s go through the “why” first, from a technical perspective. Generative AI chatbots — think of Gemini, ChatGPT, and Alexa+ — are notoriously power hungry. They need a fairly powerful chip to handle tasks locally on the device. 

Only a select few smartphones out there can handle local AI processing, and Microsoft had to create an entire class called Copilot+ PCs that are capable of on-device AI chores. It’s hard to imagine Amazon putting such powerful hardware inside a $50 speaker, or a cheap smart display. Even Apple’s smartwatches, which cost north of $350, can’t do local AI processing. 

To offer Alexa+ at scale, there is simply no way to go fully local.

And that leaves Amazon, and everyone else, to offer generative AI features by sending all your commands and queries to a powerful server over an internet connection. This is where the conundrum begins. Will Amazon use your text and voice inputs only for processing, or will it also keep the data for AI training?

The latter scenario is the bigger concern, especially given Amazon’s history. Anthropic, which has received roughly $8 billion in backing from Amazon, recently announced that user interactions with the Claude chatbot will be used for AI training. More importantly, that user data will be retained for five years. I am not sure Amazon will do things any differently.

Now apply that to Amazon, which holds far more personal data: your shopping history, video watching history, music listening habits, and voice chats, among other things. History suggests that Amazon is not the most trustworthy partner, and certainly not when it so desperately needs your data for AI training.

Amazon paid $25 million in 2023 to settle FTC charges over retaining children’s interactions with Alexa. In 2019, Amazon told a senator — years after Echo hardware launched — that it retains “voice recordings and transcripts until the customer chooses to delete them.”

The same year, Amazon admitted that its employees listen to and review a “small sample” of customers’ voice interactions with Alexa. Recordings from Echo devices have also been used in criminal trials. In 2023, the FTC sued Amazon over privacy violations for “allowing thousands of employees and contractors to watch video recordings of customers’ private spaces.”

Another technical challenge is the memory system for AI bots. For a more personalized experience, Alexa+ must retain a memory of previous conversations with users and, more importantly, personal information such as your calendar, mail, and shopping cart. It’s one of the headline promises for Amazon’s next-gen assistant, and a glaring privacy red flag as well.

Interestingly, Amazon seems to be struggling with the user-side implementation of Alexa+. TechGig’s tests note that the memory feature misfires even on the most basic tasks, such as saving a frequent flyer number. Amazon’s flashy demos also showed how Alexa+ can remember spots in your house, or other places it sees through a camera.

That’s a two-fold problem. First, your video feed is sent to the cloud for processing. Second, you need expensive Amazon hardware with a built-in camera, or a connected device with a camera sensor. And let’s not forget the $20 monthly fee to access Alexa+, unless you are a Prime subscriber.

Amazon’s situation with Alexa+ is a tad tricky. The company needs to make it accessible, especially on low-end hardware. Plenty of people buy the basic Echo speakers, which can cost as little as $50 and will never pack the hardware needed for on-device Alexa+ processing, even if Amazon changes its mind and enables local processing again down the road.

At the same time, Amazon has to keep affordable devices in the compatibility pool, which means Alexa+ will remain locked to cloud processing for the foreseeable future. Given that status quo, Amazon must come forward with more transparency about how Alexa+ will handle privacy.

The question is pertinent because Alexa+ can get work done across more platforms, including a whole bunch of third-party services such as Uber, Ticketmaster, and Grubhub. You can also share documents, emails, and photos for Alexa+ to remember and take appropriate action.

In a nutshell, the more you want to get done with Alexa+, the more you have to share with it. The only saving grace is that Alexa+ is not mandatory. You can stay on the old Alexa bandwagon until Amazon pulls the plug on it. Or, after seeing what Alexa+ can do, you give in and take the leap of faith.

Either way, the ball is in Amazon’s court, and the company must prove that it can be the rare agent of good in the AI race. I am deeply skeptical, but still clinging to slivers of hope.
