I don't think that's part of their decision making. Liquid Glass moved most things around for seemingly little more than novelty, and it's not the first time.
They have done this before: release something large early in anticipation of a major shift and iron out the issues before the shift happens. Liquid Glass started off a little janky, but they appear to have been smoothing out the initial issues with each update.
That doesn't change the fact that I can hardly read some of the user interface in Apple Music for example.
It's not that the idea is bad, but it's badly executed.
Nobody asked for a phone with fake buttons and a fragile wrap around screen.
Nobody asked for the UI to drastically change at random.
I wish smartphone companies would treat their products as completed devices with no innovation required. They are fully mature.
Instead, work on making them actually improved in ways that matter rather than trying to find “the next big thing.”
Be more like Toyota and less like Tesla.
I don’t think that cavalier attitude is universal at Apple and I don’t think the Siri PM wanted to break with their past respect for UX.
Liquid Glass was Apple’s logo change moment
The best is ChatGPT voice mode. It understands non-English words and accents amazingly well, and even though the model isn't the full-fledged one, I can have deep conversations with it for an hour without it missing a beat.
Does any voice assistant do this right now? Genuine question, I don't actually know. It sounds useful as long as it's not invasive.
Things that Sam Altman would prefer people not say lol
Just looked it up in my order history: I went from an "Echo Show 5 (1st Gen, 2019 release)" to an "Amazon Echo Show 8 (newest model)".
Whether I should have needed to upgrade is a separate question, but, yeah.
My preference, however, is for the voice-control UX I get with my Amazon Echo and "classic" Alexa, which I've been using for the past 10 years. The best way I can describe it is a "voice-driven command line," just like your OS's CLI shell: its interactions are predictable, even if that means I need to know which commands are valid in a given context. I need predictability and reliability when it comes to my home-automation integrations.
...but computer interaction with an LLM / transformer-driven "AI agent" is anything but predictable. When Amazon opted everyone into Alexa+, I agreed to give it a go and see if it really made things better or not - and it did not. I opted out of Alexa+ and went back to something actually reliable.
Seems like an agent given 20-30 tool calls like "read_sms", "matter_command", and "send_email" would be able to work out what to do for things like "set the house to 72° and text Laura that I did it."
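A minimal sketch of that idea (the tool names and payloads here are hypothetical stand-ins mirroring the examples above, not any vendor's actual API): the model proposes a plan as a list of tool calls, and a small dispatcher executes only whitelisted tools.

```python
# Hypothetical tool registry: each entry is a callable the agent may invoke.
# "matter_command" and "send_sms" are illustrative stand-ins, not a real API.

def matter_command(device: str, action: str, value=None):
    """Stand-in for a Matter/home-automation call."""
    suffix = f" -> {value}" if value is not None else ""
    return f"{device}: {action}{suffix}"

def send_sms(to: str, body: str):
    """Stand-in for an SMS-sending tool."""
    return f"SMS to {to}: {body}"

TOOLS = {"matter_command": matter_command, "send_sms": send_sms}

def run_plan(plan):
    """Execute a list of (tool_name, kwargs) steps the model proposed."""
    results = []
    for tool_name, kwargs in plan:
        if tool_name not in TOOLS:  # refuse anything not whitelisted
            raise ValueError(f"unknown tool: {tool_name}")
        results.append(TOOLS[tool_name](**kwargs))
    return results

# "Set the house to 72° and text Laura that I did it" might decompose to:
plan = [
    ("matter_command", {"device": "thermostat", "action": "set_temp", "value": 72}),
    ("send_sms", {"to": "Laura", "body": "Set the house to 72."}),
]
print(run_plan(plan))
```

The hard part, of course, isn't the dispatcher; it's trusting the model to produce a sane plan in the first place.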
Incidentally, a major headline in the news this past week was about a coding agent that wiped its company's entire system, including backups; the company's staffers were confident that was utterly impossible (the agent didn't have any access to that system), and yet somehow it did[1]. The TL;DR: the agent randomly came across an unprotected God-tier admin API key/token saved to a personal text file in a filesystem it had read access to. If an agent can do that with only read-only access to a company's routine, everyday storage, then there's no way I'm giving it the ability to deactivate my house's fire alarms and security cameras via Google Home/Matter/Thread/HomeKit/X10/OhFfsNotAnotherCloudBasedAutomationScheme.
[1] https://www.theregister.com/2026/04/27/cursoropus_agent_snuf...
I'm on the iPhone 16 now.
You could have tried Alexa+ at the start, when it was shitty compared to plain Alexa, and maybe it's better now. But equally, none of the people who comment that it is "amazing" in its current iteration qualify their statements with experience comparing and contrasting the old version against the new one. That makes them seem either unqualified to say how much "better" it is than the old version or, at worst, shills (paid or not). The most charitable take is that they're comparing (e.g.) day-one Alexa+ against the current Alexa+, without any comparison to the original Alexa.
... which is to say that it really feels like there are no clear conclusions that could be drawn from all of this.
Also, one of my first interactions with this Alexa+ thing was "how long is it until 8:45am" (one of the few commands I use it for, to work out how much sleep I'm getting), and it proceeded to ask me what the current time was… I immediately turned it off after that.
I've had enough bad experiences with products that never got better, or just got worse (Exhibit A: Windows 11). Like most primates, I am capable of learning, and I've learned that once a consumer product/service goes bad there's little hope of a turn-around. I accept that you're telling me that it's gotten better, but of the people I know IRL who also use an Echo, none of them have told me that Alexa+ is worth trying, let alone committing to.
Yes, it's on me for not giving Alexa+ a second chance, but I'm not willing to give Alexa+ a second chance because, as a technology product/service customer, I just don't feel respected by the industry I work for (...lol); if Amazon, Microsoft, Google, et al won't respect me, why should I venture outside my comfort-zone for... what benefit, exactly?
The new Alexa powered by an LLM is objectively better than the previous Alexa in a few ways. This much was apparent from day one, and it has only gotten smoother.
1. It can reliably execute direct or vague-ish commands like "play movie X in app Y" or "play show X," and it can infer that movie X is only available in app Z, so it uses that.
2. Speech recognition seems better (fewer instances of 5x round trips).
3. Conversational with multi-turn: my wife can have a back-and-forth clarifying a topic.
4. It seems to understand intent a bit better (user asked A, so they are probably thinking about B).
Those may seem small, but they were a tremendous source of annoyance for her (and thus for me): "Alexa is not listening, do something!"
I ruined multiple dinners with timers that didn't work (with a time/labor cost).
I had to get out of bed in the freezing cold to turn the lights out. It's easy to hit the lights when I go to bed, but annoying to have the tool fail and have to get back out.
Music stuff didn't work well because I used YouTube Music, not Spotify.
Those were my three use cases for Google voice, and it failed all of them often enough that I just stopped using it altogether. Who cares if it works today if in another month they change something and break it again? They've shown it's not a tool to rely on for tool things; it's a "gee wow" thing. I don't need to be impressed. I need food that isn't burnt.
I do like Gemini better than Assistant, even though it's not quite there yet. But that's just a matter of time, because they actually designed it from the ground up to be a drop-in replacement for Assistant.
But for one-on-one, it is a really outstanding experience. Especially since they tamped down the over-the-top humanisms.
The first problem is that it's just slow. If I want it to turn off some light, it takes a long time before responding.
But yeah, the failure to do basic tasks. I have a routine I used to have it run (it controls several devices at once). Now:
10-20% of the time it runs it.
60% of the time it says it's running it but it doesn't do anything.
20-30% of the time it says it can't do it unless I opt in to invasive permissions. And when I did opt in, it still failed about a third of the time, so I opted out again.
Man, I hate touch screens. And I hate Android Auto. My previous car had an aftermarket Bluetooth system (radio, etc). It was way, way better than Android Auto or any entertainment system I've seen in any car.
I have never had trouble setting timers with either.
It is much better today than 3 months ago.
But timers and smart home actions are definitely less reliable and sometimes take absurdly long to respond (like 20-30 seconds p99).
To give you an example, I was having coffee the other morning while unloading the dishwasher and asked the speaker if today was a good day to apply weed and feed on my lawn. This was not possible with the old assistant and was useful to me.
And now if I want to use Gemini on my phone I have to replace Assistant. Nah, I'll keep Assistant thanks, and just have a shortcut to load the Gemini in the browser.
Except the browser experience is so fucking buggy; constant reloads needed...
WhisprFlow produces much better speech-to-text for long voice-dictated text messages (dictation / transcription) than Apple's speech-to-text does. Whisper models in general seem to do a lot better than most models built into the OS or app. Which is interesting, because there's nothing stopping those vendors from just using Whisper models.
I love MacWhisper personally. Also, Gumroad is a fantastic app-distribution platform that aligns with my personal values.
https://goodsnooze.gumroad.com/l/macwhisper
As for the "decision tree" side... there's not much that can be done about that now. Agents still go too far off the rails to be productionized out to the billions of smartphones in the world. I'm working on voice-controlled, agentic-with-rails AI features for my HomeAssistant setup, because Alexa and Google Home suck. But that's a hobby project, and rogue AI actions only affect me, not billions of customers.
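The "with-rails" part can be as simple as a hand-written policy layer that sits between the model's proposed action and execution, hard-denying safety-critical domains no matter what the model asks for. This is just a sketch of that pattern (the domain and service names are illustrative, not how HomeAssistant actually structures its API):

```python
# Guardrail layer: the LLM proposes (domain, service) actions, but a
# deterministic, human-written policy gets the final say before execution.

ALLOWED_DOMAINS = {"light", "media_player", "climate"}
# Safety-critical domains the agent may NEVER touch, even if asked nicely:
HARD_DENY = {"alarm_control_panel", "camera", "lock"}

def check_action(domain: str, service: str) -> bool:
    """Return True only if the agent is permitted to call domain.service."""
    if domain in HARD_DENY:
        return False          # fire alarms, cameras, locks: always refused
    if domain not in ALLOWED_DOMAINS:
        return False          # default-deny anything not explicitly allowed
    # Even within allowed domains, block destructive-sounding services.
    return not service.startswith("delete")

print(check_action("light", "turn_off"))             # permitted
print(check_action("alarm_control_panel", "disarm")) # refused, rogue or not
```

The key design choice is default-deny: the rails are ordinary code the owner wrote, so a confused or compromised model can only ever select from the pre-approved menu.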
Still love not having google's paws all over my data, though, so not going back.
Any of the Whisper-based apps on the App Store.
(It misunderstands my wife from California all the time, though.)
So if you buy Apple products based on that value proposition, it's a big problem for Apple if they can't seem to keep their brand promise in this area.