I think there's a line worth drawing here: pre-LLM voice interfaces required you to guess the exact command(s) the designer had in mind for the action you wanted to perform. With LLMs you can be ten feet deep in human-level vagueness and metaphor and your intent might still survive.
So the difference wrt discovery is that you only have to gesture at what you wanna do and, if a matching action exists, there's a chance it will be understood.
I'd wager we'll see a renaissance of voice assistants with LLMs, especially once good-enough models can run on device.