I think the difference stated above was that for messaging, calls, and other phone-related activities it’s not sent. I’m not sure you can say the same for Google.
On the Pixel 4, 4a, and 5, Assistant handles most queries on-device using the Neural processor. I wouldn't be surprised if they eventually bring that capability to speakers too, though that increases the price, of course. The HomePod Mini is equivalent to a Google Mini or Echo Dot but costs 3x the price. Maybe it's all the Apple Tax, but my guess is that they had to put in a pretty strong processor to do NLP on-device.
My interpretation of that was that your voice query is still sent to Apple and interpreted into whatever text/command, but the contents of the text message are not pulled from iCloud; instead, they are pulled from within your local network, directly from your iPhone. So all of your requests still get processed in the cloud on Apple servers, but Apple doesn’t wind up with copies of the content of your messages (unless, of course, you enable iCloud backups or iMessage cloud sync). This is definitely an improvement over how things work on the Google side, but it’s a far cry from nothing at all getting sent to Apple.
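In code terms, the split I'm describing would look something like this toy Python sketch (every name and payload here is made up for illustration, not a real Apple API):

    from dataclasses import dataclass

    @dataclass
    class Intent:
        action: str   # e.g. "read_latest_message"
        contact: str  # e.g. "Alice"

    def cloud_parse(transcript: str) -> Intent:
        """Stands in for Apple's servers: speech-to-text plus intent
        parsing. The server sees the query itself, but never the
        message body."""
        # Toy parser; in the scenario above the real NLP happens server-side.
        contact = transcript.rsplit("from ", 1)[-1].strip()
        return Intent(action="read_latest_message", contact=contact)

    # Pretend local message store, living only on the phone.
    LOCAL_MESSAGES = {"Alice": "Running late, see you at 7!"}

    def local_fulfill(intent: Intent) -> str:
        """Stands in for the HomePod pulling content from the iPhone
        over the local network, not from iCloud."""
        return LOCAL_MESSAGES.get(intent.contact, "(no messages)")

    intent = cloud_parse("read my last message from Alice")  # leaves the house
    print(local_fulfill(intent))                             # stays in the house

The point of the design is that the cloud only ever handles the query and the resulting intent; the content lookup happens entirely on your local network.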
The problem is that Google's voice recognition results in false positives. I've seen a few recordings on my Google privacy page with nothing but ambient noise from my apartment.
Well, to my knowledge, Apple doesn’t even have a similar page; that gives me more confidence in Google than in Apple, since I know for sure Apple can’t be foolproof with false positives either.