I've found LLMs to be extremely useful for writing helper scripts (debugger enhancements, analyzing large disassembly dumps, that sort of thing) and as a next-level source-code search. In a large code base, you might get an error in component A involving a type from component B, and it's not immediately obvious how the two are connected. I've had great success giving an LLM the error message and access to the code, then asking it how B got to A and why that's an error. This is something I could certainly do myself, but there are times when it would take 100x longer.
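To give a flavor of the kind of throwaway helper script I mean, here's a hypothetical sketch (the function name, regex, and sample input are my own assumptions, not output from any real tool) that tallies call targets in objdump-style disassembly, the sort of thing an LLM can bang out in seconds and I can verify at a glance:

```python
import re
from collections import Counter

def count_call_targets(disasm: str) -> Counter:
    """Count how often each named function is called in
    objdump-style disassembly. Matches lines like:
        401136: e8 c5 ff ff ff  call  401100 <helper>
    """
    # 'call' (or 'callq'), a hex target address, then the symbol in <...>
    pattern = re.compile(r'\bcall\S*\s+[0-9a-f]+\s+<([^>+]+)')
    return Counter(pattern.findall(disasm))

sample = """\
  401136: e8 c5 ff ff ff  call  401100 <helper>
  40113b: e8 c0 ff ff ff  call  401100 <helper>
  401140: e8 0b 00 00 00  call  401150 <cleanup>
"""
print(count_call_targets(sample).most_common())
# → [('helper', 2), ('cleanup', 1)]
```

The point isn't that this code is clever; it's that correctness is obvious from a quick read plus one spot-check against real output.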
The key is that these are all things I can verify without much difficulty: read over the script, spot-check the analysis, look at the claimed connection between A and B and see if it's real. And I don't really care about style, quality, or maintainability.
You certainly can run this locally, but anything that will fit into reasonable local hardware won't be as good.
I don't need to trust that it works the way it claims to, because I'm verifying the output.
And as far as I'm concerned, "needs privacy" is always true. I don't care if the code will be open source. I don't care if it's analyzing existing code that's already open source. Other people have no business seeing what I'm doing unless I explicitly allow it. In any case, I work on a lot of proprietary code as well, and my employer would be most displeased if I were exposing it to others.