Personally, I think there's a piece missing in the analogy. I understand that you can put some kind of human-verified mediator between the LLM and the tool it's calling to make sure the parameters are sane, but you're modelling the LLM as a UI element that generates the request, when IMO it makes more sense to model the LLM as the user who is choosing how to interact with the UI elements that generate the request.
In the context of web-request -> validator -> db query, the validator's only purpose is to ensure that the request is safe; it doesn't care what the user chose to do, as long as it's a reasonable action in the context of the app.
In the context of user -> LLM -> validator -> tool, the validator still has to ensure that the request is safe, but the user's intention can be changed at the LLM stage. If the user wanted to update a record but the LLM decides to destroy it, the validator now needs some way to understand the user's initial intention to know whether or not the request is sane.
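To make the distinction concrete, here's a minimal sketch of what an intent-aware validator might look like. All of the names here (`ToolCall`, `validate`, the intent/action mapping) are hypothetical, invented for illustration:

```python
# Hypothetical sketch: a validator that checks the LLM's proposed tool call
# against the user's stated intent, not just against schema sanity.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    action: str  # e.g. "read", "update", "delete"
    record_id: int
    params: dict = field(default_factory=dict)


# Which tool actions are consistent with each user intent. A plain
# schema validator has no equivalent of this table.
ALLOWED_ACTIONS = {
    "read":   {"read"},
    "update": {"read", "update"},
    "delete": {"read", "delete"},
}


def validate(user_intent: str, call: ToolCall) -> bool:
    """Reject the call unless it matches the action the user actually asked for."""
    return call.action in ALLOWED_ACTIONS.get(user_intent, set())


# The LLM turned an "update" request into a delete: schema-valid, intent-invalid.
assert validate("update", ToolCall("delete", 42)) is False
assert validate("update", ToolCall("update", 42, {"name": "x"})) is True
```

The hard part, of course, is that real user intent arrives as natural language, not as a clean `"update"` token, so capturing it reliably is exactly the problem the LLM was supposed to solve in the first place.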