If you’re sending over TLS (and there’s little reason why you shouldn’t these days), then you can limit these caching issues to the user agent and the infrastructure you host.
Caching is also generally managed via HTTP headers, and you have control over those too.
Processing might be a bigger issue, but again, it’s only your own hosting infrastructure you need to be concerned about, and you have ownership over that.
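For instance, here’s a minimal sketch using Python’s stdlib (the endpoint and payload are made up) of a server opting out of both shared and private caching with Cache-Control: no-store:

```python
# Minimal server that tells user agents and any intermediary caches
# never to store its responses (hypothetical endpoint and payload).
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"result": "computed fresh on every request"}'
        self.send_response(200)
        self.send_header("Cache-Control", "no-store")  # opt out of caching entirely
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```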
I’d imagine using this hack would make debugging harder. Likewise for using any off-the-shelf frameworks that expect things to conform to a Swagger/OpenAPI definition.
Supplementing query strings with HTTP headers might be a more reliable interim hack. But there’s definitely not a perfect solution here.
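As a rough sketch of what that interim hack could look like (the X-Query-Payload header name and the URL are hypothetical, and this assumes the requests library), the structured query rides in a custom header so intermediaries see an ordinary body-less GET:

```python
import base64
import json
import requests  # assumed available; any HTTP client works the same way

query = {"filter": {"status": "active"}, "sort": ["-created_at"]}
# Base64-encode so the JSON survives header-value restrictions (no raw newlines etc.).
encoded = base64.b64encode(json.dumps(query).encode()).decode()

resp = requests.get(
    "https://api.example.com/search",
    params={"page": "1"},                  # simple values stay in the query string
    headers={"X-Query-Payload": encoded},  # the complex payload goes in a header
)
print(resp.status_code)
```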
And as much as it may disregard the RFC, that’s not a convincing argument for a customer who needs to interact with a specific server that requires it.
I don’t think it’s unreasonable to expect your sysadmins, devops, platform engineers, or whatever title you choose to give them, to set up these services correctly, given it’s their job to do so and there’s a plethora of security risks involved.
If you can’t trust them to do that little, then you’re fucked regardless of whether you decide to send payloads as GET bodies.
And there isn’t any good reason not to contract pen testers to check over everything afterwards.
Exactly, and the correct way to set up GET handling is to ignore request bodies for caching purposes, because they aren't expected to exist: "content received in a GET request has no generally defined semantics, cannot alter the meaning or target of the request, and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack" (RFC 9110)
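To make that failure mode concrete, here's a toy sketch (not any real cache implementation): a spec-compliant cache keys on method and URL and never inspects the request body, so two GETs that differ only in their bodies collide on the same entry.

```python
# Toy cache keyed on (method, URL) only -- per RFC 9110 a GET body carries no
# semantics, so a compliant cache has no business including it in the key.
cache: dict[tuple[str, str], str] = {}

def cached_get(url: str, body: str) -> str:
    key = ("GET", url)  # the request body is deliberately not part of the key
    if key not in cache:
        cache[key] = f"response computed for body={body!r}"
    return cache[key]

print(cached_get("/search", '{"q": "cats"}'))  # computed for the cats query
print(cached_get("/search", '{"q": "dogs"}'))  # cache hit: the cats response again
```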
> And there isn’t any good reason not to contract pen testers to check over everything afterwards.
I am pretty sure our SecOps and Infra Ops and code standards committee will check it and declare that GET bodies are a hard no.