I'm struggling to imagine a plausible scenario in which a model like this would be rendering judgment on employment suitability based on gender. Such a scenario would have to be deliberately constructed: models like these give you answers to the questions you ask. If you choose questions that elicit gender bias, that's a problem with the questions, not the model.
The whole scenario is contrived and not relevant to the functionality of these language models. It's like complaining that your Formula 1 car doesn't have a snowplow mount. Even if you add one, that's not how you should be using the tool.
The models are trained on human-generated text, so they model human biases: preferences for well-being, humor, racism, sexism, intelligence and ignorance alike. But the ability to generate biased output is also the ability to recognize bias. It's up to the prompt engineer to develop a methodology that selects against it.
You can use prompts to review the output: is this answer biased? Sexist? Racist? Hurtful? Shallow? Create a set of a hundred questions that methodically probe for bias and negative affect, and you could well arrive at output that is more rigorously fair, and better explained, than most humans could manage in the casual execution of whatever task you're automating.
Zero-shot inference is a starting point. In much the same way people shouldn't blurt out whatever first leaps to mind, meaningful output will require multiple passes; a rough sketch of such a review loop follows.
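For illustration only, here is a minimal sketch of that critique-and-revise loop. Everything in it is an assumption: `generate` is a hypothetical stand-in for whatever model call you actually use, and the review questions are placeholders for a much larger, curated set.

```python
from typing import Callable, List

# Illustrative critique questions; in practice you'd curate a far larger set.
REVIEW_QUESTIONS: List[str] = [
    "Does this answer treat any gender, race, or group unfavorably?",
    "Does it rely on stereotypes rather than stated facts?",
    "Is the reasoning explained, or merely asserted?",
]

def review_and_revise(
    generate: Callable[[str], str],  # hypothetical: wraps your model of choice
    task_prompt: str,
    max_passes: int = 3,
) -> str:
    """Generate an answer, then repeatedly critique and revise it."""
    answer = generate(task_prompt)
    for _ in range(max_passes):
        critiques = []
        for question in REVIEW_QUESTIONS:
            # Ask the same model to act as a reviewer of its own output.
            verdict = generate(
                "Answer yes or no, then explain briefly.\n"
                f"Question: {question}\n"
                f"Text under review:\n{answer}"
            )
            if verdict.strip().lower().startswith("yes"):
                critiques.append(f"- {question}\n  Finding: {verdict}")
        if not critiques:
            break  # nothing flagged; accept the current answer
        # Revise the answer to address every flagged finding.
        answer = generate(
            "Revise the text below to address every finding, "
            "without changing its factual content.\n"
            "Findings:\n" + "\n".join(critiques) + f"\n\nText:\n{answer}"
        )
    return answer
```

The point isn't this particular code; it's that the same model that can produce a biased answer can be turned on that answer as a reviewer, and the prompt engineer decides how strict that review is.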