I wrote my master's thesis on designing gesture interfaces (of the Xbox Kinect type) back in 2012; here's my two cents:
First of all, gesture detection still isn't reliable enough for serious input. Missing a beat one time in a hundred is acceptable in gaming settings (and even then only in more "casual" environments like party games), but for serious input the input device needs to be practically 100% reliable. A keyboard press is. A mouse click is. Detecting whether your hand is an open palm or a fist? Not so much. Hence the peripherals we typically see for VR games, which help, of course. At the same time, they also somewhat defeat the purpose.
A second major issue is a lack of haptic feedback. There is no such thing as touch-typing in the air.
Why this is such a big problem needs a bit of explaining: a practical way to think of our ability to manipulate our environment (literally, the manus in manipulate referring to our hands) is to think of our arms as a pair of kinematic chains[0][1]. Essentially this is a chain of ever-more fine-grained "motors", going from coarse to fine precision: our shoulders, our elbows, our wrists, and finally the digits of our hands. The ingenuity of this chain is that it allows for extremely fine precision (the sub-millimeter precision of our fingertips) in a large spatial volume (the reach of our arms), and it does so by having each "link" in the chain perform a bit of "error-correction" for the lower resolution of the previous link.
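To make the coarse-to-fine idea concrete, here's a toy 2D forward-kinematics sketch. The link lengths are rough, illustrative guesses (not anatomical data), but they show the key asymmetry: the same 1-degree joint error moves the fingertip far more when it happens at the shoulder than at the wrist, which is exactly what the finer downstream links exist to correct.

```python
import math

# A planar kinematic chain: each link has a name and a length in meters.
# The lengths are illustrative approximations of a human arm.
links = [
    ("shoulder->elbow",  0.30),
    ("elbow->wrist",     0.25),
    ("wrist->fingertip", 0.10),
]

def fingertip(angles_rad):
    """Forward kinematics: accumulate joint angles along the chain and
    sum the link vectors to get the fingertip position."""
    x = y = theta = 0.0
    for (_, length), a in zip(links, angles_rad):
        theta += a  # each joint rotates relative to the previous link
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return (x, y)

base = fingertip([0.0, 0.0, 0.0])

# A 1-degree error at the shoulder swings the *entire* 0.65 m chain,
# displacing the fingertip by roughly 0.65 m * sin(1 deg) ~ 11 mm:
err_shoulder = math.dist(base, fingertip([math.radians(1.0), 0.0, 0.0]))

# The same 1-degree error at the wrist only swings the last 0.10 m link,
# displacing the fingertip by roughly 0.10 m * sin(1 deg) ~ 1.7 mm:
err_wrist = math.dist(base, fingertip([0.0, 0.0, math.radians(1.0)]))
```

The ratio between the two errors is just the ratio of the remaining chain lengths (0.65 m vs. 0.10 m), which is why a small correction at a downstream joint can cancel a comparatively large upstream wobble.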
What does this have to do with gesture interfaces? Well, in order for that kinematic chain to work, it needs a precise feedback system to perform said error-correction. We basically have three senses for this: our visual system (that is, seeing where we are putting our hands), our haptic sense (feeling which button we're pressing with our fingertips) and our "spatial sense" (proprioception). The problem with the latter sense is that it is relative: I sense the sub-millimeter location of my fingers relative to my wrist. I sense the millimeter-precision location of my wrist relative to my elbow. I sense the centimeter-precision location of my elbow relative to my shoulder. So if I'm waving my hands in the air without looking, the effective "precision" they have is about as crude as the crudest link in the chain: my shoulder. Of course this spatial sense can be improved with training, but you know what we typically call people who are really good at that? Professional-level dancers. The ceiling on mastering this skill is pretty high, and there's a reason it's basically a profession all by itself (plus a ton of other things obviously, don't want to sell dancers short here).
Gesture input will also never be as easy on the motor system as typing: not only does a keyboard provide haptic feedback through the keys, the precision of my fingers is relative to wrists resting on the desk, not to my shoulders.
Games somewhat get around this by rendering a visual avatar to give us feedback, but it's not perfect. On top of that, this feedback is limited by the resolution of the gesture detection, which is ludicrously low compared to the potential precision of our limbs. And if that weren't enough, it also needs really low latency to fool our brains and truly "feel" like an extension of our senses.
So basically, the fidelity requirements are just brutally high.
And finally, there is only a limited set of use cases. There are basically just two big ones: "touchless" interfaces (very niche) and pointing at and manipulating things in 3D space (less niche, with a clear advantage over keyboard or even mouse input, but again with those brutally high fidelity requirements). Because of that, as cool as gesture interfaces are, the industry-wide drive to solve all the aforementioned issues just isn't as strong as we'd like it to be.