https://en.wikipedia.org/wiki/Graffiti_(Palm_OS)
https://patents.google.com/patent/US5596656
If you scribble, your hand shakes, or you accidentally brush the screen, the recognizer guesses wrong and anything can happen, and you have no way of figuring out what happened or how to undo it, because it's all invisible. The user has no intuitive understanding of how the black box of the gesture recognition system works, or how it might misinterpret their mistakes. And there is no self-revealing of what gestures are possible: no prompting and leading, no feedback, no incremental disclosure of more information, no browsing, no changing, no canceling, no error correcting.
But pie menus have an obvious, simple, crisp, direct geometric tracking model: the direction between the delimiting mouse click and release (or touch and release) events, regardless of the path between them. Users can easily understand it, and there is no mystery about why it picked one slice or another. That model also enables reselection (changing your mind or correcting a misinterpretation), browsing (pointing at successive items to highlight them and reveal more information), feedback (especially applying a real-time preview of the item in the game or editor as you browse the menu and adjust the distance parameter), error prevention and correction, and finer control of direction by moving out from the center for more precise leverage. Gestures can support none of these, and all of them are useful.
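To make the tracking model concrete, here is a minimal sketch in Python. All names are illustrative, not taken from any particular pie menu library; the point is that selection depends only on the press point and the current pointer (or release) position, never the path between them:

```python
import math

def pie_select(center, point, n_items, inactive_radius=8.0):
    """Return (item_index, distance). item_index is None while the
    pointer is still inside the inactive center region."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    distance = math.hypot(dx, dy)
    if distance < inactive_radius:
        # Still in the center: nothing selected, so canceling is trivial.
        return None, distance
    # Angle measured clockwise from "north" in screen coordinates
    # (y grows downward), so item 0 is centered straight up.
    angle = math.atan2(dx, -dy) % (2 * math.pi)
    slice_width = 2 * math.pi / n_items
    item = int(((angle + slice_width / 2) % (2 * math.pi)) // slice_width)
    # Distance is returned as a continuous secondary parameter, usable
    # for live previews as you browse, and moving farther out gives
    # more angular leverage for precise direction control.
    return item, distance
```

With four items, pointing straight up selects item 0, east selects item 1, and so on clockwise; because only the endpoints matter, wandering around and re-aiming before release changes the selection with no penalty, which is exactly what makes reselection and browsing work.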
Gesture Space:
https://donhopkins.medium.com/gesture-space-842e3cdc7102
Sibling Comment:
https://news.ycombinator.com/item?id=37907449
>>Excerpt About Gesture Space
>I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
>Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
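The "impossible to make a syntax error" property follows directly from the geometry: every possible direction maps to exactly one item, so gesture space is completely covered. That is easy to check with a sketch (the select function below is an illustrative stand-in, not code from any real pie menu implementation):

```python
import math

def select(angle_rad, n_items):
    """Map an angle to a slice index; item 0 is centered at angle 0."""
    slice_width = 2 * math.pi / n_items
    return int(((angle_rad + slice_width / 2) % (2 * math.pi)) // slice_width)

# Exhaustively sample directions in tenth-of-a-degree steps: every one
# lands in exactly one valid slice, and every slice is reachable, so
# there is no direction that produces a "syntax error".
n = 8
hits = set()
for step in range(3600):
    item = select(math.radians(step / 10), n)
    assert 0 <= item < n
    hits.add(item)
assert hits == set(range(n))
```

Contrast that with a gesture recognizer, where the recognized shapes must be kept far apart in gesture space, so most of the space is by design a rejection region.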