This question reframes our task: now we only need to measure whether a player is doing something a fair human player couldn't have done.
Since we already have precise mouse movement and timing data, and we can calculate what a player would be able to see and/or know at a given moment, we can also calculate what they should have been able to do, how fast, and when.
And that's a problem common to many industries outside of gaming, and everyone else has had to come up with server-side solutions to it.
Reacting to a piece of information that you shouldn't have had access to at a given moment (whether it's insider trading, or fog visibility improved by an ESP or a custom monitor LUT) is an easy-to-measure tell.
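To make the "impossible reaction" tell concrete, here's a minimal sketch of the server-side check described above. Everything here is illustrative: the `Engagement` fields, the event values, and the 150 ms human reaction floor are assumptions, not any game's actual telemetry or threshold.

```python
# Sketch: flag reactions to information the player could not yet have seen.
# The 150 ms floor and the Engagement fields are illustrative assumptions.
from dataclasses import dataclass

HUMAN_REACTION_FLOOR_MS = 150  # assumed lower bound for a fair player

@dataclass
class Engagement:
    enemy_visible_at_ms: int   # server-computed moment the enemy entered view
    player_reacted_at_ms: int  # first aim movement toward that enemy

def suspicious_reactions(engagements):
    """Return (engagement, delta) pairs where the player reacted before,
    or impossibly soon after, the enemy became visible to them."""
    flagged = []
    for e in engagements:
        delta = e.player_reacted_at_ms - e.enemy_visible_at_ms
        # A negative delta means the player reacted before visibility:
        # the classic ESP tell. A tiny positive delta is also suspect.
        if delta < HUMAN_REACTION_FLOOR_MS:
            flagged.append((e, delta))
    return flagged

events = [
    Engagement(1000, 1040),  # 40 ms after visibility: impossibly fast
    Engagement(5000, 5230),  # 230 ms: plausible human reaction
    Engagement(9000, 8900),  # reacted before visibility: ESP tell
]
for e, delta in suspicious_reactions(events):
    print(f"flagged: reaction delta {delta} ms")
```

In practice a single fast reaction proves nothing; the signal would come from the distribution of deltas across many engagements, which is exactly why this is a statistical rather than a red-handed detection.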
Mouse movements aren't precise server-side; they're sampled at a fairly low rate. There are ML solutions available, but it's unclear whether they're effective; judging by how many cheaters there are, they don't seem to be, yet. Then there's the question of false positives: it seems like it can never be a perfect system. Detecting the humanness of players is unlikely to be precise enough, which is very different from detecting cheating software, where you essentially catch the cheater red-handed. That approach still has false positives, but those are typically reversed when the banned group reports them. How can you do that when the AI/software decides that you're not human?
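The false-positive worry is worth putting into numbers. A back-of-envelope sketch, where every figure (player count, cheater rate, detector accuracy) is an illustrative assumption rather than data from any real game:

```python
# Back-of-envelope arithmetic for the false-positive problem.
# All numbers below are illustrative assumptions, not measurements.
players = 10_000_000
cheater_rate = 0.01      # assume 1% of players cheat
specificity = 0.999      # detector wrongly flags 0.1% of fair players
sensitivity = 0.95       # detector catches 95% of cheaters

cheaters = round(players * cheater_rate)
fair = players - cheaters
true_positives = round(cheaters * sensitivity)
false_positives = round(fair * (1 - specificity))

print(f"cheaters banned:     {true_positives}")
print(f"fair players banned: {false_positives}")
```

Even a 99.9%-specific detector bans thousands of innocent players at this scale, and unlike a software-detection ban, a "you're not human" verdict gives the accused nothing concrete to appeal against.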
And to reiterate: it doesn't matter whether they exceed human ability or not; artificially increasing your ability is against the competitive spirit of these games. Might as well play against bots if everyone is cheating.