My point is, I don’t think any comparative benchmark would ever exclude something on the grounds of “oh, it’s just 10%, who cares.” The issue is more that the Apple Vision Framework isn’t well known as an OCR option, though that may be starting to change.
And another part of the irony is that Apple’s framework probably sees far more real-world usage than most of the tools in that benchmark.