Short answer: we can't, and we don't. Most EULAs explicitly prohibit users from benchmarking, and we don't want to take on any such risk. Plus, since we develop a competing product, any "deep look" into the competition might be seen as reverse engineering it, and our company is very careful to avoid such problems.
Our company has dedicated teams that evaluate competing products, so we asked them once (a couple of years ago) and could only see aggregated, anonymized results. But the patterns were very clear. Anecdotal evidence (mostly from customers of ours who compare our internal engine with alternatives themselves) suggests that most of the competition offers a fairly stable service, so quality likely hasn't evolved much in the last two years, though of course we can't be sure.
We constantly track our own accuracy on internally developed benchmarks, because frankly the ones available online (including those used for research purposes) are very bad. But as mentioned, for legal reasons we can only continuously test our own engine and open-source ones, like Tesseract.
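To give a sense of what "tracking accuracy" means in practice, here's a minimal sketch of one common way to score an OCR engine against a ground-truth transcript: character error rate (CER), i.e. edit distance divided by reference length. This is not our internal tooling, just an illustration; the example strings are made up.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings (rolling-row DP)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edits needed / reference length."""
    if not reference:
        return 0.0 if not hypothesis else 1.0
    return edit_distance(reference, hypothesis) / len(reference)

# Two classic OCR confusions: 'l' read as '1', '0' read as 'O'.
print(round(cer("invoice total: 42.00", "invoice tota1: 42.O0"), 3))  # → 0.1
```

A real benchmark would run this over a labeled corpus and aggregate per document class (scans, photos, receipts, etc.), since error rates vary wildly by input quality.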