Thousands of pages is pretty good, and it's what I'm coming to expect on the low end for cheap (single consumer GPU or NPU) throughput with the 5…8GB models now. Heck, with some of the optimizations llama.cpp has made, like converting SafeTensors weights into quantized GGUF, you can cut actual memory usage way down.
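To make that concrete, here's a rough back-of-envelope sketch (my numbers, not anything from llama.cpp itself) of how quantization shrinks the raw weight footprint of a ~7B-parameter model; real GGUF files add per-block scales and metadata, so actual sizes run a bit higher:

```python
# Illustrative estimate only: raw weight memory for a ~7B model at
# different bit widths. Real GGUF quant formats (Q4_0, Q8_0, etc.)
# carry extra per-block scale data, so on-disk files are somewhat larger.

def weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB."""
    return n_params * bits_per_weight / 8 / 1e9

params = 7e9  # a typical model in the 5-8 GB GGUF range

for label, bits in [("FP16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"{label}: ~{weight_gb(params, bits):.1f} GB")
# FP16: ~14.0 GB, 8-bit: ~7.0 GB, 4-bit: ~3.5 GB
```

That 4x drop from FP16 to 4-bit is why these models suddenly fit on consumer hardware at all.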
A cheap Mac mini with Apple's Neural Engine is good enough that it can roleplay smut with a human at conversational speed. We're going to see a rapid increase in throughput per dollar; we've already got small LLMs that run on mobile phones.