Source?
I assume seeking implies a significantly faster way to skip the first N bytes of the decompressed output without actually decompressing them. "Significant" here means a time complexity strictly less than linear in the size of the compressed or decompressed data being skipped: for an RLE sequence like `10a90b`, seeking to the 50th byte should avoid processing `10a` at all; reading `10a` but not actually emitting 10 copies of `a` doesn't count.
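To make the distinction concrete, here's a sketch of seeking in a parsed RLE stream (the function name and the `(count, byte)` representation are mine, just for illustration): whole runs before the offset are skipped with a single addition, never expanded.

```python
def rle_seek_read(runs, offset, length):
    """Read `length` bytes starting at decompressed `offset` from an RLE
    stream given as (count, byte) pairs, skipping runs arithmetically."""
    out = bytearray()
    pos = 0  # current position in the (virtual) decompressed output
    for count, byte in runs:
        if pos + count <= offset:
            pos += count  # skip the entire run without emitting anything
            continue
        start = max(offset - pos, 0)
        take = min(count - start, offset + length - (pos + start))
        out.extend(byte * take)
        pos += count
        if len(out) == length:
            break
    return bytes(out)

# `10a90b` as parsed runs: the run of a's is never expanded when seeking past it
runs = [(10, b"a"), (90, b"b")]
print(rle_seek_read(runs, 50, 5))  # b'bbbbb'
```

The skip costs one comparison and one addition per run, so it is linear in the number of runs rather than in the number of decompressed bytes, which is exactly the sublinear behaviour the comment above is asking for.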
Virtually every compression algorithm has two main parts: modelling and coding. Modelling either transforms the input into a more compressible form or estimates how likely a particular portion of the input is. Those results are then coded into one or more sets of symbols with different probabilities, producing a compressed bit sequence. Both parts can be made as complex as needed and will likely destroy any relation between input and output bytes. Many algorithms, including Zstandard, do come with a framing scheme to aid data verification, but such framing is generally not enough for actual seeking unless each frame declares both its uncompressed and compressed sizes and frames can be decompressed independently of each other. That's what Zstandard's seekable format actually does: it records both sizes for every frame, with an implicit promise that a new frame is started every so often. (Zstandard frames are already defined to be independent.)
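A minimal sketch of that idea, using zlib as a stand-in for independent Zstandard frames (the function names and index layout are invented for illustration; the real seekable format stores its index in a skippable frame at the end of the file):

```python
import bisect
import zlib

def compress_seekable(data, frame_size=64):
    """Compress `data` as independent frames plus an index of
    (compressed_size, decompressed_size) pairs, one per frame."""
    frames, index = [], []
    for i in range(0, len(data), frame_size):
        chunk = data[i:i + frame_size]
        comp = zlib.compress(chunk)
        frames.append(comp)
        index.append((len(comp), len(chunk)))
    return b"".join(frames), index

def read_at(blob, index, offset, length):
    """Read `length` decompressed bytes at `offset`, decompressing only
    the frames that overlap the requested range."""
    # Cumulative decompressed end offsets let us binary-search the first frame.
    ends, pos = [], 0
    for _, dsize in index:
        pos += dsize
        ends.append(pos)
    frame = bisect.bisect_right(ends, offset)
    # Both sizes matter: compressed sizes locate the frame in the blob,
    # decompressed sizes locate the offset in the output.
    comp_off = sum(csize for csize, _ in index[:frame])
    dec_start = ends[frame] - index[frame][1]
    out = bytearray()
    while len(out) < length and frame < len(index):
        csize, dsize = index[frame]
        chunk = zlib.decompress(blob[comp_off:comp_off + csize])
        start = max(offset - dec_start, 0)
        out.extend(chunk[start:start + length - len(out)])
        comp_off += csize
        dec_start += dsize
        frame += 1
    return bytes(out)
```

The point of recording both sizes per frame is visible in `read_at`: the decompressed sizes answer "which frame holds byte N?", and the compressed sizes answer "where does that frame start in the file?", so all frames before it are skipped without touching their contents.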
There do exist some compression schemes that retain seekability by design. Texture compression is a good example: the massively parallel nature of GPUs means that a fixed-rate scheme can be much more efficient than a variable-rate and thus inherently sequential scheme. There are also some hybrids that use two interconnected algorithms, one for the CPU and one for the GPU. But such algorithms are not the norm.
Wtf, you were the one who brought that up.