Whereas I’d expect that texture elements rendered during a scene, which might stick around for only a few frames, would be uploaded to the GPU uncompressed: they’d load faster (less driver-side encoding delay, despite the larger size), but then cost more memory bandwidth every time a texture unit samples them, taxing the GPU more on each frame.
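To put rough numbers on that tradeoff, here’s a back-of-the-envelope sketch. The formats are illustrative assumptions on my part (RGBA8 at 32 bits per texel for the raw case, BC1/DXT1 at 8 bytes per 4×4 block for the compressed case), not tied to any particular API:

```python
# Back-of-the-envelope VRAM / sampling-bandwidth footprint for one texture.
# Assumed formats: RGBA8 (4 bytes per texel, stored raw) vs. BC1/DXT1
# (each 4x4-texel block packed into 8 bytes).

def rgba8_bytes(w, h):
    return w * h * 4  # 4 bytes per texel, uncompressed

def bc1_bytes(w, h):
    blocks = (w // 4) * (h // 4)  # BC1 packs each 4x4 block into 8 bytes
    return blocks * 8

w = h = 1024
raw = rgba8_bytes(w, h)
compressed = bc1_bytes(w, h)
print(raw // 2**20, "MiB raw vs", compressed / 2**20, "MiB BC1 ->",
      raw // compressed, "x less data to move per sample when compressed")
```

So the uncompressed upload skips the encoding step, but every subsequent frame pays roughly 8× the memory traffic for the same texels.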
I know this was definitely true back in the ’90s on game consoles: data streamed from disk was stored in whatever format the GPU preferred, while compositing from data held as a raw pixel field in VRAM (because it was, e.g., a minimap texture you kept redrawing or somesuch) left much less GPU headroom for everything else.