I mean, sure, they’re rasterized—but are they rasterized to the same native texture format as bitmaps are? I know that for games, the “read-only” assets loaded from disk either come with rasterization metadata or are already transcoded to the native texture format (depending on the engine’s architecture), such that you get more efficient VRAM consumption at the expense of loading time (this being why games can take so long to load a scene even though GPUs get more PCI-e lanes than NVMe drives do.)
Whereas I’d expect that texture elements rendered during a scene, which might stick around for only a few frames, would be uploaded to the GPU “uncompressed”: faster to load (less driver delay despite the larger size), but then eating more memory bandwidth and texture-cache space per sample on each frame, thus taxing the GPU more.
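To make the VRAM side of that tradeoff concrete, here’s a back-of-envelope sketch. The bit rates are the standard ones for raw RGBA8 versus the common BC1/BC7 block-compressed formats; the 1024×1024 texture size is just an illustrative example, not something from the discussion above:

```python
# Rough VRAM cost of one 1024x1024 texture (no mipmaps) in three formats.

def raw_rgba8_bytes(w, h):
    # Uncompressed: 4 bytes per texel
    return w * h * 4

def bc1_bytes(w, h):
    # BC1/DXT1: 4x4 texel blocks, 8 bytes per block (0.5 bytes/texel)
    return (w // 4) * (h // 4) * 8

def bc7_bytes(w, h):
    # BC7: 4x4 texel blocks, 16 bytes per block (1 byte/texel)
    return (w // 4) * (h // 4) * 16

w = h = 1024
print(raw_rgba8_bytes(w, h))  # -> 4194304 bytes (4 MiB)
print(bc1_bytes(w, h))        # -> 524288 bytes (0.5 MiB, 8:1 vs raw)
print(bc7_bytes(w, h))        # -> 1048576 bytes (1 MiB, 4:1 vs raw)
```

That 4:1 to 8:1 ratio applies to sampling bandwidth too, since the texture units fetch the compressed blocks directly, which is why the “raw pixel field in VRAM” case costs so much more per frame.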
I definitely know this was true back in the 90s on game consoles—data streamed from disc was pre-optimized for the GPU, while compositing from a raw pixel field held in VRAM (because it was, e.g., a minimap texture you were redrawing, or somesuch) left much less of the GPU available for anything else.