That may be
your experience, but certainly not a universal experience (and apparently not the author's, either). In my experience, it's pretty easy to end up with CSVs (or Parquet files, or whatever) that are tens or hundreds of GBs in size. The space savings from a more modern file format are significant, as is the convenience of being able to request and download/open only a subset of rows or columns over the network. Most of us don't have workstations with 50GB of RAM, because it's far more cost-effective to use a cloud VM if you only occasionally need that much memory.
That being said, the real point here is that folks blindly use CSVs for internal-facing processes even though there's no particular reason to, and CSVs have plenty of drawbacks. If you're just building some kind of ETL pipeline, why wouldn't you use Parquet? It isn't as if you're opening stuff in Excel.