If your prod environment is hundreds of terabytes, then building good dev environments is even more crucial, because you can’t run things locally.
If you’re running hundreds of terabytes, then the systems in place to shard that data must be well tested.
Migrations must happen on similarly sized data, along with the various distributed transaction guarantees, because I doubt you’re going to be using direct-attached storage for that. And if you are, then multipath needs to be part of your testing too.
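To make the sharding point concrete, here’s a minimal sketch (assumptions: a hypothetical `shard_for` router using naive hash-mod placement, not anyone’s real system) of the kind of property you want a test environment to catch before a resharding migration hits prod:

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    # Hypothetical stand-in for a real shard router: naive hash-mod placement.
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

keys = [f"user:{i}" for i in range(10_000)]

# With hash-mod sharding, growing the cluster remaps almost every key,
# which turns "add one shard" into a near-total data migration. That is
# exactly the kind of behavior you want a realistic test environment to
# surface, not production.
moved = sum(1 for k in keys if shard_for(k, 8) != shard_for(k, 9))
print(f"{moved}/{len(keys)} keys move when growing 8 -> 9 shards")
```

With this scheme roughly 8/9 of the keys relocate on a single-shard expansion; schemes like consistent hashing exist precisely to shrink that number, and a shard-assignment test like the above is how you verify your system actually has that property at migration time.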
Is it expensive? Yes. But that’s what working with that amount of data costs.
Or is this a strawman intended to stump me? I have dealt with such “data requirements” before, and when they saw the sticker price of doing things properly, suddenly those hundreds of terabytes weren’t as “required” anymore.