The smaller the chunk (file) sizes, the less benefit you're getting from pushing the chunks through a CDN: more requests miss the cache and come back to your origin server. And there's a lot of complexity in the pipeline from encoder -> origin server -> CDN that's mostly hidden (which is good) until you hit big performance cliffs (which is bad).
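To put rough numbers on that, here's a back-of-envelope sketch (the segment durations are my own illustrative picks, not anything specific from above) of how shrinking segments multiplies the request rate the CDN has to absorb, and forward to the origin on every cache miss:

```python
# Back-of-envelope: request amplification from smaller segments.
# Segment durations here are illustrative, not from any real deployment.

def requests_per_hour(segment_seconds: float, renditions: int = 1) -> float:
    """Segment requests one viewer generates per hour of playback."""
    return 3600.0 / segment_seconds * renditions

classic = requests_per_hour(6.0)   # "classic" HLS, 6-second segments
low_lat = requests_per_hour(1.0)   # low-latency, 1-second parts

print(classic, low_lat, low_lat / classic)
# 6x as many requests per viewer, each for a smaller (less cacheable) object
```

And that's per rendition; multiply again by the ladder, plus playlist refreshes, and the "CDN absorbs everything" assumption starts to wobble.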
This is something that I think most people trying to implement low latency HLS and DASH have struggled with. It's not only the connection to the client that can stall. Your CDN can stall internally, too, waiting on chunks. And, in fact, if your CDN never ever has any internal performance issues, that's probably an indication that it's configured in such a way as to make the costs of delivering video through the CDN pretty much the same as the costs of delivering that same data through cascading WebRTC media servers!
Also, the CDN -> client link is TCP, so you're giving up the ability to just drop packets. TCP is going to do its lossless/ordered thing for you (head-of-line blocking included), which again is great for most of what we do on the Internet but starts to actively work against you when you're trying to get down to very low latencies.
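Here's a toy model of that head-of-line blocking effect (my own illustration, with made-up timing numbers): one lost packet stalls everything behind it until the retransmit lands, roughly one RTT later, even though the later packets arrived on time.

```python
# Toy model of TCP in-order delivery: a lost packet holds back every
# packet behind it until the retransmission fills the gap.
# All numbers are illustrative; real TCP retransmit timing is messier.

def app_delivery_times(n_packets, send_interval, rtt, lost):
    """Return when each packet becomes readable by the application.

    Packet i arrives at i*send_interval + rtt/2 (one-way delay).
    A lost packet is retransmitted and lands ~one RTT late (simplified).
    In-order delivery: packet i is readable only once 0..i all arrived.
    """
    one_way = rtt / 2
    arrivals = []
    for i in range(n_packets):
        t = i * send_interval + one_way
        if i in lost:
            t += rtt  # simplified retransmission penalty: one extra RTT
        arrivals.append(t)
    readable, latest = [], 0.0
    for t in arrivals:
        latest = max(latest, t)  # can't deliver past a gap
        readable.append(latest)
    return readable

# 10 packets sent 10 ms apart, 100 ms RTT, packet 3 lost once:
times = app_delivery_times(10, 0.010, 0.100, lost={3})
print(times)
# Packets 0-2 are readable on arrival; packets 3-9 all stall until
# the retransmit of packet 3 arrives, ~100 ms late.
```

With UDP-based transports (WebRTC, or QUIC with per-stream framing) the receiver can skip or selectively wait on that gap instead of stalling the whole byte stream.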
Does that make sense? I tried to cover some of this in the footnotes. Apologies for not doing a better job.