Not the kernel, but ntfs.sys. It's a design limitation of NTFS, a tradeoff made for other goals. When NTFS was designed, high-frequency reading and writing of many small files was not at all common.
This limitation does not exist on FAT/FAT32/exFAT partitions, though every filesystem has some per-file overhead, and FAT filesystems have their own problems.
IO performance tools don't seem to test reading and writing a large number of small files; they tend to use a single large file and measure throughput to and from it. That's by design, but it means those tools won't surface filesystem design limitations, or let you measure this kind of workload on a per-filesystem basis.
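To make the distinction concrete, here is a minimal sketch of the two workloads. The function names, file counts, and sizes are my own invention for illustration; real benchmarks control for caching, sync behavior, and warm-up, which this deliberately ignores.

```python
import os
import tempfile
import time

def bench_many_small_files(directory, count=1000, size=1024):
    """Write and read back `count` separate files of `size` bytes each.

    This is the workload most IO benchmarks skip: per-file open/close
    and metadata costs dominate, which is exactly where filesystem
    design differences show up.
    """
    payload = b"x" * size
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(directory, f"small_{i}.bin"), "wb") as f:
            f.write(payload)
    for i in range(count):
        with open(os.path.join(directory, f"small_{i}.bin"), "rb") as f:
            f.read()
    return time.perf_counter() - start

def bench_one_large_file(directory, count=1000, size=1024):
    """Write and read back one file of count * size bytes.

    This is the workload typical benchmark tools measure: one open,
    sequential IO, almost no metadata overhead.
    """
    payload = b"x" * size
    path = os.path.join(directory, "large.bin")
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(payload)
    with open(path, "rb") as f:
        f.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        small = bench_many_small_files(d)
        large = bench_one_large_file(d)
        print(f"many small files: {small:.4f}s  one large file: {large:.4f}s")
```

Run on an NTFS volume versus a FAT32 volume, the gap between the two numbers tells you something the single-large-file number alone never will.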
> Edit: It shouldn't be a surprise that Visual Studio is essentially abandoned by Microsoft.
Again, not true.
No one can know everything MS is doing, of course, but the number of people who think they do is quite high. (I'm not referring to the person whose comment I'm replying to.) I just see a lot of claims about MS or MS tools stated as fact that are entirely incorrect.