The effect of bit depth has little to do with how you perceive the sound; what adding more bits does is allow for more dynamic range, i.e. a greater difference between the loudest and quietest possible sounds. More bits bring down the noise floor. This means that, for example, the final part of a fade-out retains more detail at 24 bits than at 16, but this difference is not something you would be able to notice under normal listening conditions.
If you'd like to learn more about the effects of bit depth, I'd recommend "Digital Show & Tell" by Monty of Xiph.Org at https://www.xiph.org/video/.
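If it helps to put numbers on the "noise floor" point, here's a back-of-the-envelope sketch (plain Python, using the standard ~6.02·N + 1.76 dB quantization-SNR rule of thumb; the exact figure depends on dither and signal, so treat it as a ballpark):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer with a full-scale
    sine input: roughly 6.02 * N + 1.76 dB."""
    return 20 * math.log10(2 ** bits) + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB of dynamic range")
# 16-bit: ~98 dB of dynamic range
# 24-bit: ~146 dB of dynamic range
```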
That really doesn't make any sense. The bit depth determines the dynamic range, i.e. the difference between the loudest and quietest sounds that can be encoded. 16 bits is enough to go from "mosquito in the room" to "jackhammer right in your ear". Congratulations, 24 bits lets you go up to "head in the output nozzle of a rocket taking off" with room to spare, that's… not very useful?
Now what might make sense — aside from plain placebo — is a difference in mastering. For instance lots of SACD comparisons at the time were really comparing differences in mastering, with the SACD converted to regular CDDA turning out way superior to the CD version because the mastering of the CD was so much worse.
The "Loudness Wars" is an especially bad period of horrible mastering, and it went from the mid 90s to the early-mid 2010s (which doesn't mean that regular-CD has gone back to "super awesome", just that you're unlikely to have clipping throughout a piece these days).
When people talk about 24 bits (and >48kHz) in the context of "audiophilia", it's generally about the data at rest, i.e. "HD audio" (24-bit music files and downloads). It's not about the bit depth of the processing pipeline, for which it's generally acknowledged that yes, >16-bit depth does make sense (as it does for the original recording).
Unless this was a double-blind study and the audio levels were exactly the same between runs, this is useless data. Even a 0.1 dB SPL difference between runs is noticeable (people gravitate toward louder sounds as sounding better).
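For what it's worth, level matching is easy to check in software before any blind comparison; a rough sketch (Python with NumPy, hypothetical buffers standing in for the two playback chains):

```python
import numpy as np

def rms_db(samples: np.ndarray) -> float:
    """RMS level of a signal in dBFS (full scale = 1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(samples ** 2)))

# Hypothetical decoded buffers from the two chains under test.
a = np.random.uniform(-0.5, 0.5, 48000)
b = a * 1.0116  # roughly 0.1 dB hotter than a

diff = rms_db(b) - rms_db(a)
print(f"Level difference: {diff:+.2f} dB")

# Apply a compensating gain so the comparison isn't just "louder wins".
b_matched = b * 10 ** (-diff / 20)
```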
> every time I switch sound cards to 24 bits
This may be related to the sound card. I use an external DAC, not a soundcard, as most soundcards that come with computers are not up to par.
Changing 16 bits to 24 bits should not change the audio in a way that is discernible to the human ear.
> This may be related to the sound card. I use an external DAC, not a soundcard, as most soundcards that come with computers are not up to par.
For simplicity, I did not talk about them separately. BTW, following your logic there is no point in buying a DAC unless there was a double-blind study comparing these DACs to cheaper sound cards. Both are 16-bit/48000, aren't they?
> Changing 16 bits to 24 bits should not change the audio in a way that is discernible to the human ear.
This is a bold statement, which itself calls for proof.
Dynamic range is not the loudest-sound/quietest-sound ratio (as one would expect), but the loudest-sound/noise-level ratio. Otherwise you would need to count additional bits to encode the quietest sound with low enough quantization noise.
The threshold of hearing can be as low as -9 dB SPL, so one would want the noise level below that. Therefore, with the 96 dB of dynamic range from 16 bits, the loudest representable sound would be around 87 dB SPL. But symphonic orchestra music may have peaks well above 100 dB.
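To put numbers on that argument, here's a small sketch of the bit count you'd need for a given SPL window (Python, using the ~6.02 dB-per-bit rule; the peak figure is just an assumed example, not a measurement):

```python
import math

def bits_needed(peak_spl: float, noise_floor_spl: float) -> int:
    """Bits needed so quantization noise sits at or below a target
    noise floor, using the ~6.02 dB-per-bit rule of thumb."""
    return math.ceil((peak_spl - noise_floor_spl - 1.76) / 6.02)

# Assumed figures from the comment above: noise floor at -9 dB SPL,
# orchestral peaks somewhere above 100 dB SPL.
print(bits_needed(peak_spl=105, noise_floor_spl=-9))  # -> 19 bits
```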
The audiophile world would do well to adopt the concept of double-blind study.