Additionally, a lot of audio pipelines (even beyond the DAC - amplifiers and the like) can end up producing artifacts and harmonics at more audible frequencies. This tends to be most notable with extremely high sample rates (like 96 kHz): no human can actually hear content near the top of that range, but that doesn't mean it can't affect the audible range once it's played back on real equipment.
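As an illustration of how that can happen: even a mild nonlinearity downstream (an amplifier, say) will intermodulate purely ultrasonic tones into a difference tone squarely in the audible band. Here's a toy pure-Python sketch - the 30/32 kHz tones and the small squared term standing in for an "amplifier" are made-up example values, not a model of any real device:

```python
import math

fs = 96000          # sample rate (Hz)
n = 9600            # 0.1 s of signal -> 10 Hz DFT bin resolution

def tone(freq, amp=0.5):
    return [amp * math.sin(2 * math.pi * freq * i / fs) for i in range(n)]

def magnitude_at(signal, freq):
    """Amplitude of one DFT bin (freq must be a multiple of fs/n)."""
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return 2 * math.sqrt(re * re + im * im) / n

# Two purely ultrasonic tones: inaudible on their own.
clean = [a + b for a, b in zip(tone(30000), tone(32000))]

# A mildly nonlinear "amplifier": the output picks up a small squared term.
distorted = [x + 0.1 * x * x for x in clean]

print(magnitude_at(clean, 2000))      # effectively zero: no audible content
print(magnitude_at(distorted, 2000))  # difference tone at 32 kHz - 30 kHz = 2 kHz
```

The clean signal has nothing at 2 kHz; after the nonlinearity, a very audible 2 kHz difference tone appears, even though nothing in the input was below 30 kHz.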
The big point is that "Being Able To Tell The Difference" isn't always the same as "Better Quality". You're often just replacing one artifact of the playback pipeline with another. Neither may truly match the original performance.
[0] https://sound.stackexchange.com/questions/38109/lame-why-is-... - while not an explicit "low-pass" filter, the default option of "-Y" does something similar.
First: I have done this test myself many times in various ways, including recreating albums as a mix of 16-bit FLAC and V0 MP3 (track by track, not within tracks), putting them on, and listening on speakers. I can tell sometimes, but the V0 still sounds great.
I was able to distinguish the 3 rock recordings with confidence; high-frequency transients sounded more impactful in WAV. The Queensryche in particular has a lot of (well-applied!) dynamic compression on the acoustic guitar and vocals, which really brings out those transients.
However, if I heard the MP3 in isolation I would not detect anything was off. They all sounded good.
On the Morricone and Vangelis I had no conviction either way, and I guessed wrong both times. I suspect a lot of high-frequency sound was lost in their recording/mixing/mastering anyway. In either case, I don't know whether the CD masters were made from the original tapes. I know the Blade Runner OST has had a convoluted release history; the Morricone has a 2004 CD master which is pretty well liked.
"Moving Pictures" was recorded to tape, but was notably an early digitally mastered album. Maybe that has resulted in preserved high frequency sound.
Compressed audio is great, I love it and I use it a lot.
I use CD quality for archival purposes and my home library; for most of the past decade hard disks have been inexpensive. I convert to Opus 192 for mobile devices.
Another reason for CD-quality archiving: I have a long-term idea of recreating a CD collection. I want to get printable CDs and burn the audio/print the art, because I want my children to have the experience of going through a shelf or flipping through a binder, putting the disc in the tray, and pressing play. I always loved doing that.
Again, could I tell if I transcoded a well-encoded mp3 back to Red Book? Maybe not consistently, but it's more likely that the mp3 -> CD transcode would introduce audible problems than that the WAV -> mp3 encoding would.
MP3 is fundamentally flawed and has audible artifacts no matter what the bitrate is. If you use a newer codec (AAC or Opus) you'll probably not notice anything.
But sadly today most popular music is ruined beyond repair with dynamic compression, not data compression. The craven stupidity of the loudness war may be unequaled in the history of art, and yet even the artists often don't seem to understand what the problem is. You see legendary artists complaining about modern sound quality (Dylan, Neil Young, and so forth) but then cheerleading for absurd sampling rates and bit depth. NO. That isn't the problem. I have 45-RPM records that sound better than their "lossless," "remastered" incarnations on streaming services.
The biggest problem in popular music (and I would say this probably pervades everything but classical at this point) is dynamic compression.
Today “loudness” is an aesthetic choice and good mixers and producers know how to craft a record that is both loud and of good sonic quality.
There is a place for both dynamic records (in the sense of classical or old jazz records) and contemporary loudness aesthetic.
Can inexperienced producers/mixers do a hack job trying to emulate the loud mixes of pros? Yes. The difference comes down to taste and ability to execute with minimal sonic tradeoffs.
Source: I have a long history producing, mixing, and mastering records and work among Grammy winners regularly. Very much in the dirt on contemporary records.
Not going to argue with you regarding dynamic compression, but after backing away from the worst excesses of the volume wars by mastering engineers in the mid '00s, things are sounding better to my ears. Dynamic compression can sound good (even in the extreme) if done for artistic effect. Like here's Beck's Ramona where the drums & cymbals have the tar squashed out of them with serious limiting, which to my ears nicely tames the sonics of Joey Waronker's spirited performance, while fitting well dynamically into the rest of the song. https://www.youtube.com/watch?v=e3yZ9OVjzbE
That said, maybe the engineers responsible for some of the worst dynamic squashing could be pressed into TV/film audio service, where in 2026 there are still extreme volume imbalances between on-screen dialogue and everything else (hint: the dialogue isn't loud enough, and everything else, especially crashes and explosions, is wayyy too loud).
I would never know the difference during casual listening. Only in this setting where I'm told upfront that there is a difference, do I notice it.
I don't make any claim to any special hearing or expertise. I've been listening to practically only lossy music since around '98, ripping from CDs at that time.
Morricone and Vangelis have been especially hard for me to tell apart, could have been a random guess on my part (I listened to those ~20 times).
When I read the title I expected to hear the actual _difference_ between the lossless and lossy waveform - i.e. only the actual artifacts. Could be a fun exercise.
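That exercise is easy to try once both versions are decoded to time-aligned PCM: subtract one waveform from the other and listen to the residual. A rough stdlib-only sketch, assuming two 16-bit WAV files with identical length, channel count, and sample rate (note that MP3 decoders typically add encoder delay/padding, so real files need aligning first):

```python
import struct
import wave

def write_difference(lossless_path, lossy_path, out_path, gain=1.0):
    """Write (lossless - lossy) as a WAV: the artifacts by themselves."""
    with wave.open(lossless_path, "rb") as a, wave.open(lossy_path, "rb") as b:
        assert a.getsampwidth() == 2 and b.getsampwidth() == 2, "16-bit PCM only"
        params = a.getparams()
        sa = struct.unpack("<%dh" % (a.getnframes() * a.getnchannels()),
                           a.readframes(a.getnframes()))
        sb = struct.unpack("<%dh" % (b.getnframes() * b.getnchannels()),
                           b.readframes(b.getnframes()))
    # Sample-by-sample difference, clipped back into 16-bit range.
    diff = [max(-32768, min(32767, int(gain * (x - y)))) for x, y in zip(sa, sb)]
    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        out.writeframes(struct.pack("<%dh" % len(diff), *diff))
```

The residual is usually very quiet, hence the `gain` parameter for bringing it up to an audible level.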
Because any of us from the late '90s/early 2000s who used the early versions of LAME will tell you in a second how easy it was to pick out the MP3 from the raw audio, even at 320 kbps.
Few audio things bug me more than the kind of tinkly pre-echo effects that were pervasive for a while.
Once you hear the difference in sound quality or see the difference in image quality, you cannot undo it.
I have become very picky with display resolution and text clarity, and it has not served me well. I miss the days I was happy with a 1080p monitor.
Now, if you ask me, that monitor is causing eye damage, and I'd rather not use the computer that day than use it.
On the other hand, the only sample in which I didn't hear ANY difference is Ennio Morricone's, to the point where I couldn't really tell it apart from its 56kbit/s version.
Can the hearing be selectively bad for some frequencies within the standard 20-20000 range, and normal for the others?
Yes. Your ears are acoustic filters, just like microphones and speakers. When you get your ears tested, you get a chart that looks suspiciously like a speaker/mic response chart: frequency on the x-axis, dB attenuation on the y-axis.
So a person with bad ears could have fine hearing below, say, 5 kHz, but with a sharp cut-off beyond that. Or it could be the other way round. Or you could have a notch in the middle. Calibrated hearing aids just take this chart and boost the frequencies your ears are attenuating. You can EQ your own sound equipment based on the chart to get a result that compensates for your ears.
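That last idea can be sketched in a few lines. The frequencies and dB losses below are made-up example values, not a real audiogram, and real hearing aids apply compression rather than flat per-band gain - this is only the "invert the chart" intuition:

```python
# Toy audiogram-based EQ: boost each band by its measured hearing loss,
# capped so we don't ask the equipment for absurd gain.
audiogram_loss_db = {250: 5, 500: 5, 1000: 10, 2000: 15, 4000: 35, 8000: 50}

MAX_BOOST_DB = 25  # real hearing aids compress rather than boost without limit

eq_gains_db = {freq: min(loss, MAX_BOOST_DB)
               for freq, loss in audiogram_loss_db.items()}

for freq in sorted(eq_gains_db):
    print(f"{freq:>5} Hz: +{eq_gains_db[freq]} dB")
```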
When I first started encoding MP3s I used a 128 kbps rate, which is noticeably inferior to the original CD. I noticed this in the early 2000s when I wound up listening to a CD of some music I usually listened to as a 128 kbps MP3 and was blown away by how much more I heard.
I'd say that 192kbps is much better and the 320kbps that the author advocates is basically transparent.
Also, you can train yourself for what to listen for, to a point.
Of course this does matter to some people and I say "have fun".
I had Tidal many years back, and from the Lossless v Regular I only ever noticed a difference when it came to breathy sounds/etc. I did see that Tidal would burn through like 50GB of data monthly though.
Also, you may want to test some more modern recordings; the microphone/mastering quality of recordings nowadays is far better than it was two decades ago (despite what some audiophiles may claim).
In practice, on average playback equipment (by which I mean decent hifi) in an average listening environment most people can’t tell the difference.
But… I’ve also done blind testing with a top mastering engineer on studio speakers, and he was able to identify 48 vs 192 kHz reliably.
Mastering quality was ruined by the battle for perceived loudness. So masters with a decent degree of dynamic range are definitely helpful.
I've heard things get close using regular CD audio with some umpteen-channel DSP effects, but nothing like that from two speakers and a straight playback with no effects processing.
I've also had a binaural headset demo get really really close. I imagine it could be better, but this was for some generic model, not anything that is tuned to your own personal ear shape etc.