https://www.20k.org/episodes/loudnesswars
It's pretty standard for that podcast, which features lots of stories about sound and sound engineering. There's another interesting episode on creating the quietest possible room, and how eerie it feels to be inside it. https://www.20k.org/episodes/silence
Another on how restaurants got louder and louder over time, that deep bass sound that's taken over film, another on scientific experiments trying to measure if the Stradivarius is as great a violin as everyone insists, another on the tangled history of the iconic Price is Right theme song... so many really.
Great podcast, lots of range given the topic.
No subscribe via RSS option either, only Apple Podcasts and Google Podcasts. This is becoming depressingly common.
Download: In the player click "share" and then click the "down arrow" icon.
RSS feed: In the player click "subscribe" and then click the RSS icon (it's the first one)
Since YT and Spotify are some of the predominant ways people listen to music these days, and they normalize loudness, music producers are starting to go back to more normal masters, thankfully.
I'm very pleased by everything I just read in this link — they really think of the user first, and favor music quality and the general experience. It's exactly how I did/do it myself, manually.
Awesome, really awesome to learn that the loudness war is finally coming to a close.
Really glad to know Spotify is on the side of science, here.
[1]: This blog has got to be the most interesting I've ever read on whatever topic it touches, including the objective vs. subjective debate: http://nwavguy.blogspot.com/2011/05/subjective-vs-objective-...
This avoids the problem of cruder methods, where you take, say, the peak dB as the "loudness" of the track. Doing it that way would take a fairly quiet song with one overly dramatic drum hit and make it so you couldn't hear the rest.
EDIT: I think they will also turn a track up if it's under the target, but I'm not absolutely sure. There would be no real reason not to.
> ReplayGain is different from peak normalization. Peak normalization merely ensures that the peak amplitude reaches a certain level. This does not ensure equal loudness. The ReplayGain technique measures the effective power of the waveform (i.e. the RMS power after applying an "equal loudness contour"), and then adjusts the amplitude of the waveform accordingly. The result is that Replay Gained waveforms are usually more uniformly amplified than peak-normalized waveforms.
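To make that distinction concrete, here's a minimal sketch in plain Python. It skips the equal-loudness filter, so it's only an approximation of what ReplayGain actually measures, but it shows why RMS-based gain handles the "quiet song with one drum hit" case better than peak normalization:

```python
import math

def peak_gain(samples, target=1.0):
    """Peak normalization: scale so the largest sample hits the target.
    A single loud transient dominates the result."""
    peak = max(abs(s) for s in samples)
    return target / peak

def rms_gain(samples, target_rms=0.1):
    """ReplayGain-style idea (simplified): scale by average power.
    Real ReplayGain applies an equal-loudness contour first; this
    sketch uses plain RMS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return target_rms / rms

# A fairly quiet track with one overly dramatic drum hit:
quiet = [0.01] * 999 + [0.9]
print(peak_gain(quiet))  # barely any boost: the single hit sets the level
print(rms_gain(quiet))   # a real boost: the body of the track sets the level
```

Peak normalization leaves the track nearly untouched because of that one hit; the RMS approach turns the whole track up so the rest is actually audible.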
Generally we indeed target both up or down in loudness (although 'up' is quite rare, typically high-end recordings of classical or jazz or otherwise very dynamic pieces).
Honestly, everything I read screams that Spotify really hired the right people to make these decisions.
Now my only remaining gripe is their handling (or rather, lack thereof) of metadata — how hard is it to display an actual first release date? But meh. Audio quality is great, no question about it. I just wish they'd stream FLAC in very-high quality + "quiet" environment settings (the "pure hi-fi" experience, so to speak), but that'll come in due time I suppose.
I guess they don't want to bother with limiters, so understandable.
There's a bigger problem, though. My girlfriend complained about volume fluctuations in her car between tracks. She uses an iPhone so I enabled "Soundcheck" (Apple's normalisation). It works fine in the car. But now she complains that headphones simply aren't loud enough. It turns out that the volume controls are essentially made for overly-compressed, loud content and can't sufficiently amplify dynamic content.
So, unfortunately, people will just turn it off and there's still an advantage to mastering "loud".
I also wonder if the resurgence of vinyl, and rise of home producers on bandcamp/soundcloud/etc have helped contribute.
Spotify normalizes volume to a LUFS level, which is not nearly the same thing as overly compressing dynamics and causing clipping.
https://www.sweetwater.com/insync/what-is-lufs-and-why-shoul...
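As a back-of-the-envelope sketch (assuming the integrated LUFS value has already been measured elsewhere with a proper ITU-R BS.1770 meter), the gain a service applies is just the difference between its target and the measured loudness:

```python
def normalization_gain_db(measured_lufs, target_lufs=-14.0):
    """Gain in dB a service would apply to hit its loudness target.
    -14 LUFS is Spotify's default target; the loudness measurement
    itself is assumed to be done by a full BS.1770 meter."""
    return target_lufs - measured_lufs

# A crushed modern master at -6 LUFS gets turned DOWN 8 dB;
# a dynamic classical recording at -20 LUFS gets turned UP 6 dB.
print(normalization_gain_db(-6.0))   # -8.0
print(normalization_gain_db(-20.0))  # 6.0
```

Note that this is a single static gain per track (or per album): the dynamics within the track are left completely intact, which is exactly why it's nothing like mastering-stage compression.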
Then the buses started announcing station names out loud. Since the announcements are loud and the bus windows are open, I get woken up by them almost every night.
Next, street lights started to beep for blind people. While this is a great idea, in practice the sound level they chose makes sense for the day (when the street is busy), but at night it is way too loud and carries into people's houses. I've gotten used to sleeping with a constant faint beeping.
Then the e-scooters came, and created three sources of new noise during the night. One when someone touches them ("stealing alert!"), one when someone is looking for them ("location beep") and the last when the van of the company comes to pick them up around 3-4 am.
sigh, I'm moving to a cabin in the desert.
"Among the noises that sound around me but do not distract me, I count passing carriages, a carpenter somewhere in the building, a nearby saw grinder, and that fellow who demonstrates flutes and trumpets near the Meta Sudans, not so much playing them as bellowing. Even now I find noises that recur at intervals more bothersome than a continuous drone."
You can find more of his thoughts on the matter including his wonderful description of living above a bathhouse in Letter 56 to Lucilius.
A bright-blue headlight of a modern car will actually make everything surrounding the headlights darker. A light which is slightly dimmer and more shifted towards red works much better for your peripheral vision. If you drive in rural areas the difference is very apparent.
Not only that, but street lights seem to be doing that as well.
Crosswalks here are now illuminated along the path with a strong shaped light. But the light is so bright that at night it just blinds the observer: the pedestrian looks like a ghost against a black background. IMHO this is even more dangerous; I frequently cannot see past the crosswalk, so a pedestrian passing behind it is at much greater risk than before. Go figure.
There are a couple of intersections I pass frequently where the green light is already too bright during the day. At night, as soon as you get the green light you get blinded, which is _awesome_ since in this case the light is guarding a crosswalk as well. At night you cannot see pedestrians when you have the green.
The construction companies (who have to pay their own insurance premiums, and generally avoid unnecessarily risking injury to people by blinding drivers) have long since toned down the lights they use. There was a short period when they all had super-bright lights (LEDs were new and cool, so why not have a 1000W flasher?), but they've since switched to the non-glaring light plants that use the canvas bags.
Slightly related, still wildly off-topic: lights on bicycles, same problem. I think Germany, for example, has rules for those, but in countries where there aren't any and/or zero awareness is being raised, it's sad to see how many people ride around with LED headlights that are way stronger than necessary and aimed straight ahead instead of down, completely blinding oncoming traffic, cars included. On my commute I've now made it a sport to shout (in a friendly way) at them to aim their lights down, and it seems to have an effect. But there's still a long way to go.
See also bicyclists.
There is a general saying that there are two types of lighting: those that allow you to see, and those that allow you to be seen. In most urban areas with street lamps, one simply needs to be seen. Too many of my fellow pedallers do not seem to understand this distinction, and do not realize that more lumens does not always mean better: at some point you're simply blinding people, and they can no longer tell where you are.
A moderately (500 lm?) bright blinking light, with a simple on-off, non-random pattern seems to be best IMHO.
Chances are that the car behind you had misaligned headlights or some dubious aftermarket modification, like a HID conversion kit.
An old crummy incandescent bulb in a 90s civic will produce a lot less light at the same wattage, than a new, super-bright LED.
More so than ever before, people listen to music in public or otherwise noisy environments, and (worse yet) they listen with generally low-quality headphones/earbuds.
Basically, if the music isn't compressed, they won't hear most of it.
Of course, this plus the crappy earbuds and the noisy surroundings means they listen at higher volumes and suffer more hearing damage, which leads to less human sensitivity to sound levels (and more desire for compressed, high volume music).
It's very rare for most of us to get to sit in a good room with a good system and listen without interference... but it's really, really nice to do when you can.
[0] https://www.sweetwater.com/insync/what-is-lufs-and-why-shoul...
Many games let you specify the audio output and then adjust the mixing accordingly. This would be something similar, I guess, but the mixing is probably best done at the streaming provider, to save bandwidth and other resources. (For static content like music it only really has to be done once per output type.)
I'd like something like this, I have a vastly different sound setup at home than on the go, but listen to mostly the same tracks. Most sound pretty good on my home setup, but on the go I often find myself adjusting volume up and down between tracks, despite things like normalization being on.
Then, as end users, we could choose which we preferred. By default there would be no change in behaviour. Where software updates are available, an option could be provided to swap between versions at will. For older devices, the streaming service could let the end user choose the high-dynamic-range version as an account-level default.
Personally I’d want ready access to both versions. When listening to music is the singular activity in a quiet environment, full dynamic range is great. But as soon as I’m not solely focused on music, multitasking or in a noisy environment, I’d actually prefer the compressed version.
* https://en.wikipedia.org/wiki/ReplayGain
(I have no knowledge in this field, so am asking out of genuine curiosity.)
It's not possible to separately compress the constituent stems of a finished mix. Once they're mixed, the impact you can have on "loudness" is restricted to the whole signal, to frequency-separated bands (with a multi-band compressor), or to extracted parts. From an information-theory perspective, the available leeway for control over presentation shrinks once the audio is mixed.
However, I'd rather have a simple compressor running on playback than have a whole industry ruin their content.
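For illustration, a playback-side compressor really can be quite small. This is a hypothetical sketch of a basic feed-forward design (threshold/ratio with a one-pole envelope follower), not any particular player's implementation:

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0, attack=0.1, release=0.01):
    """Minimal feed-forward playback compressor sketch.

    Levels above the threshold are reduced according to the ratio;
    a one-pole envelope follower (fast attack, slow release) smooths
    the gain changes so they don't distort the waveform."""
    env = 0.0
    out = []
    for s in samples:
        level = abs(s)
        coeff = attack if level > env else release  # fast up, slow down
        env += coeff * (level - env)
        level_db = 20 * math.log10(max(env, 1e-9))
        if level_db > threshold_db:
            # e.g. 4:1 ratio: every 4 dB over the threshold becomes 1 dB
            gain_db = (threshold_db - level_db) * (1 - 1 / ratio)
        else:
            gain_db = 0.0
        out.append(s * 10 ** (gain_db / 20))
    return out
```

A loud passage gets pulled down toward the threshold while quiet passages pass through untouched, which is the whole point: the listener who wants less dynamic range can have it, without baking the damage into the master.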
To be fair, I believe mastering is one of those domains that may soon be made obsolete with machine learning. But we're not quite there yet, and the existing methods require a lot of DSP power.
Once that's taken care of, then the audio can be appreciated. When I last owned a car, it was an enormous Lexus. The audio experience was sublime just with the premium factory system. But with the audio off, the cocoon was still so much better than what I've experienced in "regular" cars.
Unless you're in a Rolls Royce or Maybach while driving, I seriously doubt you'll tell the difference between FLAC and MP3. There's just too much external interference to compete with the subtleties that a high end audio signal can offer compared to a more typical system.
How many cars actually support FLAC or Apt-X HD?
https://apps.apple.com/us/app/boom2-volume-boost-equalizer/i...
But I came here, actually, to put in a plug for a plug(in). He worked with some folks to create a VST called "Loudness Penalty." You throw that on your master channel, and it will tell you what the various streaming services will do to your audio (for example, by how many dB Spotify, Apple Music, YouTube, and so forth will turn your audio up or down based on the integrated LUFS reading).
Even more impressive is a plugin he worked on called "Perception" that can avoid the inevitable psychoacoustic bias (to which we are all naturally subject) of thinking that because this track with this effect chain is louder than without it (or with another effect chain) it must be "better."
These are commercial products, available from an outfit called MeterPlugs (https://www.meterplugs.com/).
I should say that I've never met Ian, and have no connection to this company. I do work with audio in a professional context, however.
I should also say that I really think the tide is finally turning on all of this. I don't know that the "loudness wars" are absolutely in the rear-view mirror, but I think there's a lot more awareness of the issue. And I also think that fly-by-night mastering engineers who proceed to crush the hell out of your tracks with limiters and whatnot don't get as many repeat customers as they once did. And there's better metering in general for getting a track to have proper dynamics without having the listener feel the constant need to turn the volume up and down.
If you go to nights where these super-compressed tracks are played at unimaginable volume levels, you must wear ear plugs; otherwise you're guaranteed to go deaf.
When music is compressed, dynamic range is lost, and the actual elements in each track like bass and ambient sounds are limited in how they can create a mood in a track. These loud tracks are impressive in terms of grabbing attention, but over time they create fatigue for listeners, and people tend to not want to listen to additional music after a few loud tracks.
There is a high price to pay for continuing down the path of loudness with music.
The main driver for ending the race in music should be doing what's right: dynamic range should be a main focal point of audio engineering. Until that gets sorted out, bring your ear plugs.
And yeah, that's a famously "loud" genre. What drives me insane is going to, say, a mostly acoustic live show and having the person behind the desk mix it as if it's Drum & Bass. I never ever go to a show without earplugs, because you just never know.
Is there a summary anywhere of which services are better and worse in this department?
So, for example, Spotify enables normalization by default, they target -14 LUFS, use limiters, normalize both tracks and albums, and will turn up quiet tracks. Amazon, by contrast, does not use limiters, does not turn quiet tracks up, only normalizes at the track level, and so forth . . .
But compared to an album like Spies 1988 Music of Espionage which has a dynamic range that used the most of the CD format (especially track 3 - "Interlude" -- listen for the drum hits at 0:39 after the organ chords), it just isn't (technically) comparable.
I have stopped buying older CDs off Amazon, switching to Discogs so that I can be sure to get an original pressing - made before it had been "remastered".
My tinnitus came from being around jets & equipment in the military, concerts (Rush's Presto tour in 1990 was super loud), and competitive shooting. I double-up on my hearing protection these days.
He hated the loudness wars. It was, often times, bands and labels insisting that the albums be "louder" than whatever other comparable album they were trying to emulate, and thus it just escalated with no real sense until it just got ridiculous.
Most home audio receivers have user-configurable dynamic range compression, which compensates for the "quiet voice, loud music" problem. You can often also boost only the center channel, which helps.
That said, I've bought a few modern albums which have been really nicely mastered; they've got that rich, deep '70s LP sound but available on a FLAC download without all the surface noise and rumble.
Also that section on the '60s loudness war underplays quite how LOUD some of those old mono 45s are... I typically set things up so my peaks are at -12dB when recording, and I have the odd mid-1960s single which will be pegged right up against the red if I don't adjust the levels. No wonder there were some pressings that were notorious for throwing the stylus out of the groove when people started chasing ultra-low tracking weights in the early '70s.
Dnb vinyl was very popular for a long time (still has its fans), and it was very much part of the loudness war. There are a couple of producers who were part of my record collection that I sometimes actually would not play because I couldn't be bothered dealing with the ridiculous over-mastered loudness in the track compared to what I was trying to mix them into.
When you clip a more complex waveform, you can generate new partials with non-integer multiples of the fundamental frequencies, which sound dissonant. If it's simple enough (e.g. a guitar chord), the dissonance will be small enough that it still sounds musical, but if you clip something complex (e.g. an entire mixed-down track), it will just sound like noise.
See https://en.wikipedia.org/wiki/Intermodulation
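You can see those non-harmonic products appear with a small experiment: clip a two-tone signal and probe the spectrum at a third-order intermodulation frequency (2·440 − 660 = 220 Hz), which is absent from the clean signal. (`bin_magnitude` here is a naive single-bin DFT written just for this demo; a real analysis would use an FFT.)

```python
import math

def bin_magnitude(samples, freq, rate):
    """Magnitude of a single DFT bin (naive, good enough for a demo)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * i / rate)
             for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / rate)
             for i, s in enumerate(samples))
    return math.hypot(re, im) / n

rate, n = 8000, 8000  # one full second, so integer frequencies fall on bins
t = [i / rate for i in range(n)]
two_tone = [0.6 * math.sin(2 * math.pi * 440 * x) +
            0.6 * math.sin(2 * math.pi * 660 * x) for x in t]
clipped = [max(-0.7, min(0.7, s)) for s in two_tone]  # hard symmetric clip

# Symmetric clipping generates odd-order intermodulation products,
# e.g. 2*440-660 = 220 Hz and 2*660-440 = 880 Hz, neither of which
# exists in the clean two-tone signal.
for f in (220, 880):
    print(f, bin_magnitude(two_tone, f, rate), bin_magnitude(clipped, f, rate))
```

With a single guitar chord the inputs are nearly harmonically related, so these products mostly land near existing partials; with a dense full mix they land everywhere, which is why a clipped mixdown just sounds like noise.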
Clipping of individual parts is integral to RATM's music, but I don't think clipping of the whole thing at once is. See their entries in the Dynamic Range Database:
http://dr.loudness-war.info/album/list?artist=rage+against+t...
I haven't actually heard the SACD release of Evil Empire, but with those numbers I'd be surprised if it was noticeably clipped in mastering. (Note that vinyl mastering can increase the DR measurement without audible changes in sound, see: http://wiki.hydrogenaud.io/index.php?title=Myths_(Vinyl)#Eff... , so I'm much less confident about the vinyl releases with similar numbers.)
Makes RATM sound like easy listening. Which I suppose it is these days anyway.
It basically has two states: 'on' and 'off'.
However, it DOES serve an artistic purpose.
If you get a decent pressing of Motörhead's 'No Sleep 'til Hammersmith', though, you'd be surprised at the dynamic range actually offered, contrary to popular (mis)conceptions about the Motörhead sound... :)
Unfortunately, the site doesn't have a graph of loudness vs. time.
They used real cannons, and the recording & mastering engineers did their best to reproduce them faithfully. It is one of very few albums I have with a prominent warning sticker on the album which is there for a good reason, not for bragging rights - it basically says that unless you are quite conservative with the volume knob, your speakers will die once the cannons do their thing.
I still can't bring myself to hit "Mastering Assistant" (still less, "Mix Assistant" in Neutron) and call it a day. But honestly, it's pretty spooky how good these tools are getting. I'm particularly impressed by the way it figures out where to set dynamic EQ points on problematic frequencies. I haven't played with Gullfoss yet, but the pros I talk to say it's even spookier.
Of course, what you may mean is that there's no substitute for a $50,000 Neve console, a treated, isolated room, monitors that cost considerably more than your car, and an engineer with twenty years of experience. Sure.
But honestly, it is reasonable to wonder whether this is a bit like when we used to say, "Well, a computer will never beat a human at chess."
It's also not true that no professional mastering engineers use Ozone. Most big shops at least have it lying around (if only for that limiter, which many pros emphatically do use). It's also sometimes the right tool for the job. And there are a few serious pros for whom it's their main tool. Not many, granted. But interface aside, it's not clear that when it comes to "clean" processing, that FabFilter's stuff (which is very widely used) is any better.
Jukeboxes became popular in the 1940s and were often set to a predetermined level by the owner, so any record that was mastered louder than the others would stand out. Similarly, starting in the 1950s, producers would request louder 7-inch singles so that songs would stand out when auditioned by program directors for radio stations.[1] In particular, many Motown records pushed the limits of how loud records could be made; according to one of their engineers, they were "notorious for cutting some of the hottest 45s in the industry."[3] In the 1960s and 1970s, compilation albums of hits by multiple different artists became popular, and if artists and producers found their song was quieter than others on the compilation, they would insist that their song be remastered to be competitive.
I had a couple of Noisia records that sounded ridiculously louder than all my other music. I guess it's part of the "authentic" vinyl mixing experience to have to manually adjust a turntable trim mid-mix to avoid it sounding ridiculous.
Anyway, this is one of the concepts that is forgotten when people discuss music in 5.1: Part of the point is that a 5.1 mix can be a "living room" mix, where the engineer can make an uncompromised mix, and leave the compromises for listening in a car, or with crappy headphones, in the stereo mix.
I always felt that Netflix's audio quality was rather poor and lacked punch as well as dynamic range in a proper home cinema setup, but I assumed it was due to their codec or streaming technology choices. Now that you mention it, though, it may well be deliberate, to optimize for subpar watching environments (laptop speakers, soundbars, etc.); indeed it sounds as if their content was mastered for TV rather than cinema.
These days I just use Spotify :)