That said, this over-zealous AI modification isn’t always a bad thing, as the democratization of music recording has ushered in an era where many more people of various aptitudes are mixing music! But it does mean that AI-based mastering solutions run the risk of ossifying the “sound” of music genres, and while they can gussy up a poor mix, they often clobber a mix that’s already great. I don’t know of many major label albums where any of the tracks were mastered with an AI-based solution, although Ozone is in extensive use in the industry for its non-AI features.
Disclaimer: I make a non-AI mastering audio plugin called Master Plan.
I think, like all of these new AI tools, it can be a tool for you to use, but an amateur with untrained ears still won't make "professional-grade" masters. You can maybe get 80% of the way there, though. If you're a hobbyist who doesn't know anything about mastering and can't afford to pay a professional, this is great!
There are quite a few people who hate modern mastering, so... maybe that's not a bad thing? Consider also that pro-grade masters are often aimed at a wildly different setup than the cheap earbuds many people will actually use. I'm not trying to say pros don't know what they're doing. But I'd love to see people play around a bit more. (Still dreaming of a real release format that contains the raw elements + the effects stack.)
Then again, I'm a weirdo who honestly prefers some accidentally preserved band practice recordings with noise and mistakes and raw energy to their released official albums.
To me the whole 'mastering AI' stuff misses at least some of the point of what mastering really is, but I suppose it'll be useful for people doing stuff on their own who wouldn't pay a proper mastering engineer anyway.
Essentially, consumers now might pay X amount to get only a 40% solution (probably unusable and not something a company would even offer), but with AI, the same X gets you an 80% solution.
The price of a >80% solution will remain the same until AI improves.
> the system spits back a song that hits modern loudness standards and is punched up with additional clarity, EQ, stereo width, and dynamics processing
Translation: "AI" can now make all recordings indistinguishable from each other?
'Standards' in art are dangerous.
For loudness there are certain "standards": https://support.spotify.com/us/artists/article/loudness-norm... - following them just makes distribution of your music much easier.
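For a rough sense of what loudness normalization means in practice, here's a minimal sketch. The function names are my own; real players measure integrated loudness per ITU-R BS.1770, and -14 LUFS is Spotify's default target per that help page.

```python
# Minimal sketch of loudness normalization: the platform measures a
# track's integrated loudness (LUFS) and applies a flat gain to hit
# its target. Assumes loudness is already measured; real measurement
# follows ITU-R BS.1770.

def normalization_gain_db(measured_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain (in dB) a player applies so the track plays at the target loudness."""
    return target_lufs - measured_lufs

def apply_gain(sample: float, gain_db: float) -> float:
    """Scale a linear audio sample by a gain expressed in dB."""
    return sample * 10 ** (gain_db / 20)

# A loud master at -9 LUFS gets turned *down* by 5 dB:
print(normalization_gain_db(-9.0))   # -5.0
# A quiet master at -20 LUFS gets turned *up* by 6 dB:
print(normalization_gain_db(-20.0))  # 6.0
```

This is also why "mastering as loud as possible" buys little on normalized platforms: the extra loudness just gets turned back down, while the dynamics you crushed to get there stay crushed.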
There are probably (or soon will be) different presets for mastering a rock track or an EDM track or a jazz track, with different targets for dynamics, loudness etc.
But for this to really work, I feel like I'd have to sit there going: "Okay, same thing but bring the lower mids up a couple of dB. Try it with a little more air in the top end. Compress those drums, but try not to smear the transients on the snares. No, wait, go back. Try smearing those transients a bit more."
At that point, I'm not sure "mastering as prompt engineering" is worth it. Though I totally agree that things like LANDR and Ozone are great for quick-and-dirty work.
https://support.apple.com/guide/logicpro/mastering-assistant...
I think it is odd to say that something is mathematical when all things can be looked at through the lens of mathematics.
The same goes for tasks with clear indications of varying degrees of success, like throwing a ball into a basket. Yes, absolutely, computers and AI will in time do those better than humans ever will.
But with music, once you make a song that is perceived as "good" or "not bad", there is no such thing as better or worse; it becomes entirely subjective. So I do not know if it is possible for any piece to be better than another. We often assign celebrity status to composers and music makers and perceive some as greater than others, but really the music some create is not better than the rest.
Maybe AI will be able to demonstrate technical prowess beyond human ability, like instantly writing down all 15 notes sounding in a given beat of a song. But it does not follow that it will be better at creating music than humans in general.
I really like hacker news but the users here have very strange perspectives on art.
From what you say I infer you couldn't; you need to know about their life, their struggles, their views on things, etc.
Some people can actually enjoy just the pure sound. Electronic club music tends to be like that: if you listen to it in a club or on the radio in a long mix, you might have no idea which song is being played, and the vocals and lyrics are ultra-minimal, so no "human expression" there.
This is why we don't listen to MIDI tracks all day: they are a purely mathematical interpretation of a song, and they're mostly terrible.
There's a video game called Sonic Chronicles. Just before it was completed, the developers were prevented from using the soundtrack they had made and were forced to recompose new songs for the entire game, for reasons that remain murky. The result is some almost incomprehensible, minimally instrumented MIDI music like [this](https://www.youtube.com/watch?v=6o47N-aYd08).
Yet despite sounding like complete ass, those songs were actually lifted from previous Sonic games, where they sound [completely different](https://www.youtube.com/watch?v=t4-DQ4hGRJo) despite being structurally identical.
Also, I wonder if there will be a natural “backlash” against generic AI music and towards stuff that has more human imperfection. But then again even the AI will learn to imitate the imperfections so what exactly will make a piece more human in the long run?
MIDI sounds bad because, while the notes being played respect a mathematical structure, the MIDI instruments' spectra do not integrate properly into that structure.
People tend to like the same pieces of music, so there are obviously specific patterns that people generally like. AI is good at finding patterns, so in the end AI will become good at creating music.
I think.
Source: I randomly socialize with a professional classical musician that does both playing and composing/analysis.
To the very few who care only about the sound waves, AI will triumph, because it will craft music that is perfect for that specific person's taste, and even factor in other parameters like tone, mood, state of mind, place of residence, weather...
From 2019: https://www.youtube.com/watch?v=EPs6wdM7S3U
An AI might sort of make producing knock-offs of popular styles easier, but it'll also probably hasten the decline in popularity of those styles. One popular song is great, 3 or 4 copies of that style might still be good, but 10,000 indistinguishable copies of it -- meh.
I think what you might actually see is something like this:
A DJ comes up with a new sound, and people might ask an AI to produce a whole mix of songs similar to that record, just for them, that nobody else will ever listen to. Or even better -- a DJ might come up with a new sound and _sell an AI themselves_ that can produce songs like that on demand.