There are numerous interrelated mechanisms in hearing. I don't see why multiband compression (MBC) in particular would be valuable, other than amplifying quiet parts automatically, and since that boosts background noise along with the signal, it could in fact have the opposite effect.
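To make the concern concrete, here is a minimal single-band sketch of the "amplify quiet parts" behavior (a multiband compressor would apply this per frequency band). The function name, threshold, and ratio are my own illustrative choices, not anything from SoundFocus; note that any below-threshold sample gets boosted, whether it is signal or noise.

```python
import numpy as np

def upward_compress(x, threshold=0.1, ratio=2.0):
    """Illustrative upward compression: boost samples whose magnitude is
    below `threshold`, leave louder samples untouched. An MBC product
    would do something like this independently in each frequency band."""
    env = np.abs(x)
    gain = np.ones_like(x)
    quiet = (env > 0) & (env < threshold)  # skip exact silence (div-by-zero)
    # Pull quiet levels partway up toward the threshold; a 2:1 ratio
    # halves the (log-domain) distance below the threshold.
    gain[quiet] = (threshold / env[quiet]) ** (1.0 - 1.0 / ratio)
    return x * gain

quiet_in = np.array([0.01])   # gets boosted, even if it's just hiss
loud_in = np.array([0.5])     # passes through unchanged
```

The problem is visible in the last two lines: the compressor cannot tell a quiet consonant from background hiss, so both come up together.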
However, I think there is potential for many other mechanisms to be developed, such as automatic filtering to eliminate masking in the frequency and time domains.
The question I have is why other companies, such as Apple and Spotify, don't simply add this DSP technology to their software. What can SoundFocus do that can't be copied? Proprietary algorithms?