Yes, quantization isn't anything new, nor are audio codecs. As you point out, though, it's not just about designing a quantization scheme to reconstruct an analog signal. The scheme itself needs to be "easy" for current model architectures to learn and decode autoregressively (ideally, in realtime on standard hardware).
The blog post addresses this directly with samples from their own baseline (an autoregressive mu-law vocoder) and from WaveNet (which used a similar architecture). The sound is mostly recognizable as a human voice, but it's unintelligible. The sequence length is too long and the SNR of the encoding scheme is too low for a generative/autoregressive model to learn.
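To put numbers on the sequence-length problem: raw mu-law audio at WaveNet's 16 kHz / 8-bit setup means 16,000 tokens per second. A rough sketch (the companding formula is the standard G.711-style one; the rates are from the original WaveNet setup):

```python
import numpy as np

def mu_law_encode(x, mu=255):
    """Map float samples in [-1, 1] to integer codes in [0, mu] (8 bits)."""
    companded = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((companded + 1) / 2 * mu + 0.5).astype(np.int32)

sample_rate = 16_000                                   # WaveNet's operating rate
codes = mu_law_encode(np.random.randn(sample_rate).clip(-1, 1))
print(codes.shape)                                     # (16000,) tokens per second
# An autoregressive model must emit every one of these in order,
# so 10 s of speech is already a 160,000-step sequence.
```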
This is what the neural codec is intended to address. Decoupling semantic from acoustic modelling is an important step ("how our ears interpret a sound" vs. "what we need to reconstruct the exact acoustic signal"). Mimi works at 1.1kbps, and others work at similarly low bitrates (Descript's codec, SemantiCodec, etc.). Encodec runs at a higher bitrate, so it generally delivers better audio quality.
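For a sense of scale, the bitrates fall straight out of the RVQ configs. These numbers are from the published papers as I recall them, so treat the exact figures as approximate:

```python
import math

def rvq_bitrate(frame_rate_hz, n_codebooks, codebook_size):
    """Bits/s for a residual-VQ codec: frames/s * codebooks * bits per codebook."""
    return frame_rate_hz * n_codebooks * math.log2(codebook_size)

# Mimi: 12.5 Hz frames, 8 codebooks of 2048 entries (1 semantic + 7 acoustic)
print(rvq_bitrate(12.5, 8, 2048))  # 1100.0 bps ~ 1.1 kbps, 100 tokens/s

# Encodec (24 kHz model): 75 Hz frames, 8 of its codebooks of 1024 entries
print(rvq_bitrate(75, 8, 1024))    # 6000.0 bps = 6 kbps, 600 tokens/s
```

Either way you're down from 16,000 tokens per second to a few hundred, which is the real win for autoregressive modelling.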
Now - why are neural codecs easier to model than conventional parametric codecs? I don't know. Maybe they're not; maybe it's just an artifact of the transformer architecture (since semantic tokens are generally extracted from self-supervised models like WavLM). It's definitely an interesting question.
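For what it's worth, the usual semantic-token recipe (AudioLM-style) is just k-means over a self-supervised encoder's hidden states. A minimal sketch with WavLM via HuggingFace; the layer index and cluster count here are placeholder choices, and real setups fit k-means offline on a large corpus rather than a single clip:

```python
import numpy as np
import torch
from sklearn.cluster import KMeans
from transformers import AutoFeatureExtractor, WavLMModel

extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-large")
model = WavLMModel.from_pretrained("microsoft/wavlm-large").eval()

waveform = np.random.randn(16_000).astype(np.float32)  # 1 s placeholder audio
inputs = extractor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    # Intermediate layers tend to carry the most phonetic information;
    # layer 9 is an arbitrary pick here, and papers differ on the choice.
    feats = model(**inputs, output_hidden_states=True).hidden_states[9][0]

# Toy cluster count; real recipes use hundreds-to-thousands of clusters
# fitted over many hours of audio.
kmeans = KMeans(n_clusters=16, n_init=10).fit(feats.numpy())
semantic_tokens = kmeans.labels_        # one discrete token per ~20 ms frame
print(semantic_tokens[:10])
```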