Even game consoles moved to software audio: it turns out that doing it on the CPU with vector instructions is fast enough, while being more flexible.
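To give a flavor of what "software mixing with CPU vector instructions" means in practice, here's a minimal sketch of mixing two 16-bit PCM streams with SSE2 saturating adds, processing 8 samples per instruction (the function name and buffer layout are just illustrative assumptions):

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>
#include <stddef.h>

/* Mix two 16-bit PCM buffers into out, 8 samples per SSE2 op,
 * with saturation instead of wrap-around on overflow. */
static void mix_pcm16(const int16_t *a, const int16_t *b,
                      int16_t *out, size_t n)
{
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        /* saturating add: clamps to [-32768, 32767] */
        _mm_storeu_si128((__m128i *)(out + i), _mm_adds_epi16(va, vb));
    }
    for (; i < n; i++) {  /* scalar tail with manual clamp */
        int32_t s = (int32_t)a[i] + (int32_t)b[i];
        out[i] = (int16_t)(s > 32767 ? 32767 : (s < -32768 ? -32768 : s));
    }
}
```

A modern core churns through this at many gigasamples per second, which is why dedicated mixing hardware stopped paying its way.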
This is also the way of the future for graphics: do away with fixed hardware pipelines and go back to software rendering, but accelerated on the GPU as a general-purpose compute device.
EAX and the like were actually that: software components running on the DSP inside the sound card, and the expectation was that in the future you would program them much the way GPUs are programmed today.
However, while audio accelerators did come back, the protected media path business means they aren't "generally programmable" from the major OS APIs, even though AMD and Intel essentially settled on a common architecture, including the ISA (Xtensa with DSP extensions, IIRC). They are mainly driven through device-specific blobs, with occasional special features (like sonar-style presence detection).