This doesn't match the common knowledge on the topic, which is that model size matters more than architecture. And the amount of training data matters even more, which is why today's single-digit-billion-parameter models are stronger than hundreds-of-billions-parameter models from a few years earlier, when “Chinchilla-optimal training” was in fashion.
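To make that concrete, here is a rough back-of-the-envelope sketch (numbers are approximate and purely illustrative): the Chinchilla paper suggested something like ~20 training tokens per parameter as compute-optimal, whereas recent small models are trained far past that ratio.

```python
# Illustrative only: ~20 tokens/parameter was the rough Chinchilla-era
# "compute-optimal" ratio; modern small models are trained well beyond it.
def tokens_per_param(params_billion: float, tokens_trillion: float) -> float:
    """Training tokens seen per parameter."""
    return (tokens_trillion * 1e12) / (params_billion * 1e9)

# A Chinchilla-style ~70B model on ~1.4T tokens vs. a modern ~8B model on ~15T tokens
# (both figures approximate).
print(tokens_per_param(70, 1.4))   # ~20 tokens/param
print(tokens_per_param(8, 15.0))   # ~1875 tokens/param, heavily "over-trained"
```

So the small model has seen roughly two orders of magnitude more data per parameter, which is the whole point.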
SSMs are literally proof that what really matters is training scalability.
The universal approximation theorem doesn't care about the architecture, after all.