There are things we can do to suppress errors and build a somewhat larger analog computer (classical or quantum), but there is no way to make an analog system scalable: you can only lower the error rate to some floor, never eliminate errors completely.
The book cited in my previous comment explains the details in a fairly understandable fashion, though it takes up a whole chapter. The gist is: you need some minimal "logic distance" between your data "levels" in order to distinguish them and correct deviations back to the nearest legal value. That requirement is exactly digitization, as the sketch below illustrates.
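A minimal toy sketch in Python of why the distance matters (the two-level alphabet and uniform noise model here are my own made-up illustration, not from the book): the digital signal can be snapped back to the nearest legal level after every noisy step, while the analog signal has no legal levels to snap to, so its noise just accumulates.

```python
import random

LEVELS = [0.0, 1.0]   # digital alphabet; level spacing = 1.0
NOISE = 0.1           # per-step noise amplitude, well under spacing / 2

def perturb(x):
    """One noisy processing step."""
    return x + random.uniform(-NOISE, NOISE)

def digital_correct(x):
    """Snap to the nearest legal level -- this is the digitization step."""
    return min(LEVELS, key=lambda level: abs(x - level))

analog = digital = 1.0
for _ in range(1000):
    # Analog: every value is a valid state, so noise is indistinguishable
    # from data and accumulates as a random walk.
    analog = perturb(analog)
    # Digital: any deviation smaller than half the level spacing is
    # unambiguously an error and gets wiped out each step.
    digital = digital_correct(perturb(digital))

print(f"analog drifted to {analog:.3f}, digital still {digital:.1f}")
```

After 1000 steps the analog value has typically wandered far from 1.0, while the digital value is still exactly 1.0. Without a minimum distance between levels there is no threshold below which a deviation is provably an error, which is the crux of the argument.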
If somebody finds a way to error-correct analog representations, they will also have found a way to solve NP-complete problems, for instance. The Nobel Prize would be the least of their recognitions.