I'm not particularly familiar with QMC, although I do know of it.
I was just adding a comment in support of this not being a particularly revolutionary methodology: although it may achieve spectacular results for certain systems, the limitations and mistakes of this kind of approach are completely hidden behind the opaque ML machinery of an NN.
ML has a place, but NNs are notoriously difficult to even grey-box, and a black-box model doesn't do much to actually advance the field. It certainly doesn't allow for a well-rounded assessment of failures.
As for the limitations of DFT, unless you are referring to convergence issues, I think you are completely wrong to claim that the issues with the method are not well known and understood. We know precisely where the methodology has limitations; we know how the functionals have been parametrised; and we know the assumptions and theoretical models they are built on, together with the limits of those models. That is enough information to know where confidence can be placed.
I would also dispute that QMC is at the core of DFT just because it was used to parametrise the LDA. As I am sure you are aware, the LDA is not used for any reliable modelling. GGAs and hybrids (and maybe meta-GGAs, if we're feeling charitable...) are what make DFT a useful theory. Prior to those, the results just sucked for the majority of systems!
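For anyone following along, the QMC connection being referred to is fairly narrow: Ceperley–Alder QMC results for the homogeneous electron gas were fitted (e.g. in the Perdew–Zunger and VWN parametrisations) to give the correlation energy per particle, which then enters the LDA schematically as

$$
E_{xc}^{\mathrm{LDA}}[n] = \int n(\mathbf{r})\, \varepsilon_{xc}\big(n(\mathbf{r})\big)\, d^3r,
\qquad
\varepsilon_{xc} = \varepsilon_x + \varepsilon_c ,
$$

where $\varepsilon_{xc}(n)$ is the exchange-correlation energy per particle of the uniform gas at density $n$, and only the $\varepsilon_c$ piece comes from the QMC fit. So the QMC input is one fitted scalar function of the local density, not something woven through the exchange-correlation machinery of the GGAs and hybrids that people actually rely on.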