Still, running true ab initio QM simulations of even a few atoms can take months on a single computer (I never had the chance to run simulations on a cluster or GPU). DNNs, however, can find high-dimensional patterns that are difficult for humans to spot but which could significantly speed up QM simulations.
Currently, doing QM simulations of chemical reactions at any real scale is infeasible, but if work like FermiNet makes it feasible for small teams to simulate more complex chemical reactions, it could open up an entire field of chemical/industrial processes to startups. That is, you could reasonably simulate chemical processes well enough to optimize a current process or find entirely novel reactions. This would significantly reduce the capital expenses most research in these areas requires.
In short, if I were a VC I would be _very_ keen on watching this field. There is tremendous value hidden behind this general problem.
As someone who used to be in this very space and even tried to get a startup off the ground based on it, I can tell you with absolute certainty that this will lead absolutely nowhere.
The short of it is that literally no business will accept data generated this way until someone shows that every neural-net model trained this way produces solutions that are mathematically equivalent to a validated method.
At best it might be used as a filter step in some pipeline, but that's not going to have much of an effect, and certainly not something on which to bet the success of a startup.
I'm quite bullish, on the other hand, on using NNs in chemistry. If you use NNs at a higher level (for example, predicting which reactions will happen and finding pathways), it makes a lot more sense; it's akin to having an experienced chemist. A tool like Chematica (a database of chemical reactions for finding pathways, which was bought by Merck) can almost certainly benefit from neural networks, where you can try to find similarly shaped molecules that will behave similarly but be cheaper to produce.
I'm also bullish on using all the NN machinery for simulators. NN frameworks are one way to use large amounts of compute efficiently. In fact, the two are in a sense dual problems: in machine learning you try to minimize some energy function, while in Hamiltonian physics simulation you try to keep the energy function constant.
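To make the duality concrete, here is a minimal toy sketch (my own illustration, not from the paper): the same energy function E(q) = q²/2 is driven to its minimum by gradient descent (the ML view) and held approximately constant by a symplectic leapfrog integrator (the Hamiltonian view). All names are illustrative.

```python
import numpy as np

def grad_E(q):
    # dE/dq for the harmonic potential E(q) = 0.5 * q**2
    return q

# ML view: gradient descent drives the energy toward its minimum.
q = 2.0
for _ in range(100):
    q -= 0.1 * grad_E(q)
print(abs(q) < 1e-3)  # q has been driven close to the minimum at 0

# Physics view: leapfrog keeps H = p**2/2 + E(q) approximately constant.
q, p = 2.0, 0.0
H0 = 0.5 * p**2 + 0.5 * q**2
dt = 0.01
for _ in range(1000):
    p -= 0.5 * dt * grad_E(q)  # half kick
    q += dt * p                # drift
    p -= 0.5 * dt * grad_E(q)  # half kick
H1 = 0.5 * p**2 + 0.5 * q**2
print(abs(H1 - H0) < 1e-3)  # energy conserved along the trajectory
```

The same gradient machinery (and hence the same autodiff frameworks and hardware) serves both loops; only what you do with the gradient differs.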
Making approximations in these equations (for example, in how a field is represented, summing in a different order, or neglecting zeros) can often yield faster computation at the same accuracy. In fact, if you write the simulator as a Monte Carlo sum, you can train an NN controller on the trajectory approximation to do what you want, folding the exact simulation's computation time into the NN's training loop.
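As a toy illustration of putting a simulator inside the training loop (entirely my own construction, with illustrative names): a one-parameter controller u(x) = -k·x pushes a noisy particle toward the origin, the loss is a Monte-Carlo estimate of the mean squared final position over sampled trajectories, and k is tuned by gradient descent through the rollout (finite differences with common random numbers stand in for autodiff here).

```python
import numpy as np

rng = np.random.default_rng(0)
# Fixed noise draws (common random numbers) make the MC loss deterministic,
# so finite-difference gradients through the rollout are well behaved.
noise = rng.normal(size=(64, 50)) * 0.05

def rollout_loss(k):
    """Monte-Carlo estimate of E[x_T**2] under the controller u = -k*x."""
    x = np.full(64, 1.0)  # 64 trajectories, all starting at x = 1
    for t in range(50):
        x = x + 0.1 * (-k * x) + noise[:, t]
    return np.mean(x ** 2)

# Tune the controller parameter by gradient descent through the simulator.
k, eps, lr = 0.0, 1e-4, 0.5
for _ in range(200):
    g = (rollout_loss(k + eps) - rollout_loss(k - eps)) / (2 * eps)
    k -= lr * g
print(rollout_loss(k) < rollout_loss(0.0))  # learned k beats no control
```

Swap the scalar k for NN weights and finite differences for backprop, and you have the pattern being described: the simulator's cost is paid once per training step, and the trained controller amortizes it afterward.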
I say that as somebody who has evaluated VC pitches for O(n) approximations of QM as a startup idea.
Any physical model obtained by fitting data that purports to be faster than one based on a computable approximation of the laws of physics is constrained accordingly. There is room to maneuver, however: if the model being replaced has a limited and known domain of applicability due to approximations made for tractability, a fitted model with sufficiently large capacity and expressiveness will almost surely improve things.
It's just that it's unlikely to be generally applicable without violating what we know about physics, which is why I am skeptical of the latter part of your post.
Constrain the model so that there aren't any superfluids, semiconductors, plasmas, metals or Bose-Einstein condensates and you can still simulate any medicine I know of.
https://arxiv.org/abs/1909.02487
For those with a background in ML/AD but without a background in quantum mechanics: sections B and C of the paper's supplementary materials contain some important observations that may be useful in other domains.