(I say this as somebody with a decade+ of experience running large ensembles of classical MD simulations, but not so much experience with inorganic DFT)
Also, at the end of the day, DFT is still an imperfect, approximate model. Relative trends are generally more reliable than exact correspondence with experiment, and it can have system-specific systematic errors that are hard to account for in a high-throughput setting.
Edit: Also look at how long these (short pre-print) DFT articles are. These aren't simple calculations to interpret.
Turn the materials science problem around: instead of looking for a substance that has a specific property, look at many substances until small amounts of any interesting property (Young's modulus, etc.) show up. By looking for "anything interesting" you are more likely to find something interesting (ideally, several somethings). And then you also know a starting place to begin optimization.
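A minimal sketch of that "anything interesting" screen: instead of testing candidates against a fixed target value, flag any (substance, property) pair that is a statistical outlier within the candidate pool. Everything here is illustrative — the property names, values, and z-score cutoff are made up, not from any real dataset.

```python
# Hedged sketch: screen a candidate pool for *any* outlier property,
# rather than matching one property to a fixed target.
# All names/values below are hypothetical.
from statistics import mean, stdev

def find_interesting(candidates, z_cutoff=2.0):
    """Return (substance, property, value) triples whose value is an
    outlier (|z| >= z_cutoff) across the pool for that property."""
    props = {p for c in candidates.values() for p in c}
    hits = []
    for p in sorted(props):
        vals = [c[p] for c in candidates.values() if p in c]
        if len(vals) < 3:
            continue  # too few samples to call anything an outlier
        mu, sd = mean(vals), stdev(vals)
        if sd == 0:
            continue  # no spread, nothing stands out
        for name, c in candidates.items():
            if p in c and abs(c[p] - mu) / sd >= z_cutoff:
                hits.append((name, p, c[p]))
    return hits

# Nine ordinary candidates plus one oddball (hypothetical values).
pool = {f"mat{i}": {"youngs_modulus_gpa": v}
        for i, v in enumerate([100, 101, 99, 102, 98, 100, 101, 99, 100])}
pool["mat9"] = {"youngs_modulus_gpa": 500}

print(find_interesting(pool))  # only mat9 stands out
```

In practice a robust statistic (median/MAD) would be a better outlier test than mean/stdev, since the interesting candidates are exactly the ones that distort the mean — but the z-score version keeps the sketch short.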
(I'm not saying these things out of ignorance; this technique has worked well for me at times when I had exceptionally large amounts of CPU available to me, and it's also worked well in the drug industry, which has problems similar to those of materials science.)