"“Why Should I Trust You?” Explaining the Predictions of Any Classifier": https://arxiv.org/pdf/1602.04938.pdf
https://homes.cs.washington.edu/~marcotcr/blog/lime/
https://github.com/marcotcr/lime
Any time someone makes a snide HN comment like "oh, you can't understand why neural networks make predictions", the correct response should be "why doesn't LIME work in your specific case?"
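For anyone who hasn't read the paper: LIME's core recipe is simple — perturb the instance, query the black box at the perturbed points, weight samples by proximity, and fit a local linear surrogate whose coefficients are the explanation. A minimal numpy sketch of that idea (the toy model, kernel width, and noise scale here are illustrative assumptions, not the library's actual defaults — use the `lime` package from the repo above for real work):

```python
import numpy as np

# Hypothetical black-box model: a nonlinear function of two features,
# standing in for any classifier's probability output.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 0.1 * X[:, 1] ** 2)))

def lime_sketch(predict_fn, x, n_samples=5000, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around instance x."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black box at the perturbed points.
    y = predict_fn(Z)
    # 3. Weight samples by proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Weighted least squares for the local linear coefficients
    #    (centered features plus an intercept column).
    A = np.hstack([Z - x, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local importance

x0 = np.array([0.2, 1.0])
weights = lime_sketch(black_box, x0)
# Near x0, feature 0 dominates the local explanation.
```

The coefficients answer "which features mattered for *this* prediction", which is exactly the question the snide comments claim is unanswerable.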
LIME is being used within the EU to explain credit decisions and fraud-detection flagging on neural-network-based models, which is quite a high bar of regulatory oversight to pass.
In this case, I understood the question to be "will deep learning do a good job predicting the function of, or phenotype emerging from, individual SNPs", and I don't think model interpretation would help (for starters, the model is trained to predict linkage and doesn't deal with data related to phenotypes).
Of course the NN won't interpret the results; it will just provide better results.