The two fat-tail questions one has to engage are:
- is it possible that a catastrophic input is lurking in the wild that would not be represented in a typical training set? Even with a training set of 1M instances, a one-in-a-million input will appear (and affect your objective function) on average only once, and could very well not appear at all.
- can I bound how badly I will suffer if my system is allowed to operate in the wild on such an input?
DL gives no additional tools to engage these questions.
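The first question can be made concrete with a back-of-the-envelope calculation. The number of times a one-in-a-million input shows up in 1M i.i.d. samples is binomial, well approximated by a Poisson distribution with rate 1, so it fails to appear at all with probability e⁻¹ ≈ 37%. A minimal sketch (the variable names are illustrative, not from the original text):

```python
import math

# Chance that a one-in-a-million input appears exactly k times in a
# training set of n i.i.d. samples. The binomial count is well
# approximated by a Poisson distribution with rate lam = n * p.
n, p = 10**6, 1e-6

def prob_appearances(k: int) -> float:
    """Poisson approximation: P(K = k) = e^{-lam} * lam^k / k!"""
    lam = n * p
    return math.exp(-lam) * lam**k / math.factorial(k)

print(f"P(never appears) = {prob_appearances(0):.3f}")  # ~0.368
print(f"P(appears once)  = {prob_appearances(1):.3f}")  # ~0.368
```

So even a "representative" 1M-instance training set misses the catastrophic input more than a third of the time, and sees it only once about as often.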