True, it's not necessary. But in practice, when this happens, scientific rigor is usually the first thing to go. This can manifest in a lot of ways. One is an explicit mandate to find that a pre-determined conclusion is supported, even when the data don't support it. Another is to place a greater emphasis on speed of delivery than on accuracy, and to avoid quantifying the real trade-off between the two by invoking a magic buzzword like "business concerns" or "the bottom line" or whatever.
Yet another way is that data analytics platforms are built from the ground up with hard-wired priority given to scaling out the ability to test multiple hypotheses, with no attempt to correct the significance metrics for the multiplicity of testing (or, even subtler, for researcher degrees of freedom that further inflate that multiplicity). Often, the business stakeholders demanding such an "analytics" system aren't even aware of the statistical fallacies they are inexorably baking right into the platform itself (one might call this the "Hadoop disease", though it's not strictly the fault of Hadoop or Hadoop-like tools).
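To make the multiplicity problem concrete, here's a minimal stdlib-only simulation (the numbers — 100 hypotheses, alpha = 0.05, 500 simulated "studies" — are illustrative assumptions, not anything from the platforms described above). Every null hypothesis is true by construction, so any "finding" is spurious; the uncorrected procedure almost always finds one anyway, while a simple Bonferroni correction keeps the family-wise error rate near the nominal level:

```python
import random
from statistics import NormalDist

cdf = NormalDist().cdf
random.seed(0)

N_TESTS = 100   # hypotheses fired off per "analytics run" (illustrative)
ALPHA = 0.05
REPS = 500      # simulated runs

def run_study():
    """One analytics run: test N_TESTS hypotheses on pure noise.

    Each test statistic is standard-normal noise (all nulls true),
    converted to a two-sided p-value.
    """
    z = (random.gauss(0.0, 1.0) for _ in range(N_TESTS))
    return [2.0 * (1.0 - cdf(abs(zi))) for zi in z]

naive_fw = 0  # runs with >= 1 uncorrected false positive
bonf_fw = 0   # same, with the Bonferroni threshold ALPHA / N_TESTS

for _ in range(REPS):
    p = run_study()
    naive_fw += any(pi < ALPHA for pi in p)
    bonf_fw += any(pi < ALPHA / N_TESTS for pi in p)

# Theory: uncorrected P(>=1 false positive) = 1 - 0.95**100 ~ 0.994;
# Bonferroni caps it at ALPHA. The simulation tracks both closely.
print(f"P(>=1 spurious 'finding'), uncorrected: {naive_fw / REPS:.3f}")
print(f"P(>=1 spurious 'finding'), Bonferroni:  {bonf_fw / REPS:.3f}")
```

Bonferroni is the bluntest possible fix (and overly conservative for correlated tests); the point is only that *some* correction has to be wired in, and these platforms wire in none.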
At any rate, I would say that in the current climate of "analytics" in business environments, to a good first approximation, one can assume "make it easy to understand" means exactly "throw out any and all difficult yet rigorous science until the thing is cheap and easy, and then just use that."