> The correct way to compare software A and software B is to benchmark both on the target platform/hardware they were respectively written for. Afterwards, do a cost-benefit analysis.
Well, ideally, yes, if we had infinite time. In reality we don't, so we have to choose what to do without the benefit of implement-thrice-deploy-N-times[0]. In practice, we (as "engineers"[1]) form rules and patterns in our heads that we use as guidance. I think the point being made is that "use a cluster" is almost never good guidance.
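For what it's worth, even a crude version of "benchmark both on the target hardware" is cheap to set up. A minimal sketch with Python's timeit, where `approach_a` and `approach_b` are hypothetical stand-ins for the two pieces of software being compared:

```python
import timeit

# Hypothetical stand-ins for "software A" and "software B":
# two ways of summing squares. Running this script on the
# target machine benchmarks them on that machine, which is
# the whole point -- numbers from a dev laptop don't transfer.
def approach_a(n):
    return sum(i * i for i in range(n))

def approach_b(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    for name, fn in [("A", approach_a), ("B", approach_b)]:
        # number=200 keeps the run short; bump it up until the
        # timings are stable enough to compare.
        t = timeit.timeit(lambda: fn(10_000), number=200)
        print(f"approach {name}: {t:.4f}s for 200 runs")
```

The cost-benefit part is the step this sketch can't do for you: deciding whether the measured difference is worth the migration/maintenance cost.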
[0] How can you know what the performance is without actually giving your product to a bazillion users? This hints at why just-deliver-it-now-bugs-be-damned and continuous feedback is so valuable. There's no point optimizing a product used by 1000 people, but if your platform ends up being used by 1e9 people (e.g. Facebook), then you'll make *adjustments along the way*. This is a *good problem to have*.
[1] A laughable term for most of the programmer crowd, myself included. Engineering is about tradeoffs, and we still have basically no idea how to quantify tradeoffs in software development.