I think that's a productive analogy that can be read in many different ways!
For example, you could argue that Haskell makes tradeoffs to provide benefits that most developers don't care about most of the time, such as code terse enough to include in papers with a page count limit; and that this is a valid reason for people to prefer languages that prioritize other things, which is why Haskell hasn't caught on (thus the Haskell motto "avoid success at all costs").
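As a concrete illustration of that terseness (my example, not one from the discussion above), the clichéd two-line quicksort that shows up in introductory papers fits comfortably inside a page limit:

```haskell
-- The classic paper-sized quicksort: short and declarative,
-- though inefficient compared to an in-place sort in practice.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (x:xs) = qsort [a | a <- xs, a < x] ++ [x] ++ qsort [a | a <- xs, a >= x]

main :: IO ()
main = print (qsort [3, 1, 2])  -- prints [1,2,3]
```

Whether that kind of brevity matters to working developers, rather than to authors with page limits, is exactly the tradeoff in question.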
Or you could read it as a clueless argument ascribing disadvantages to Haskell that it in fact does not have. This would be more difficult if the person making the argument were Simon Peyton Jones, since he presumably isn't mistaken about Haskell as such, but he might still have erroneous beliefs about the world that affect Haskell's use in practice.
To take the discussion up a level, Rogers's factors in the diffusion of innovations are (perceived) "relative advantage", "compatibility", "complexity", "trialability", "reinvention potential", and "observed effects". If we accept this model, the failed diffusion of an innovation such as REST or Haskell doesn't necessarily imply that it has little relative advantage, or even little perceived relative advantage; it might be that it's incompatible with other established practices ("compatibility"), difficult to learn ("complexity"), hard to try without heavy up-front commitment ("trialability"), hard to repurpose for unintended uses ("reinvention potential"), or hard to observe in use ("observed effects").
In fact, the diffusion literature consists almost entirely of research on innovations that had great difficulty diffusing despite having dramatic relative advantages, at least according to the authors.
That still doesn't mean we should comfortably dismiss assertions that one or another innovation doesn't actually confer a relative advantage.