You know what else doesn't make sense in arithmetic? Computation. I think it is worth asking why mathematical definitions of computation didn't come about until the 20th century[1], even though math had already had most of its big breakthroughs (to date) by then (at the very least, there was no revolution in 20th-century mathematics on the scale of the one in physics). And yet, as advanced as math was, computation was defined only very late in the game. The reason is that, perhaps ironically, the concept of computation is absent from most of math, which is "equational".
But computation is the very core of computer science. In arithmetic, for example, 4 and 2 + 2 are the same thing because 4 = 2 + 2; equality in arithmetic means "sameness". In computer science, however, they are most certainly not the same, because the process of moving from one representation to the other is precisely what a computation does. Not only that, the process isn't necessarily symmetrical: moving in one direction may entail a much higher computational complexity than moving in the other.
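The asymmetry is easy to see with multiplication and factoring. Here is a minimal Haskell sketch (the names `smallestFactor` and `factorPair` are mine, and the trial-division algorithm is deliberately naive):

```haskell
-- Both sides of p * q = n denote the same number, yet computing
-- left-to-right (multiplying) is cheap, while computing right-to-left
-- (factoring) is, in general, far more expensive.

-- Smallest nontrivial divisor, found by naive trial division.
smallestFactor :: Integer -> Integer
smallestFactor n = head [d | d <- [2 .. n], n `mod` d == 0]

-- Split a composite number into a pair of factors.
factorPair :: Integer -> (Integer, Integer)
factorPair n = let d = smallestFactor n in (d, n `div` d)
```

`factorPair 10403` and `(101, 103)` denote the same pair; the computation is what moves between them, and one direction costs far more than the other.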
Any distinction between calculations and effects (other than actual IO) is, therefore, arbitrary. Whether the particular distinctions Haskell draws (where waiting for one second may or may not be an effect, depending on how it is implemented), in the form Haskell draws them -- pure functions, equational reasoning, and effects described with monads -- are actually useful and beneficial is a question for psychologists. Given Haskell's design, monads are certainly useful in Haskell, but that utility can't be generalized to languages with other designs.
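To make the "waiting for one second" point concrete, here is a sketch of how Haskell draws the line. The names and the loop bound are illustrative, not from any particular codebase; `threadDelay` is the real function from `Control.Concurrent`:

```haskell
import Control.Concurrent (threadDelay)

-- By Haskell's accounting, asking the runtime to pause is an effect,
-- so it lives in the IO monad:
waitASecond :: IO ()
waitASecond = threadDelay 1000000  -- argument is in microseconds

-- But a pure expression that simply takes a while to evaluate is not
-- an effect: slowSum is just an Integer, equationally interchangeable
-- with its value, even though producing that value burns real
-- wall-clock time. (The bound is kept small here so it runs quickly.)
slowSum :: Integer
slowSum = sum [1 .. 1000000]
```

Both functions make the caller wait; only one of them is an "effect" in Haskell's type system, which is exactly the sense in which the distinction is a language-design choice rather than a mathematical necessity.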