I think it's important to know that other boundaries exist, at least conceptually, but ultimately it's the practical considerations that matter. For example, for the tiers you listed, what would you say the storage cutoffs are today?
> data problems
I also occasionally find a lack of awareness of when a problem is data-heavy, encouraged by abstraction layers like ORMs.
This can lead to casual or naive (neither meant derogatorily) distributed computing, where the app hoovers up the data from the database to be processed. This can be great for anything CPU-intensive but terrible for anything I/O-intensive.
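As a sketch of the difference, here's a toy version using Python's stdlib `sqlite3` (the table and values are made up for illustration): the first query ships every row to the app and aggregates there, the second pushes the aggregation down to the database.

```python
import sqlite3

# In-memory table standing in for a production database (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(i * 0.5,) for i in range(1, 1001)])

# The "hoover" pattern: pull every row into the app, then aggregate there.
# Reasonable for CPU-bound per-row work; wasteful when the database could
# do it, since all 1000 rows cross the wire.
rows = conn.execute("SELECT amount FROM orders").fetchall()
total_in_app = sum(amount for (amount,) in rows)

# The data-aware version: push the aggregation to the database, so a
# single number crosses the wire instead of a thousand rows.
(total_in_db,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()

assert total_in_app == total_in_db
```

With an ORM in the way, the first pattern often appears by accident: iterating over a lazily loaded collection looks like plain application code but is really the hoover query.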
> Fits on one Big Hardware (Mainframe or other Specialized equipment)
I don't think this really exists today, unless you're counting an otherwise commodity high-end (e.g. 8-socket) server that carries up to a 4x price premium as "specialized equipment".
I'm aware that mainframes still exist, but, for a variety of reasons [1], I'd consider them as being in a world of their own, rather than a step on this continuum.
[1] e.g. inherently distributed architecture, no obvious storage-scale advantage over high-end commodity hardware, interop issues