You kind of got me thinking now. The decimal representation of a number is really a string representation (in the sense of a certain sequence of characters). Hence rounding to a certain decimal is essentially a string operation. You can of course do it by (say) dividing by 10^whatever or something else in some numerical fashion, but the more I think about it, the more natural it is to just think of the whole thing as a string.
Or you could flip it around and consider that the string manipulation can also be described numerically, so whether you consider the operation a string operation or a numerical operation is sort of irrelevant. It's just a point of view.
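To make that concrete, here's a rough sketch of half-up rounding done purely as a digit-string operation (non-negative decimal strings only, a hypothetical helper, not how any standard library does it). The interesting part is the carry propagation, which is exactly what the numerical version (multiply by 10^places, round, divide back) does in arithmetic, just in disguise:

```python
def round_string(s: str, places: int) -> str:
    """Half-up rounding of a non-negative decimal string,
    done purely by manipulating the digit characters."""
    int_part, _, frac_part = s.partition(".")
    # pad so we always have a digit to inspect past the cutoff
    frac_part = frac_part.ljust(places + 1, "0")
    digits = list(int_part + frac_part[:places])
    if frac_part[places] >= "5":
        # propagate the carry leftwards through the digit string
        i = len(digits) - 1
        while i >= 0 and digits[i] == "9":
            digits[i] = "0"
            i -= 1
        if i < 0:
            digits.insert(0, "1")
        else:
            digits[i] = str(int(digits[i]) + 1)
    result = "".join(digits)
    cut = len(result) - places
    return result[:cut] + "." + result[cut:] if places else result

print(round_string("0.199", 2))  # → 0.20
print(round_string("9.99", 1))   # → 10.0
```

Notice that the "string" version sidesteps the binary-float issue entirely: `round_string("0.285", 2)` gives `"0.29"` every time, whereas `round(0.285, 2)` depends on how 0.285 happens to land in binary. Which is arguably a point in favor of treating it as a string operation in the first place.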