Obviously, maths with the smaller representations will be quicker than with the array representation, so the interpreter does some work to use the smaller representations where possible. But if you try to, say, add two 64-bit signed ints together and the result would overflow, the interpreter transparently converts the integers into the array representation for you, so the overflow never happens.
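You can see this in Python, where ints are arbitrary precision by default: a sum that would wrap around in 64-bit two's-complement arithmetic is just computed exactly (a small sketch, not tied to any particular interpreter's internals):

```python
big = 2**63 - 1      # maximum value of a signed 64-bit int
total = big + big    # would wrap to -2 in 64-bit two's complement

print(total)         # 18446744073709551614, i.e. 2**64 - 2
assert total == 2**64 - 2
```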
So the first poster said that the default merge sort implementation on Wikipedia was buggy because it doesn't protect against overflow (assuming the implementation uses fixed-size integers). The second poster pointed out that if the implementation instead uses arbitrary precision integers, there's no chance of overflow, and the code always works as expected.
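The classic version of this bug is the midpoint calculation: `mid = (lo + hi) / 2` can overflow in fixed-width arithmetic even when `lo`, `hi`, and the correct answer are all representable. Here's a sketch in Python (I'm assuming that's the kind of overflow the posters meant), using a helper that simulates signed 64-bit wraparound; the hypothetical `wrap64` function isn't from either post:

```python
def wrap64(x):
    """Simulate signed 64-bit two's-complement wraparound."""
    x &= (1 << 64) - 1
    return x - (1 << 64) if x >= (1 << 63) else x

lo, hi = 2**62, 2**62 + 10   # large but individually valid indices

# In fixed-width arithmetic the naive midpoint overflows to a negative value.
assert wrap64(lo + hi) < 0

# The standard fix keeps the intermediate value small.
assert lo + (hi - lo) // 2 == 2**62 + 5

# With arbitrary precision integers, the naive form is also fine.
assert (lo + hi) // 2 == 2**62 + 5
```

That last line is the second poster's point: the "bug" in the pseudocode only exists if you assume fixed-size integers.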
You can look up "bigint", which seems to be the term of art for implementations of arbitrary precision integers in most languages. You can also read a bit about how they're implemented in Python here: https://tenthousandmeters.com/blog/python-behind-the-scenes-...