You don't have to use a growth factor of 2. Any constant growth factor strictly greater than 1 will give you amortized constant-time appends; the reallocation points form a geometric series, so the total number of element copies stays proportional to the number of appends.
> On more limited devices (see games consoles or mobile devices) you'll end up fragmenting your memory pretty quickly if you do that too often and the next time you try to increase your array you may not have a contiguous enough block to allocate the larger array.
If fragmentation is a concern, you can pre-allocate a fixed capacity. Or you can use a small-block allocator that avoids arbitrary fragmentation at some relatively minor cost in wasted space.
> Why copy these objects and incur that cost if you can choose a datastructure that meets their requirements without this behaviour.
We have an intuition that moving stuff around in memory is slow, but copying a big contiguous block of memory is quite fast on most CPUs. The cost of the occasional copy is likely to be lower than the cost of the cache misses you incur on every traversal of a non-contiguous collection like a linked list.
> as its really not suited to the constraints of games and the systems they run on.
For what it's worth, I was a senior software engineer at EA and shipped games on the DS, NGC, PS2, Xbox, X360, and PC.