The source array can be thought of as occupying the even offsets (0, 2, 4, and so on; the index shifted left one bit) relative to the double-sized array.
Push operations past the halfway point of the smaller array pair with values from the start of that array by (Index << 1) % n + 1; that is, a push at index i copies the elements at (i << 1) % n and (i << 1) % n + 1, where n is the smaller array's capacity.
When a removal operation, like pop(), selects a victim in the source array, the corresponding operations must also be applied in sync to the values in the larger array.
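The pairing rule above can be written as a tiny helper. This is just a sketch; the function name is my own, and n is assumed to be the smaller array's capacity:

```python
def paired_copy_indices(push_index, n):
    """For a push landing at push_index in the smaller array, return the
    two indices near the start of that array whose values get copied
    into the double-sized array, per (Index << 1) % n and that plus 1."""
    base = (push_index << 1) % n
    return base, base + 1

# With a 64-capacity small array: the first push lands at index 32 and
# pairs with elements 0 and 1, the second (index 33) with 2 and 3, etc.
```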
Let me walk through a simple version based on the description above, ignoring that it's rather inefficient:
Start with an array of size 64, with 32 elements.
Our first action is either a push or a pop. I'll split those up.
* * * If the first action is a push:
From here, call the existing array Small and make a new 128 element array called Big.
For the first push, store it in Small[32]. Then copy Small[0] and Small[1] to Big[0] and Big[1].
For the second push, store it in Small[33]. Then copy Small[2] and Small[3] to Big[2] and Big[3].
Then for a pop, just remove and return the element in Small[33].
Then pushing again, store it in Small[33]. Then copy Small[2] and Small[3] to Big[2] and Big[3]. Notice how this is exactly the same as the previous push. And if you undid two pushes, then did two more pushes, both of them would be the same as they were before.
So, analyzing this, once we push once, any arbitrary sequence of pushes and pops is going to do one of the following:
* Pushes always stay ahead of pops, but the difference stays under 32. This bounces around forever, undoing and redoing pushes. This is valid, and always uses the same amount of space for 33-63 elements, so space clearly stays linear.
* Eventually pops catches up to pushes. This means we're back to 32 elements, right where we started. Throw out the 'Big' array too. Going back where we started is valid and uses linear space. If the next action is a push, go to the start of the push instructions. If it's a pop, go to the start of the pop instructions.
* Eventually pushes minus pops reaches 32. So now our Small array is completely full, and our Big array is half full, and they contain exactly the same data. Throw out the Small array. Now we're back where we started, except with twice as many elements in an array twice as big. Use all the same logic as before, but with 2x numbers. This is valid and uses linear space.
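The push side of the walkthrough can be sketched in code. This is a minimal sketch of the growth phase only, under the same starting assumptions (a 64-capacity Small array holding 32 elements); the handoff once Small fills is omitted, and the state layout and names are my own:

```python
def make_state(small):
    """Growth phase: 'small' is half full; 'big' has twice its capacity."""
    return {"small": small, "big": [None] * (2 * len(small)),
            "count": len(small) // 2}  # number of live elements

def push(state, value):
    small, big = state["small"], state["big"]
    i = state["count"]          # next free slot in Small
    small[i] = value
    # Copy two elements from the start of Small into Big, per the
    # pairing rule: push number k copies Small[2k] and Small[2k+1].
    k = i - len(small) // 2
    big[2 * k] = small[2 * k]
    big[2 * k + 1] = small[2 * k + 1]
    state["count"] += 1

def pop(state):
    state["count"] -= 1
    value = state["small"][state["count"]]
    state["small"][state["count"]] = None
    return value
```

Note that a pop followed by a re-push redoes exactly the same copy, which is why the interleaving doesn't matter: after 32 net pushes, Big[0..63] necessarily equals Small[0..63] and Small can be discarded.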
* * * If the first action is a pop:
From here, call the existing array Big and make a new 32 element array called Small.
For the first pop, copy Big[0] to Small[0]. Then delete and return Big[31].
For the second pop, copy Big[1] to Small[1]. Then delete and return Big[30].
If we get a push, put it in Big[30].
If there's another pop, copy Big[1] to Small[1]. Then delete and return Big[30]. Notice how this is exactly the same as the previous pop. And if you undid two pops, then did two more pops, both of them would be the same as they were before.
So, analyzing this, once we pop once, any arbitrary sequence of pops and pushes is going to do one of the following:
* Pops always stay ahead of pushes, but the difference stays under 16. This bounces around forever, undoing and redoing pops. This is valid, and always uses the same amount of space for 17-31 elements, so space clearly stays linear.
* Eventually pushes catch up to pops. This means we're back to 32 elements, right where we started. Throw out the 'Small' array too. Going back where we started is valid and uses linear space. If the next action is a push, go to the start of the push instructions. If it's a pop, go to the start of the pop instructions.
* Eventually pops minus pushes reaches 16. So now our Small array is half full, and our Big array is one quarter full, and they contain exactly the same data. Throw out the Big array. Now we're back where we started, except with half as many elements in an array half as big. Use all the same logic as before, but with numbers cut by 2x. This is valid and uses linear space.
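The pop side admits a similar sketch. Again this covers only the shrink phase, starting from a 64-capacity Big array holding 32 elements, with the final handoff omitted; the state layout and names are my own:

```python
def make_shrink_state(big):
    """Shrink phase: 'big' is half full; 'small' has half its capacity."""
    return {"big": big, "small": [None] * (len(big) // 2),
            "count": len(big) // 2}  # number of live elements

def pop(state):
    big, small = state["big"], state["small"]
    state["count"] -= 1
    c = state["count"]
    # Copy one element from the front of Big into the same slot of Small:
    # the first pop copies index 0, the second index 1, and so on.
    j = len(small) - 1 - c
    small[j] = big[j]
    value, big[c] = big[c], None    # delete and return the victim
    return value

def push(state, value):
    # A push during the shrink phase just refills the last vacated slot.
    state["big"][state["count"]] = value
    state["count"] += 1

```

As in the push case, a push followed by a re-pop redoes exactly the same copy, so once 16 net pops have happened, Small[0..15] equals Big[0..15] and Big can be discarded.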
There.
Now, there are easy optimizations that could be done on top of that to cut the memory use in half, or combine pushing and popping into the same logic, or all sorts of other improvements.
And you want a special case to stop shrinking once the size gets too small.
But that should be a perfectly good basic explanation of an algorithm that's very straightforward and has no wiggle room for anything to go wrong.