It doesn't. It makes the consequences more dramatic if it works out (even accidentally!) how to create its own successor, because at that point the genie is out of the bottle and you won't get it back in.
Alpaca rides on LLaMA, and LLaMA was pre-trained on 1T tokens over a long run. The fine-tuning takes about an hour with low-rank adaptation (LoRA), but pre-training a new model takes months.
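For context on why the gap is so large: LoRA freezes the base weights and only trains small low-rank adapter matrices on top of them, so there's very little to update. A minimal sketch with Hugging Face's peft library (model name and hyperparameters are illustrative, not the exact Alpaca setup):

    # Minimal LoRA setup: freeze the base model, train only small adapters.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

    config = LoraConfig(
        r=8,                                  # rank of the low-rank update
        lora_alpha=16,                        # scaling factor for the update
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # typically well under 1% of the weights

Since only that tiny fraction of parameters gets gradients, the fine-tune runs in hours on a single GPU, while pre-training from scratch means pushing a trillion tokens through all of the weights.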
Yeah, maybe there is that possibility, but there is the possibility of a person doing that too. GPT-4 is probably less likely to be able to do that than a person is.