"This" does a lot of unjustifiable work here. "This" refers to your successful experience which, I assume, involved a program no larger than a few tens of thousands lines of code, if that, and it saved you only a few hours of work. The future you're referring to, however, is an extrapolation of "this", where a program writes arbitrary programs for us. Is that future inevitable? Possibly, but it's not quite "this", as we can't yet do that, we don't know when we'll be able to, and we don't know that LLMs are what gets us there.
But if we're extrapolating from relatively minor things we can do today to big things we could do in the future, I would say that you're thinking too small. If program X could write program Y for us, for some arbitrary Y, why would we want Y in the first place? If we're dreaming about what may be possible, why would we need any program at all other than X? Saying that that is the inevitable future sounds to me like someone, at the advent of machines, saying that a future where machines automatically clean the streets after our horses is inevitable, or perhaps one where we're carried everywhere on conveyor belts. Focusing on LLMs is like such a person saying that in the future, everything will inevitably be powered by steam engines. In the end, horses were replaced wholesale, but not by conveyor belts, and while automation carried on, it wasn't the steam engine that powered most of it.