Yes, an LLM can be periodically retrained, which is what is done today, but a human-level AGI needs to be able to learn continuously.
If we're trying something new and make a mistake, then we need to seamlessly learn from the mistake and continue - explore the problem and learn from successes and failures. It wouldn't be much use if your "AGI" intern stopped at its first mistake and said "I'll be back in 6 months after I've been retrained not to make THAT mistake".