If I decide that a problem is so hard that the best solution is to build a superhuman AI to solve it, then that is an approach a human-level intelligence can come up with, so a superhuman intelligence can come up with it too.
Self-improvement and self-replacement are probably not an AI's actual goals; they're just useful for most goals an AI might have. (And they're easier for the AI because, by that point, the prerequisite research has already been done.)
(If you knew I was trying to either cure cancer or colonize Mars, you could predict that I'd start raising money, even though those goals don't have much in common.)