Reality is never magical, by definition. Magic just means that we are using something without understanding it.
Whatever our brains are doing internally isn't magical. But it's magic to us because we don't know how it works. So too with current LLMs.
My point is that we're already doing things with LLMs that we don't understand and that we didn't think were attainable until two years ago. We don't know how to do superintelligence and recursive self-improvement... but we're off in uncharted territory already, and I think there are far more grounds for positive uncertainty about self-improvement than there were before GPT-3.