Well, to the extent that a human-level intelligence is an individual, anyway. We ourselves are probably a mixture-of-experts in some sense.
Also, for the purposes of talking about the phenomenon of recursive self-improvement, individual vs. society isn't the end of the analysis. Part of the reason AI recursive self-improvement is concerning is that people worry it could happen on timescales much faster than societal change, in ways that are not socially tractable the way human societies are (e.g. if our society is "improving" in a way we don't like, we or other humans can intervene to prevent, alter, or mitigate it).

It's also important to note that when we talk about "recursive self-improvement" in the context of AI, the "self" is not a single software artifact like Llama-70B. The "self" is AI in general, and the most commonly proposed mechanism is that an AI becomes better than us at designing and building AIs, and the resulting AI it builds is even better still at designing and building AIs.
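A minimal toy sketch of why that compounding matters (the growth function and constants here are purely illustrative assumptions, not a model of actual AI progress): if each generation's improvement is a fixed increment from outside designers, capability grows linearly, but if the increment scales with the capability of the thing doing the designing, it grows exponentially.

```python
# Toy comparison (illustrative only): "society improves AI" as a constant
# per-generation increment vs. "AI improves AI" as an increment that scales
# with the designer's own capability. Constants are arbitrary assumptions.

def external_improvement(capability: float, step: float = 0.1) -> float:
    """Outside designers add a roughly constant increment per generation."""
    return capability + step

def recursive_improvement(capability: float, gain: float = 0.1) -> float:
    """The AI itself does the designing, so the increment scales with its
    current capability -- compounding rather than additive growth."""
    return capability * (1 + gain)

ext, rec = 1.0, 1.0
for generation in range(1, 101):
    ext = external_improvement(ext)
    rec = recursive_improvement(rec)
    if generation % 25 == 0:
        print(f"gen {generation:3d}: external={ext:6.1f}  recursive={rec:10.1f}")
```

After 100 generations the external process reaches capability 11.0 while the recursive one is past 13,000. The point isn't the specific numbers, just that a self-referential improvement loop diverges from a societal one very quickly.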
Though… I still don't think it's true. Isn't "society is self-improving" what they call Whig history?
I.e., "you can only use the memory you currently use" would be a weird, artificial constraint, not one relevant in practice.