Yes.
> multiple agents model is able to process more context at each steps of the reasoning chain
What?
How can a multi agent model have more context at a single step? The single step runs on a single agent. It would literally be the same as a single agent.
The multi agent approach is simply packaging up different “personas” for single steps; and yes, it is entirely reasonable to assume that given N configurations for an agent (different prompts, different temperatures, even different models) you would see emergent behaviour that a single agent wouldn’t.
For example, you might have a “creative agent” to scaffold something and a “conservative” agent to fix syntax errors.
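To make the point concrete: a persona is just a per-step configuration of the *same* single-agent call. Here’s a minimal sketch, where `call_llm`, `PERSONAS`, and `run_step` are all hypothetical names standing in for whatever client you actually use:

```python
# Stub standing in for a real LLM API call -- same interface either way.
def call_llm(prompt: str, system: str, temperature: float) -> str:
    return f"[{system} @ temp={temperature}] {prompt}"

# The "multi agent" bit is nothing more than this table of configs.
PERSONAS = {
    "creative": {"system": "Scaffold a rough solution.", "temperature": 1.0},
    "conservative": {"system": "Fix syntax errors only.", "temperature": 0.1},
}

def run_step(persona: str, prompt: str) -> str:
    cfg = PERSONAS[persona]
    return call_llm(prompt, cfg["system"], cfg["temperature"])

# Each step is still one agent with one context window:
draft = run_step("creative", "Write a parser for X")
fixed = run_step("conservative", draft)
```

Note that `fixed` is produced by a single call whose input is the draft; nothing about having two personas enlarges the context of either call.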
…but what are you talking about with different context sizes? I think you’re mixing domain terms; the context is the input to a single LLM call. I don’t know what you’re referring to, but multi agent setups make absolutely no difference to the context size available at any single step.