I'm not an expert in ML theory, but my intuition is that more compute lets a model fit its training data better, at the risk of overfitting if the parameter count stays fixed. For a given problem space, each additional parameter adds a degree of freedom, which should enlarge the set of input-to-output mappings the model can represent (more answers to more questions). And if we define AGI as a network that can answer questions across n > 2 domains (e.g., doing image classification and running a chat bot, and synthesizing the two into a coherent system that passes a Turing test), then adding parameters makes sense as a way to widen the range of outputs.
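To make the fitting/degrees-of-freedom trade-off concrete, here's a minimal numpy sketch (my own toy example, not anything rigorous): the true relationship is linear, the training labels are noisy, and we compare a 2-parameter polynomial fit against a 13-parameter one. The extra degrees of freedom drive training error down, but on a small fixed dataset they also let the model fit the noise, hurting generalization. All the numbers (15 points, noise scale 0.3, degree 12) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# True relationship is y = 2x; training labels carry Gaussian noise.
x_train = rng.uniform(-1, 1, 15)
y_train = 2 * x_train + rng.normal(0, 0.3, size=15)
x_test = np.linspace(-1, 1, 200)
y_test = 2 * x_test  # noiseless targets, to measure generalization

def fit_and_score(degree):
    """Least-squares polynomial fit; returns (train_mse, test_mse)."""
    Xtr = np.vander(x_train, degree + 1)
    Xte = np.vander(x_test, degree + 1)
    w, *_ = np.linalg.lstsq(Xtr, y_train, rcond=None)
    train_mse = np.mean((Xtr @ w - y_train) ** 2)
    test_mse = np.mean((Xte @ w - y_test) ** 2)
    return train_mse, test_mse

train_lo, test_lo = fit_and_score(degree=1)    # 2 parameters
train_hi, test_hi = fit_and_score(degree=12)   # 13 parameters

print(f"degree 1:  train={train_lo:.4f}  test={test_lo:.4f}")
print(f"degree 12: train={train_hi:.4f}  test={test_hi:.4f}")
```

The high-degree model always achieves lower training error (it nests the low-degree one), but its test error against the clean targets is worse — which is the sense in which extra capacity needs to be matched to the problem, not just added.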
Interestingly, I don't think it's clear how parameters and compute alone would locate a region within a combined problem space where the mapping from question to answer gives sensible results. It seems like we need more tools to extend ML.