Right, I see what you're getting at. I do agree that AI systems will need to be able to use oracles, "current weather" is a great example of something a human also looks up.
The reason I want the model itself to learn Physics, Maths, etc. is that I think it is going to end up being critical to the challenge of actually developing logical reasoning on a par with or above that of humans, and to gaining a true "embodied understanding" of the real world.
But yeah, it would be nice to have your architecture support updating facts without full retraining. One approach is to use an oracle, as you note. Another would be to have systems do some sort of online learning (as humans do). (Why not both?) The advantage of the latter approach is that it allows an agent to deeply update its model of the world in response to new facts, as humans can sometimes do. Anything that I'm just pulling statelessly from an oracle cannot update the rest of my "mind". But this is perhaps a bit speculative; I agree that in the short term at least we'll see better performance with hybrid LLM + Oracle models. (As I noted, LaMDA already does this.)
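To make the contrast concrete, here's a toy sketch of the two update paths: a stateless oracle call whose answer is used once and discarded, versus an online update that folds the new fact into the model itself. Everything here (names, the dict-based "model") is purely illustrative, not any real system or API.

```python
def oracle_lookup(query: str) -> str:
    """Stateless retrieval: the answer is returned and consumed,
    but nothing about the calling model changes."""
    # Stand-in for a real oracle (weather API, knowledge base, etc.)
    facts = {"current weather in Paris": "14 C, overcast"}
    return facts.get(query, "unknown")

class TinyWorldModel:
    """A toy 'model of the world' as a mutable belief store."""
    def __init__(self):
        self.beliefs = {"Pluto is a planet": True}

    def online_update(self, fact: str, value: bool) -> None:
        # Online learning: the new fact is written into the model,
        # so it can influence every subsequent answer.
        self.beliefs[fact] = value

model = TinyWorldModel()
answer = oracle_lookup("current weather in Paris")  # model.beliefs untouched
model.online_update("Pluto is a planet", False)     # the model itself changed
```

The point of the sketch is just that the oracle path leaves `model.beliefs` unchanged, while the online-learning path rewrites it, which is the "deeply update the rest of my mind" distinction above.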
However, I think that a big part of Wolfram's argument in the OP is that he thinks an LLM can't learn Physics or Maths, or reliably learn static facts that a smart human might have memorized, like distances between cities. And that's the position I was really trying to argue against. I think more scale and more data likely get us way further than Wolfram wants to give credit for.