> in a highly controlled and predictable environment
Why this constraint? A common sentiment I see online (sorry to group you in) is "[tool] will be capable, actually, but only in a context that trivializes its usefulness."
I think modern post-training, like RLVR plus inference-time output-token scaling, can _probably_ scale until agents solve any computable task, even when placed in noisy or misconfigured environments; indeed, it already seems largely capable of that today. But it won't be economical for a long while.