ASI still runs at finite speed: it is limited by its hardware and by the speed of its interactions with the real world. It won't be able to recursively improve itself overnight if it only generates 10 tokens per second, and a second company could very well train an ASI of its own before the first one has time to do much.
You're not thinking of the second-order meta-system here. ASI isn't just one instance of an LLM responding to you in a session. It's an entire datacenter full of millions of LLM instances interacting with millions of users in parallel.