Lol no. Chinese AIs are definitely not "near-equivalent-capability". The empirical evidence is pretty obvious: how many people do you hear talking about their Codex/Claude Code subscription vs. their z.ai or Qwen subscription? Moreover, even the Chinese models require epic amounts of GPUs to run the full version; e.g. https://apxml.com/models/glm-51 lists 1515 GB just to run it, and that's with a measly 1024-token context. To get it onto your "$1k GPU" you'd need to quantize it, making it even dumber.
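For back-of-the-envelope purposes: weight memory scales linearly with bits per parameter, which is why quantization is the only lever for squeezing a frontier-scale model onto consumer hardware. A rough sketch (the parameter count below is a hypothetical placeholder, not a claim about any specific model, and it ignores KV cache and runtime overhead, which only make things worse):

```python
def weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate memory for the model weights alone.

    Ignores KV cache, activations, and runtime overhead, so real
    requirements are strictly higher than this estimate.
    """
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal GB

# Hypothetical 355B-parameter model, for illustration only:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gb(355, bits):.0f} GB")
```

Even at 4-bit that's still way past a single consumer card, and every halving of precision costs you quality on top.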