Copilot is so far beyond regular autocomplete that it's playing a completely different game.
I've been using it today while writing a recursive descent parser for a new toy language. I built out the AST in a separate module, and implemented a few productions and tests.
For all subsequent tests, I can just name the test and ask Copilot to write it. It will write out a snippet in my custom language, the code to parse that snippet, and the AST that my parser should produce, then assert that my output actually matches. It does this with about 80% accuracy. The result is that writing the tests to verify my parser takes easily a quarter of the time it has taken when I've done this by hand.
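To make the shape of those tests concrete, here is a minimal sketch in Python. The language snippet, the `parse` function, and the AST node names (`Num`, `BinOp`) are all invented for illustration; the author's actual toy language and parser are not shown in the post.

```python
from dataclasses import dataclass

# Hypothetical AST nodes; dataclasses give us structural equality for free.
@dataclass
class Num:
    value: int

@dataclass
class BinOp:
    op: str
    left: object
    right: object

def parse(src: str):
    """A tiny recursive-descent parser for whitespace-separated `a + b` chains
    (illustrative only, standing in for the real parser under test)."""
    tokens = src.split()

    def parse_num(pos):
        return Num(int(tokens[pos])), pos + 1

    def parse_expr(pos):
        left, pos = parse_num(pos)
        # Left-associative: fold each `+ <num>` into a new BinOp node.
        while pos < len(tokens) and tokens[pos] == "+":
            right, pos = parse_num(pos + 1)
            left = BinOp("+", left, right)
        return left, pos

    tree, _ = parse_expr(0)
    return tree

def test_parses_addition():
    # The pattern Copilot fills in from the test name alone:
    # source snippet, parse call, hand-built expected AST, equality assertion.
    snippet = "1 + 2 + 3"
    expected = BinOp("+", BinOp("+", Num(1), Num(2)), Num(3))
    assert parse(snippet) == expected

test_parses_addition()
```

Each test is the same four steps with different data, which is exactly the kind of repetitive-but-structured work an autocomplete model extrapolates well.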
In general, this is where I have found Copilot really shines: tests are important, but they're boring and repetitive, so they often don't get written. Copilot has a good enough understanding of your code to accurately produce tests based on the method name alone. So rather than slogging through copy-paste for all the edge cases, you can just give it one example and let it extrapolate from there.
It can even fill in gaps in your test coverage: give it a bare @Test/#[test] annotation as input and it will frequently invent a test case that covers something no test above it does.