Copilot has been so-so in my experience, but I still use it very often to infer TypeScript types automatically (the VS Code Cmd + I shortcut is excellent!).
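The kind of type-inference task I mean, sketched with made-up names (this is my own illustration, not anything Copilot actually generated):

```typescript
// A value whose shape I'd rather not type out by hand.
const payload = {
  id: 1,
  tags: ["a", "b"],
  meta: { createdAt: "2024-01-01" },
};

// The sort of explicit interface I'd prompt Cmd + I to infer from the value above:
interface Payload {
  id: number;
  tags: string[];
  meta: { createdAt: string };
}

// Assigning through the interface confirms the inferred shape matches.
const typed: Payload = payload;
```

Writing these interfaces manually is tedious and mechanical, which is exactly where an LLM completion shines.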
To me, it felt like Cursor blurred that line too much for my liking. The way I use LLMs is by giving them atomic, independent chunks of code that I need to review or refactor, which, in my experience, leads to far superior output. I'm sure there's some way to make that work with Cursor, but it just didn't click for me.
So now that Claude is coming to Copilot, I don't see any reason to consider Cursor.
Also, entirely anecdotally, the newer multi-modal features added to the OpenAI models _seem_ to have significantly degraded their other capabilities, especially coding in languages other than Python and TypeScript, and the models _seem_ more repetitive in their answers (likely to get stuck repeating the same incorrect information even after a correction). This could absolutely be a sampling or task bias, so your mileage may vary.
I've still found GitHub Copilot to be useful for VERY SHORT look-ahead/completion, but for anything longer than about a line it has almost always assumed too much, in the wrong direction. I haven't tried the Claude version of Copilot, but I'm absolutely switching over to it.
I have hope that `copilot-instructions.md`^1 will improve this!
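For anyone unfamiliar, that file lives at `.github/copilot-instructions.md` in the repo and is plain markdown. A minimal sketch of the sort of thing I'd try putting in it (the rules below are my own guesses at what might rein in the over-eager completions, not anything from the docs):

```markdown
# Copilot instructions (hypothetical example)

- Prefer short, single-line completions; do not speculate past the current statement.
- In TypeScript, emit explicit types rather than `any`.
- Do not repeat a suggestion that was already rejected or corrected.
```

No idea yet how strongly the completion model actually honors these, but it's the right shape of knob to expose.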