There's no way I'm risking a copyright violation, and I also don't like that Microsoft doesn't include its own code in the model.
From allegations I have read across the Internet, Microsoft might be doing those who use Copilot a favor.
[0]: https://www.theverge.com/2022/11/8/23446821/microsoft-openai...
This is a serious question; I'm apparently just unaware of the horror stories that can come out of breaking copyright. (Not from the US, so..)
For me it's most useful for helping with bash scripts and other small, simple stuff; it just saves a huge amount of time I would otherwise spend googling and double-checking small things. Not sure if copyright is relevant there or not; it certainly isn't something I'm worried about. Interested to hear what your fears are based on.
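A minimal sketch of the kind of "small simple stuff" meant here, where a completion saves a search: checking that required commands exist before a script proceeds. The function name and checked commands are illustrative, not from the original comment.

```shell
#!/usr/bin/env bash
# Hypothetical helper: fail early if any required command is missing.
require() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || { echo "missing: $cmd" >&2; return 1; }
  done
}

# ls and cat exist on any POSIX-ish system, so this succeeds.
require ls cat && echo "all tools present"
```

It's exactly the kind of boilerplate that is quicker to accept from a suggestion than to re-derive from a search.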
If you use code or art generated by an AI that was regurgitating training data, you can be sued for copyright infringement.
The way that AI gets training data these days is... questionably ethical. It's all scraped off the web, because the people who make these AIs saw court precedent for things like Google Books being fair use and assumed it would apply to data mining[1]. Problem is, that does nothing for the people actually using the AI to generate what they thought were novel code sequences or images, because fair use is not transitive.
[0] This won't work in current Copilot because a) I'm misremembering the comment phrasing and b) they explicitly banned that input from generating that output.
[1] In the EU, this practice is explicitly legal
"GitHub Copilot blocks your ability to learn" is a common refrain.
I don't see ANY industry-wide consensus on whether GitHub Copilot truly helps developers right now.
The only scenario I can get anyone to agree on is generating templates, i.e. JSON or CSS files that you then edit.
Every time I don't have to context switch to look up some technical errata in my browser is a complete win for me.
- learning a new code base
- learning a new (popular) library
- learning a new language
You could compare it to, say, eslint, but for other languages.
What is an idiomatic way of X with Y?
Well, Copilot will give you the answer 10-30 seconds faster than opening a browser and searching.
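A concrete (hypothetical) instance of that kind of "idiomatic way of X with Y" question, sketched in bash: reading a file line by line without mangling whitespace. The file path and contents are made up for illustration.

```shell
#!/usr/bin/env bash
# Write a small demo file to read back.
printf 'one\ntwo\n' > /tmp/demo_lines.txt

# Idiomatic bash line-by-line read: IFS= preserves leading/trailing
# whitespace, -r stops backslash interpretation.
while IFS= read -r line; do
  printf 'got: %s\n' "$line"   # prints: got: one / got: two
done < /tmp/demo_lines.txt
```

The `IFS= read -r` incantation is the sort of detail people forget and re-search; inline completion surfaces it at the point of use.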
I'm not going to argue its legal merits, but as a learning tool it's very much like having a smart linter.
The larger scale code generation is less obviously useful and usually wrong, I agree.
On the contrary, it frequently suggests code that adheres to better practices.
I haven't wanted to use it personally. I'm also not a senior dev or anything so my opinion is not worth much.
I've never heard anyone claim it blocks the ability to learn - if anything it's the opposite. Many people like how it shows you APIs you weren't aware of.
But I did find that I need to turn it off at certain stages when learning (or re-learning) a programming language. It seems counterproductive until you have a good grasp of the language's basic syntax. But the "showing you new APIs" does seem to be a thing that actually helps.
In general, you should not be accepting code completions that you don't understand. I'm usually stricter, in that I only typically accept completions that line up with what I was planning to type anyway.
Others have already pointed out the case as a reply.
In my experience, it's a quality-of-life improvement, but actual time to market is bottlenecked by things that aren't solved by Copilot, such as overall design, decision making, requirements gathering, code structure/architecture, solution ideation, user acceptance, infrastructure setup, etc.
I think if it eventually could help with those other tasks, you'd see time to market gains, and that would start to make it really valuable.