Put another way: asking GPT for stuff that it learned from Stack Overflow: good. Using it to post to Stack Overflow: bad.
As programmers we learn that adding a comment like:
// The limit is 16
to const SOME_LIMIT = 16
is bad because it is redundant information that serves no purpose to the reader and can easily drift out of sync in the future. So what's a good commit message for changing this limit? Ideally we want to describe why we've changed it, but this information isn't always available, so even when we're avoiding redundant comments we often use redundant commit messages like "increased SOME_LIMIT" to make browsing through history easier for others.
As we do not need to provide this information (it is already in the code), it seems like a reasonable idea for an AI to help us provide it.
In contrast, commit messages often stand alone: if you browse the history, you see only the messages, but a large number of them at once; if a commit changes more than one file, the commit message has to sum up the changes from all files.
In all those contexts, a simple, high-level description of what has changed can be enormously helpful.
I struggle to imagine a situation in which this is the case. Surely, even in the worst case of being told to make a particular change with no explanation given, you can at least drop an "increased from 5 at the request of ${name of your boss}", or "increased from 5, see ticket #${ticket number}" into a comment and/or a commit message.
You think? For some programmers, writing commit messages is like... I don't know, because I'm not one of them... some kind of torture? I bet the kind of person who likes this service would otherwise put in blank commit messages or, at best, ticket IDs.
If ChatGPT could change that to something like "disable current limits" or "disable safety checks" or whatever that might be marginally better.
Maybe prefixing them all with "gpt:" would help.
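A prepare-commit-msg hook could add such a tag automatically before the message is saved; here's a minimal sketch of the tagging step in Python (the "gpt:" prefix and function name are just illustrations, not anything the tool actually does):

```python
def tag_generated(message: str, prefix: str = "gpt:") -> str:
    """Prefix a machine-generated commit message so readers can tell it apart."""
    if message.startswith(prefix):
        return message  # already tagged, don't double up
    return f"{prefix} {message}"
```

Dropped into a prepare-commit-msg hook, this would let people filter generated messages out of (or into) their history views.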
It's just the same thing as with comments and "self-documenting code". The code tells you what (and if written carefully, it may be even somewhat effective at it). It can't tell you why. Neither can a GPT-3 summary of it.
I agree with you, but I'm assuming this could just send a diff and that context would be small enough to not leak.
Then again, if GPT can keep track of all the diffs...
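If the tool really does send only a diff, the leaked context can also be capped explicitly; a minimal sketch in Python (the size limit and function names are illustrative assumptions, not gptcommit's actual behavior):

```python
import subprocess

MAX_DIFF_CHARS = 4000  # arbitrary cap on how much context ever leaves the machine

def bounded(text: str, limit: int = MAX_DIFF_CHARS) -> str:
    """Truncate text so only a bounded amount of context is sent to the API."""
    return text[:limit]

def staged_diff() -> str:
    """Return only the staged diff -- never the whole codebase."""
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    ).stdout
    return bounded(diff)
```

The actual summarization call is left out on purpose; the point is just that what gets sent is the staged diff, truncated, rather than arbitrary repository contents.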
As pointed out by other comments, the commit message should be telling you facts about the change that are not evident from the change itself. GPT-3 can't tell readers why the change happened.
Taking a step back and thinking about what I have actually done often helps me to find misconceptions, the worst bugs of them all.
Automating this away would be like learning a foreign language by pasting book exercises into a translation app... you may get good grades, but does it help your understanding if you didn't put in the effort yourself?
https://mcmansionhell.com/post/618938984050147328/coronagrif...
I think the same phenomenon is at play here. Everybody sharing their own silly parrot tricks: it's the least interesting topic in the world right now.
- in Demo 1, the tool wrote "Switch to colored output..." while in the diff we can see that colored output was already present;
- in Demo 3, the tool wrote "Add installation options and demo link to README", while in the actual diff we only see a link being added, no changes to installation options.
Props to the author for being honest and not cherry-picking the examples.
… and those who tag you as a reviewer on +8,298, -1,943 commits/PRs with the commit message "JIRA-PROJ-84138".
At my workplaces, we've told people who do this to break up their larger commit into smaller ones before reviewing. If they haven't done that initially, well, their life is going to get harder for a few days.
This happens in environments where it takes hours for CI to let your change pass, making small commits prohibitively expensive in terms of time and infrastructure.
(And yes, I know the answer is: make it so CI that's part of review takes minutes, not hours.)
I even wrote an IntelliJ IDEA plugin 9 years ago [2]. Half as a joke, half to learn about IDEA plugin development. I'm puzzled by seeing so many people actually using it. Last month the HTTP link became invalid, and soon after someone opened a PR with a fix. I really hope no one actually uses those commit messages on shared repositories.
[1] https://whatthecommit.com/
[2] https://darekkay.com/blog/what-the-commit-plugin-for-intelli...
A lot of the commit messages were typical and sort of redundant, but this one stood out to me: https://github.com/zurawiki/gptcommit/commit/82294555e7269e6...
"Add github token to address GH Workflow rate limits"
This is a good commit message: it describes a problem and a solution. I'd be very impressed if the GPTCommit tool wrote this and knew why the GitHub token was being added.
2. If GPT-3 can write commit messages even close to as clear as you can, you're doing something wrong.
But my main thought is that I don't know about using this for anything closed source. Feed OpenAI's API your codebase, one commit at a time. Even if they promise not to train on your prompt history today, the ToS could change. Seems fine if you run it locally, though.
Would also be cool to generate commit messages while viewing history, it could really do a good job of orienting you. I'm imagining "human commit msg | gpt commit msg" so you can look at both. It's a little simplistic right now, kinda just describes the diff, but GPT-3.2 could rock.
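The two-column view itself would be trivial to render; a sketch in Python, with made-up message pairs (not the tool's actual output):

```python
def side_by_side(pairs, width=40):
    """Render 'human commit msg | gpt commit msg' lines for browsing history."""
    return "\n".join(f"{human:<{width}} | {generated}" for human, generated in pairs)
```

Feeding it (human message, generated message) pairs pulled from git log would give exactly the comparison view described above.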
This is far from horrible.
If I wrote the code, writing a commit message is trivial.
A readable summary for those who may not understand code - your developer will never write that.
This is amazing. Humans should only need to read commit messages, never write them.