I disagree on that. It's really a gray area.
If it's some lazy vibecoded shit, I think what you say totally applies.
If the human did the thinking, gave the agent detailed instructions, and/or carefully reviewed the output, then I don't think it's so clear cut.
And full disclosure, I'm reacting more to copilot here, which lists itself as the author and you as the co-author. I'm not giving credit to the machine, like I'm some appendage to it (which is totally what the powers-that-be want me to become).
> Claude setting itself as coauthor is a good way to address this problem, and it doing so by default is a very good thing.
I do agree that's a sensible default.
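For reference, the mechanism in question is the Co-Authored-By trailer at the end of the commit message, which GitHub and friends parse for attribution. Claude Code appends something along these lines (the exact name and email may differ by version):

```
Co-Authored-By: Claude <noreply@anthropic.com>
```

Copilot's flow, as noted above, can instead record the agent as the commit author outright, with the human demoted to the trailer.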
Using AI tools to code and then hiding that is unethical imo.
Pre-LLMs, various helper tools (including LSPs) would make code changes to improve the quality of the code - from simple things like adding a const specifier to a function, to changing the actual function being called.
No one insisted that the commit shouldn't have the human's name on it.
The gray area is in how much help is given. Does a tool that does most of the coding, thinking, and implementation for you still count as your work if you gave it the goal, guidance, and architecture? Yes. But what I want to know, as a peer of the person using such a tool (call it AI), is to what degree it did the work and how much of it you validated.
Since pre-AI tools were validated more thoroughly by the humans using them, I as a peer could trust the outputs to pass a basic level of quality; I knew the human was mostly involved in their production and creation. AI breaks this assumption: many humans are now producing outputs that require an unpredictable level of review by their peers. Since this is a big change in the norms of our society, I think it's OK to require that output be labeled as AI-assisted/generated, and/or that the degree and manner of AI involvement be specified, and to call it unethical when that isn't done. (I don't think it's necessary or helpful to call the use of pre-AI-era tools unethical or a lie, even though their contribution also goes undisclosed.)
Yes, it really depends on how much work the agent actually did. It could be as little as doing a renaming or a refactoring, or executing direct orders that require no creativity or problem solving. In which case the agent shouldn't be credited any more than the linter or the IDE.
Of course, most people don’t do that.
So even if I go over the commit with a fine-tooth comb and feel comfortable staking my personal reputation on it, I still can't call myself the sole author.
Now that the cost of writing code is $0, the planner gets the credit.
Like how you don't put human code reviewers down as coauthors, you also don't put the computer down as a coauthor for everything you use the computer to do.
It used to be the case that if someone wrote the software, you knew they put in a certain amount of work writing and planning it. I think the main issue now is that you can't know that anymore.
Even something that's vibe-coded might have many hours of serious iterative work and planning behind it. But without using the output or deep-diving into the code to get a sense of its polish, there's no way to tell whether it's the result of a one-shot or a lot of serious work.
"Coauthored by computer" doesn't help this distinction. And asking people to opt-in to some shame tag isn't a solution that generalizes nor fixes anything since the issue is with people who ship poor quality software. Instead we should demand good software just like we did when it was all human-written and still low quality.
It’s not about shame. It’s about disclosure of effort and perceived quality. And you’re right about the second part, but there’s even less chance of that being enforced or adopted.
"There is no commit by an agent user, for two reasons:
* If an agent commits locally during development, the code is reviewed and often thoroughly modified and rearranged by a human.
* I don't want to push unreviewed code to the repo, so I have set up a git hook refusing to push commits done by an LLM agent."
It's not that I want to hide the use of LLMs; I just modified the code a lot before pushing, which led me to this approach. As LLMs improve, I might have to change this though. Interested to read opinions on this approach.
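For the curious, here's a minimal sketch of what such a pre-push hook could look like, written in Python since git hooks can be any executable. It reads the refs git is about to push from stdin and rejects the push if any commit in the outgoing range has an author matching an agent pattern. AGENT_PATTERNS is an assumption on my part; match it to whatever identity your agent actually commits under.

```python
#!/usr/bin/env python3
# .git/hooks/pre-push -- refuse to push commits authored by an LLM agent.
# Sketch only: AGENT_PATTERNS is a guess, adjust to your agent's identity.
import re
import subprocess
import sys

AGENT_PATTERNS = re.compile(r"claude|copilot|noreply@anthropic\.com", re.I)

def is_zero(sha):
    # An all-zero object name marks a deleted or not-yet-existing ref.
    return set(sha) == {"0"}

# git feeds the hook one line per ref being pushed:
#   <local ref> <local sha> <remote ref> <remote sha>
for line in sys.stdin:
    local_ref, local_sha, remote_ref, remote_sha = line.split()
    if is_zero(local_sha):  # deleting a remote branch; nothing to scan
        continue
    # New remote branch: scan the whole history; otherwise just the new commits.
    rev_range = local_sha if is_zero(remote_sha) else f"{remote_sha}..{local_sha}"
    log = subprocess.run(
        ["git", "log", "--format=%H %an <%ae>", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    offenders = [l for l in log.splitlines() if AGENT_PATTERNS.search(l)]
    if offenders:
        print("pre-push: refusing to push LLM-authored commits:", file=sys.stderr)
        print("\n".join(offenders), file=sys.stderr)
        sys.exit(1)
```

One limitation: %an/%ae only catch commits where the agent is the recorded author; catching a Co-Authored-By trailer would mean scanning the full message body (%B) as well.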
Seems... Not that useful?
Why would someone make commits in your local projects without you knowing about it? That git hook only works on your own machine, so you're trying to prevent yourself from pushing code you haven't reviewed. But the only way that can happen is if you use an agent locally that also makes commits, and you aren't aware of it?
I'm not sure how you'd end up in that situation, unless you have LLMs running autonomously on your computer that you don't have actual runtime insight into? Which seems like it'd be a way bigger problem than "code I didn't review was pushed".
If you gave it four words and waited an hour, maybe you're not the author. But that's not how these tools are best used anyway.
IANAL, so I'd appreciate any legal experts correcting me here. In my understanding, there have been court decisions holding that LLM output itself is not copyrightable. You can only claim authorship (and therefore copyright) if you have significantly transformed the output.
If you are truly vibe coding, to the point where you don't even look at the generated code, how exactly are you transforming the LLM output?
Also, what if the LLM reproduces existing copyrighted code? There was a court decision in Germany last year holding that OpenAI violates German copyright law because ChatGPT can recreate existing song lyrics (which are licensed by GEMA) or produce very similar variations.