We recently had someone open a pull request to our open source project, and both the code and the explanation of the code were clearly AI generated. It was obvious that the code didn't work and that the person had not tested it. We don't know what their end goal was, but we confronted them and closed the pull request.
Have any other open source projects experienced this? What did you do?
At a glance, it looks like it's been mostly well received and hasn't been immediately closed as spam.
In our project, the user just copy-pasted the output from the AI tool and called it a day. They did not even bother to build the project and test it.
I have also started using AI tools, and they have made me much more efficient. I could have done a task without AI, but with it, it is much faster.
My ChatGPT workflow is give requirements -> have it create unit tests -> give it test results until it passes.
Been playing around with a generated-code-only project: https://github.com/JerkyTreats/scrivr/
In that workflow I don't really look closely at the code. In most cases I've found it isn't really necessary.
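That requirements -> unit tests -> iterate loop can be sketched in a few lines. This is a toy illustration, not anyone's actual setup: `ask_model` is a stand-in for the AI tool (here it just returns canned attempts so the loop runs end to end), and `add` and its test are made-up examples.

```python
# Canned "model" outputs standing in for real ChatGPT responses.
ATTEMPTS = [
    "def add(a, b):\n    return a - b\n",   # first attempt: buggy
    "def add(a, b):\n    return a + b\n",   # revised after seeing test results
]

def ask_model(prompt, attempt_no):
    """Placeholder for a real model call (e.g. pasting into ChatGPT)."""
    return ATTEMPTS[attempt_no]

def run_tests(code):
    """Execute the generated code, then run the unit tests against it."""
    namespace = {}
    try:
        exec(code, namespace)
        assert namespace["add"](2, 3) == 5
        return True, ""
    except AssertionError:
        return False, "add(2, 3) did not return 5"

prompt = "Write add(a, b) returning the sum of a and b."
for attempt_no in range(len(ATTEMPTS)):
    code = ask_model(prompt, attempt_no)
    passed, feedback = run_tests(code)
    if passed:
        print(f"passed on attempt {attempt_no + 1}")
        break
    # Feed the test results back into the next prompt, as in the workflow.
    prompt += f"\nTest failure: {feedback}"
```

The key design point is that the human only writes requirements and relays test output; the tests, not a code review, are what gate acceptance, which is exactly why the commenter says they rarely read the code closely.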
It can help set some context to the discussions.
TLDR:
Recently, a person has been using AI tools to generate code and open pull requests to open source projects I contribute to.
The code is entirely wrong and doesn’t work, and it is evident that the person making these pull requests doesn’t understand the code.
The person also copied explanations (which was an obvious giveaway as it sounded like a typical <popular AI tool> response) into the pull request and attempted to explain the code and answer questions from the reviewers.
We were polite and when it didn’t work, reported the person to GitHub.
I don’t want to shame the person publicly. But I want to make other open source maintainers aware that this is a thing and prevent them from wasting time and effort chasing such people down.
Maybe this was a naïve attempt at inflating their GitHub numbers? Some people use those as a credibility measure when applying to jobs or getting clients.
Devs who think they're "optimising their workflow" with AI are in lala land. Understand that soon you're not going to be needed to prompt ChatGPT to do your work for you. You're not going to be a 10x engineer, you're going to be an unemployed one. Your current workflow of copying ACs into ChatGPT then copying the output into VSCode will also be automated soon, obviously.
Learn hard skills now, as it will buy you a few years. We have all been given a death sentence and most people have yet to understand this. It's unlikely that, in the future we're building, humans will be needed at all, let alone our inefficient labour.
Good AI systems will do all the above.
Sorry to plug, but if you're a developer interested in building on top of langchain and building similar tech, please email me (in my profile). I'm a senior developer looking to collab.
So tell your maintainers to use that button more liberally -- it mostly just exists to save GitHub money / discourage these attacks. It doesn't hurt to click it for these "CV improvement" spam PRs, and it makes rejecting the PR a lot simpler if there's a red X.
I usually just scan the list of files changed by the PR, and if it isn't touching CI stuff, I let the actions run before doing the actual code review.
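A minimal sketch of that triage step, checking whether a PR touches CI config before letting actions run. The file list here is hypothetical; in practice it could come from something like `gh pr diff <number> --name-only` with the GitHub CLI.

```shell
# Hypothetical list of files changed by a PR (stand-in for `gh pr diff --name-only`).
changed_files='.github/workflows/ci.yml
src/parser.c
README.md'

# Flag the PR if any changed file lives under .github/, i.e. touches CI config.
if printf '%s\n' "$changed_files" | grep -q '^\.github/'; then
  echo "PR touches CI config - review by hand before running actions"
else
  echo "No CI changes - let actions run"
fi
```

The point of the check is that a malicious PR can rewrite the workflow files themselves, so CI results are only trustworthy when the PR leaves `.github/` alone.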
Formerly "hand wavy" questions about humanity, cognition, awareness are now showing up right in front of us. They are transforming into things like (a) is this PR worth my time? (b) does it introduce legal / license risk? (c) what principles were considered during its creation and so on.
I think one reason that open source even kind of works is because "people contributing to open source" has a shit-ton of positive selection bias "baked in". The type of person liable to open a PR to an open source project is probably several standard deviations (or at least one, right?) above the average developer. So that probably has the general effect of making reviewing a random cold-open PR a less onerous task.
But if we cross into a world where the average value of a PR opened with a project drops substantially - either due to AI or due to a permanent advertising campaign from some company rewarding badges for opening open-source PRs - I wouldn't be surprised to see lots of open projects close/ignore github PRs and start doing something that looks more like how Linux handles it, where it's a lot more social-based and puts some of that positive-selection-bias filter back in place.
I wrote more about what actually happened here: https://navendu.me/posts/ai-generated-spam-prs/