Most of the tests for any new functionality alone in our PRs well exceed 50 lines. Indeed we have 2-3x more test code than actual code. So should we be writing fewer tests? Merging the code first and the tests later? Merging the broken, codeless tests first and then the code?
It’s all ridiculous. Just make PRs that represent a unit change, whatever that means to you. The unit of functionality is independent of lines of code. Sometimes that is 12 lines, sometimes it’s 800. Yeah, large PRs are harder to understand, but that’s why you have tests. Also, XXL PRs aren’t usually happening every day. If they are, you have a very productive team and maybe you should count yourself lucky.
If it's because those PRs are getting reviewed more thoroughly, that's likely a great use of time.
If it's because the small PRs are getting bikeshedded, it's not.
If it's because of tooling, then the OP at graphite.dev has something to sell you. I'd be buying but we're a GitLab shop.
> Indeed we have 2-3x more test code than actual code. So should we be writing fewer tests? Merging the code first and the tests later? Merging the broken, codeless tests first and then the code?
I find that test code is rarely reviewed properly. If you want it reviewed properly, then create MRs that are just tests. You can create an MR with the change and happy path code, and then subsequent MRs with each of the rest of the tests.
If you're fine with tests not getting the same level of attention (and that's likely OK), then tests don't count as part of the 50 lines, IMO.
One PR has 1 context switch. 10 PRs have 10.
To say nothing of the cascading effects of CR on early PRs. Oh, you want me to rename a variable that I used in the first PR? Looks like I'm spending the next 10 minutes combing through my other 9 diffs updating names. That would have been a 5 second refactor if I only had a single monolithic PR.
I still think 600 is a lot, but there's a happy medium between "minimal change that excludes broader context" and "monster PR that has all context, all changes, and eats left-pad packages for breakfast". Line count shouldn't be the goal. The goal should be making as small a change as possible that is self-contained (ie all context is either within the PR or already in the codebase), and (IMO) that can be deployed to production as-is.
Sure.
How many times the code is touched after the PR has been merged is a better question.
Faster is not better. Better is not faster.
Which is in my view a very silly claim devoid of any context.
> large PRs are harder to understand but that’s why you have tests
I'd rather have comprehensible, well-reviewed code than test coverage (I'd rather have both, of course). We've got a mission critical module that is a crusty piece of junk that everyone is scared to change because of how much of a mess it is, even with its amazing test coverage (frankly the code is so bad that I assume the tests are just as bad anyway).
> XXL PRs aren’t usually happening every day. If they are, you have a very productive team and maybe you should count yourself lucky
I've never seen an XXL PR that improved productivity. They pile up tech debt, bugs, and down time. I don't care how fast the devs are moving if they bring prod down every week. That is absolutely unacceptable.
On a team of superstar, perfect developers that all understand the code base perfectly, I'd probably agree with you. If that's your situation then I envy you :) But I've never been on such a team.
As a rule, messy code is way easier to clean up in one big chunk. Teams with a CR size limit prefer not to touch such nests.
I guess, what I'm saying is, it depends on the team, on their seniority and capabilities, on the product, the culture, etc.
> Reviewing and merging 10x60 line PRs is, in my experience, more time intensive than reviewing one 600 line PR.
The article disagrees, and the article has data:
> Some folks might wonder if writing small PRs leads to less code output in total. We all want to be high velocity, but there are times when you need to be high volume. Consistently writing sub-20 line PRs will have a significant impact on your net coding ability - but interestingly, so will writing PRs greater than 100+ lines. The highest volume coders and repositories have a median change size of only 40-80 lines. I suspect this is because the volume of code is change-size * speed of changes. Too small of changes, and the faster merge time doesn't make up for it. Too large, and you start getting weighted down by slower review and merge cycles.
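The quoted claim (volume = change-size * speed of changes) can be sketched as a toy model. Note this is my own illustration, not Graphite's data: the overhead constant and the superlinear review-cost exponent are invented purely to show why throughput can peak at a mid-size PR rather than at either extreme.

```python
# Toy model of "volume = change-size * speed of changes". Each PR costs a
# fixed overhead (CI, context switches, review round-trips) plus a review
# cost that grows faster than linearly with size. Constants are invented.
def throughput(size, overhead=232.0, exponent=1.5):
    """Lines landed per unit of team effort at a given PR size."""
    cost_per_pr = overhead + size ** exponent
    return size / cost_per_pr

sizes = [10, 20, 40, 60, 80, 150, 300, 600]
best = max(sizes, key=throughput)
print(best)  # with these (invented) constants, the peak lands in the 40-80 band
```

Too small, and the fixed overhead dominates; too large, and the superlinear review cost does. The sweet spot in between is exactly the shape the article describes.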
I agree that you shouldn't be dogmatic about PR size - there's an art to engineering as much as it's a science. But part of that art might also be recognizing that "unit change, whatever that means to you" is - as you suggest - a flexible concept. The takeaway for me here is, given what the data shows, when you end up with a 600-line unit it might be worthwhile to try to, if possible, find a way to break that unit down.
How does that data prove anything? The ideal PR size is 50 lines because that’s the median size on GitHub? Seems like a pretty worthless claim.
I briefly worked with a hard line-count-limit for PRs and I thought it made everything much worse. It is fine for changes that are actually small, but when you need to go back and re-open 4, 5 merged PRs in different tabs to get the full context again, the time to review increases tenfold with tiny PRs that don't really make complete sense by themselves.
I have worked with co-workers that have the complete opposite preference, though, and anything over a set amount of lines wouldn't even be reviewed.
Interesting to see the numbers in the article, however. My anecdotal experience would make me guess the opposite. I feel like work slows to a crawl once the PRs are artificially limited and broken up like that, especially in fast-moving companies and startups.
I feel that in the last decade somehow there has been a loss of context of what is the purpose of version control.
Why do we bother with version control? Sure, there are many reasons.
But a primary reason is that if we discover behavior X turns out to cause regressions (or is otherwise a bad idea), we can easily revert it by reverting its commit.
That's why a single commit should contain a single behavior change, and contain the entirety of that single behavior change.
If the change is split among multiple commits, it'll be a pain to revert it. If the change is contained within a commit that changes other things, it'll be an even larger pain to revert it.
So that's my rule. A commit is a single behavior change, all of it, and nothing else.
You can't express that in lines. Might be 1 line might be lots.
That being said many companies (including some FAANG) use commit counts as a performance metric, thereby directly disincentivizing clean, readable commits.
You can't really say much about the first 4 PRs, because if you don't know how that code is going to be used, it's kind of hard to give meaningful feedback. By PR 5, it's too late to give feedback on PRs 1-4.
I also have worked with people that prefer that but I never understood it. Code without context isn't reviewable imo
It’s not always reasonable to do that, but you can often deliver incremental changes where each incremental change either brings a whole feature visibly closer to completion, or provides some self-contained utility, or makes some refactoring changes with the promise of “this will make it possible to do X, later”.
I just try to be reasonable about PRs. Everyone will have their preference, I just think as long as you can go back and figure out what happened, why the change was made, and do it quickly enough, then you're golden.
But this might only be a problem with those of us working on legacy codebases. The kind of PRs I see in OSS projects I could review 1000 lines at a time - it's so clean!
For my part, ever since I was a wee lad learning to program for the first time, I've been of the opinion that line counts themselves are a largely worthless metric for anything.
How many lines code takes up is too variable between languages, too variable within the same language, too dependent on irrelevancies like formatting, etc.
What actually matters is logical and conceptual complexity, not line counts.
Those small PRs might not be reverted, but there are probably a fair number of 50 line PR follow ups that end up modifying the original implementation, and using up more total time and effort than if it was just a larger PR implementing the feature in its entirety.
I wonder how many of the large PRs they say take longer to merge and get less comments are actually feature branches into which the smaller PRs are merged, and which we expect to hang open for long periods of time with little interaction.
This analysis seems to take a set of data without much investigation into what the data actually represents and makes an immense effort to find some conclusions about the context-less data.
The “ideals” are almost never a thing I see in my daily job, and I’ve worked for countless companies as a contractor.
For me, I feel like the difficulty score of performing an effective review is slightly greater than nlogn but less than n*root(n).
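Taking the parent's bounds at face value, a quick numeric illustration (with both bounds' constants arbitrarily set to 1): between n·log n and n·√n, a 600-line review is roughly 20-40x harder than a 50-line one, not the 12x a purely linear line-count metric would suggest.

```python
import math

# Plug PR sizes into the two claimed bounds on review difficulty.
def n_log_n(n):
    return n * math.log2(n)

def n_root_n(n):
    return n * math.sqrt(n)

ratio_low = n_log_n(600) / n_log_n(50)     # lower bound on the difficulty gap
ratio_high = n_root_n(600) / n_root_n(50)  # upper bound on the difficulty gap
print(round(ratio_low, 1), round(ratio_high, 1))
```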
300 is a bit beyond my favorite limit of 200.
> Our sample set
> All of the data-based statements in this piece are made using private and public PRs and repos that have been synced with Graphite.
If CI takes 15 minutes and I create PRs one by one, I’ll need to wait a total of 10 hours just for CI, which is impossible, I’ll need to use stacked PRs. If I’m using stacked PRs, that means I do all the work up front, and then try to painstakingly split up my final code into little 50 line chunks that build incrementally. I’m sure someone will say “oh just write incremental small commits to begin with”, but that never seems viable to me. Code and design evolve as I build in ways that would make the review of the first 50 lines I commit on the project pointless since none were retained in the final product. So not only do I have to build this feature, and write 40 PR descriptions, and write an overall meta-PR-description for the change in aggregate, I also need to cut my finished code up into puzzle pieces for reviewers to mentally re-assemble.
Anyways, maybe for maintaining mostly-done features 50 lines is nice, but I’d much rather a new feature or larger change be a handful (1-3) of larger (500-2000 line) PRs possibly accompanied by pair-review/walkthrough discussion.
Now, yes, it can be a bit more work to split up big PRs into smaller standalone changes that can be merged in isolation, but it seems plausible that the benefits might outweigh the extra work in many cases. The whole idea of a version control system is to create a useful history of changes over time; making each individual changeset simpler and easier to understand seems like an admirable goal.
I can make hundreds of 2-line PRs fixing typos, reformatting text, or making whatever other low-stakes changes are as small as this.
So, yeah, obviously those low-stake changes that don't really do much will be quick to review and merge. Does that make them any better? Why are we optimizing for review speed alone in the first place?
It's just silly statistics being thrown around. There's no correlation between lines of code or review speed and value added.
A larger PR that actually does something and requires more thorough review will inevitably take longer. Does that mean it would be better to have 10 separate PRs with no connection between them, different reviewers and 10x the feedback loops? This for me is missing the forest for the trees.
This honestly reminds me of way back when Agile/Scrum was the new hot thing, and my company implemented it where we ended up having 10x the stories taking twice the time.
Local optimization on these sorts of metrics is pretty much always going to lead to perverse outcomes.
If you can factor out that big long function into ten smaller functions that are each useful in isolation, then your code is probably going to be a lot easier to understand, regardless of how your structure your commit history. Future code reuse is also easier. Testing each function in isolation may be simpler as well.
Once you do all of that, putting each of the standalone functions in its own commit is trivial, and each commit can have a more focused commit message. Reviews are easier too, since you can easily view each function and its associated docs/tests/etc separately.
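A minimal sketch of that idea (an entirely hypothetical example, not from any codebase discussed here): one monolithic report function split into small functions that can each be committed, reviewed, and tested in isolation.

```python
# Each function does one focused job and could live in its own commit,
# with its own docstring, tests, and review comments.

def parse_prices(lines):
    """Turn raw 'name,price' lines into (name, float) pairs."""
    return [(name, float(price)) for name, price in
            (line.split(",") for line in lines if line.strip())]

def total(items):
    """Sum the prices of parsed (name, price) pairs."""
    return sum(price for _, price in items)

def format_report(items):
    """Compose the small pieces; this is all the 'big' function has left to do."""
    body = "\n".join(f"{name}: {price:.2f}" for name, price in items)
    return f"{body}\ntotal: {total(items):.2f}"

lines = ["apples,1.50", "bread,2.25", ""]
print(format_report(parse_prices(lines)))
```

Each of the three functions is useful on its own, so a reviewer can sign off on `parse_prices` without holding the whole report pipeline in their head.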
> Local optimization on these sorts of metrics is pretty much always going to lead to perverse outcomes.
Again, this is a straw man. Nobody is saying to blindly optimize to the metrics, or that this is a black-and-white thing where every commit needs to fit within a hard line limit. But, all else being equal, I've personally found that my code quality improves when I try to break things into smaller independent changes. I also find it much easier to review PRs where other people do that.
Same rat brain muscle that made me buy an extra 200,000 dollars of stuff I didn't really need on my home, but I spent 6 days at Home Depot fighting over which color of mustard to paint the guest bathroom.
And this is the classic theme behind bikeshedding. People can understand that small PR at a glance, and there's always multiple ways of doing everything. There's a perverse desire to demonstrate productivity, and nagging at small PRs is a way to do that without a lot of effort; ironically reducing actual productivity, see: Cobra Effect.
That big PR though? Saying something intelligible about it requires serious effort. In a low-skill environment, nobody will touch the PR for fear of demonstrating that they don't know what they're talking about.
I disagree with the premise of the article. The ideal PR is a well-encapsulated unit of work. Sometimes you need to fix a typo or handle an edge case, and you make a 1-line PR. Sometimes you need to add a new module and you add 3 thousand-line files and put little hooks in 30 more files.
Reviewing a 50-line code change in 60% of the time it takes to review a 250-line code change means the shorter change takes _three times_ as long to review per line.
So not only is it more time per line, it's also more lines.
Basically the only benefit is PRs per day, which is... not useful in itself.
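Working the quoted numbers through (normalizing T as the time to review the 250-line change), the per-line ratio comes out to 3x:

```python
# Per-line review cost: a 50-line change reviewed in 60% of the time of a
# 250-line change costs 3x as much review time per line.
T = 1.0                        # time to review the 250-line change (normalized)
per_line_small = 0.6 * T / 50  # per-line cost of the 50-line change
per_line_large = T / 250       # per-line cost of the 250-line change
print(round(per_line_small / per_line_large, 6))  # → 3.0
```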
You could argue that shorter PRs are easier to review and therefore people are doing more review, and that should result in better changes (indeed the 15% fewer reverts supports this).
Or you could argue that shorter PRs are harder to review as they have less context, and therefore more time is taken but that a better output is not necessarily being reached.
I think both of these are entirely plausible, and the truth probably depends on other process factors.
If you split 200 lines of code into 4 x 50 you are actually going to have 0.85 x 4 reverts or 3.4x reverts with this strategy.
But overall, the whole dataset is terrible and infuriating, because the changes in it are likely very different depending on the LOC.
Unless the metric is actually 40% faster per LOC but that's not specified.
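The parent's arithmetic spelled out (using the article's 15%-fewer-reverts figure; treating the revert rate as a per-PR rate is the parent's assumption):

```python
# If the revert rate is measured per PR, four small PRs at 0.85x the rate
# still produce more expected revert events than one large PR.
large_pr_reverts = 1.0                      # revert rate of one big PR (normalized)
small_pr_reverts = 0.85 * large_pr_reverts  # 15% fewer reverts per small PR
expected = 4 * small_pr_reverts             # four 50-line PRs instead of one 200-line PR
print(expected)  # → 3.4
```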
It cheers me up that people so naive exist.
Size is contextual to the thing the PR regards. Maybe it's one char. Maybe it's 500 lines. Maybe it's 10k lines. Screw this.
It was tested in smaller chunks in a feature branch, then the larger feature was merged. There was some testing done on the huge feature branch, but not everything that was present in the smaller PRs.
In our case, this new "feature" was a complete overhaul of our business logic engine. Over time, we had essentially developed 3 or 4 different API versions for various business tasks, with a lot of logic being duplicated between them. We want to bring it all under one umbrella, and after about 8 years, we think that we have a good idea of what sort of abstraction can work for the next 8 years.
Now that the engine is merged, we need to start the grueling task of actually moving logic from the old APIs into it and removing them from our codebase. We spent about 6 months (between 2 devs) writing the engine and I suspect it will take 2 years to move all of the logic into the new engine.
Idk where I was going with this comment anymore, but needless to say, I don't often see 50 line PRs at work!
There are too many facets to what makes a PR good; boiling it down to line count is silly.
Ex. A small code change is required when the solution requires it; forcing a 5-line change to become 50 lines would be a wildly bad idea. On the other hand, a 500-line PR solving one problem suddenly broken down into 10 PRs makes the review far harder, depending on the context.
This worked reasonably well in terms of forcing junior engineers to reason harder about what an atomic change should look like, but also impacted senior folk that were making atomic changes that encompassed broad areas of the codebase. We eventually reverted the change and relied on implementing CODEOWNERs to enforce senior devs to request changes when PRs got out of hand.
There's definitely nuance to enforcement of best practices across an entire organization; haven't really seen it done well although I've primarily worked at startups.
Changes were quite simple. 600 common lines are about rearranging a big module into multiple smaller modules. The 300 removed lines could technically be in a separate PR, but they're some cleanup aftermath related to the rearrangement.
PRs that are <50 lines are more often than not documentation, typos and small fixes in my experience.
Most real value add PRs end up bigger due to tests, documentation changes, and the actual change.
This for me is the typical claim that'll become an example of Goodhart's law when implemented. Random people that are not engineers will use this to claim "your PR should be 50 lines long" as some sort of absolute truth, when in reality the number of lines of code is absolutely irrelevant most of the time.
Context and details matter. Sometimes multiple small PRs are better. Sometimes a larger PR is better. It always depends. These absolutist claims that "this is better" is what leads to atrocious management practices that always end up getting summarized to a number, losing all of its meaning in the process.
Hadn't we already moved on from lines-of-code metrics decades ago? :facepalm:
For a large PR, I usually regroup my changes into smaller unit commits which can be reviewed commit by commit.
For an even larger (or more complex) PR, I'll make a set of stacked PRs which can be reviewed in pieces, each with a good description of what/why/how stuff is happening in that piece. Once all PRs have been approved, the lot of them can be deployed together.
I've at times had 3 PRs for a total of 3 line changes, because they were for a single table's schema migration picking new indexes and each had many factors to consider. The PR descriptions were of course much longer.
Not saying the claim is necessarily wrong, but it's also exactly what I'd expect given the source.
Showing some context per commit makes all the difference when I decide to do a first commit which fixes all formatting in the file that I’m visiting (for the real/substantive change).
Anyways, I don't so much care about the size of a PR as much as I care about how understandable it is. That generally ends up being a few rules of thumb like:
1. If you change a common function name, change it everywhere but please don't do anything else in the same commit.
2. If you need to make a change in code you are depending on to accomplish a goal in your own code, try to break the change to the dependency into another commit. There is a little bit of an art here to both understand how to break things up and to not break them up too small.
3. Try to make your history meaningful rather than just having a bunch of random commits like "fix", "forgotten file", etc.
At first I faced a lot of pushback from team members when I wanted them to break up the commits. So I caved. I just tried to give the feedback that I could give, and approved the PRs when they were “good enough, I guess”. They kept breaking in production and it would take a week to fix them. It was hard to tell what, in the commits, was broken. They were just so large.
And it’s not like these PRs were getting written quickly. People would spend two weeks or four weeks working on one PR, because they wanted to make one massive change that did everything and closed out a ticket.
At this point, I’ve made inroads and the team is sending out much smaller PRs, much more frequently. The PRs are getting reviewed, merged, and tested within an hour, and one developer can submit three or four PRs if they’re being productive that day.
I think the problem is that the cognitive load of making large changes is superlinear. A 2x larger change is not 2x as hard, but maybe 2.5x or 3x as hard. With large PRs, you spend way too much time just trying to understand what you are doing, as an author, or what you are looking at, as a reviewer.
1. The "unit of work" is small enough to decompose into a 50 line PR. This is possible in larger codebases and less verbose languages. It can enforce over-DRY code, which often makes it harder for everyone to understand what is going on.
2. The team responsible for reviewing is responsive. I have never worked at a company where PR reviews happened quickly. Everyone is busy, everyone is overworked, and often times PRs are a chore that can be put off until later. Of course you can make some labyrinthine chain of small PRs all merging into one big PR but the problem still exists. I'd personally rather review a 300 line complete PR written well than 6 PRs that I have to constantly recall context on.
This advice follows the same logic as the asinine cyclomatic complexity rules. Yes, in theory they are sound. In practice, it's more of a flexible benchmark rather than a hard rule. But since someone inevitably writes YASL (yet another stupid linter) that enforces these types of things, an enterprising work-focused engineer will end up spending either more time fixing code to please the linter or more time exploiting loopholes in the rules.
Just write Good Code (TM) - whatever that looks like.
Once PRs start to exceed 10k lines of code, they seem to become slightly “safer.” I suspect this is because the extreme end of PR sizes includes refactors, which maybe start including less functionality change and therefore have a slightly lower chance of breaking. Alternatively, engineers may become progressively more reluctant to revert PRs after 10k lines because of emotional anchoring and merge conflicts. The same goes for functions.
Please stop listening to this garbage advice.
> The highest volume coders and repositories have a median change size of only 40-80 lines.
This claim is rational under the Facebook axioms, so to speak.
> We flew down weekly to meet with IBM, but they thought the way to measure software was the amount of code we wrote, when really the better the software, the fewer lines of code.” — Bill Gates
Skimming a PR is pure cognitive dissonance; don't do this and then complain when your app turns into a dumpster fire.
Who wants to spend hours bothering with hardline process - you’re the programmer, you’re going to be the guy answering any questions on your pull; therefore you’re going to have to look at it yourself.