The idea of post-merge code review is horrifying to me. I guess this is the "move fast and break things" attitude in action.
Deploy now, catch bugs...sometime, maybe (TM)!
We have a modified GitFlow:
- main: the source of truth; it is what is in production.
- develop: the constantly moving branch we make PRs against. Committing directly here is discouraged, but it isn't a hard rule.
- ticket: a branch for each JIRA ticket, not each feature.
- release: we don't make these; we just use tags on main. Each "release" is a merge from develop to main, and that gets deployed.
- hotfix: made against main and merged back to develop when used. Rare enough that I have to look up our "official" procedure.
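The release step of that flow can be sketched with plain git commands. The branch names follow the comment; the throwaway-repo setup, the ticket name, and the v&lt;date&gt; tag scheme are my assumptions:

```shell
# Throwaway demo of the release step: merge develop into main, tag, done.
# (Assumes git >= 2.28 for `init -b`; the v<date> tag scheme is an assumption.)
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "initial"
git checkout -q -b develop
echo "change" > app.txt && git add app.txt
git commit -q -m "JIRA-123: ticket work merged via PR"
# A "release" is just a merge from develop to main plus a tag -- no release branch
git checkout -q main
git merge -q --no-ff develop -m "release: develop -> main"
git tag "v$(date +%Y.%m.%d)"
git tag --points-at main    # the tag marks what is deployed
```

A deploy pipeline would then only need to watch for new tags on main.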
With that we can easily use PRs, release code in small hidden chunks, do code reviews, etc.
Seems like the big win they got was releasing small hidden chunks of a feature and deploying them to staging. They also gave up some nice things, like code review before merging.
This kind of thing really grosses me out. Why can't you just include the issue number in relevant commit messages? Why does it matter what your branches are named?
Naming branches relevant to what they actually represent is incredibly important to me, personally. I don't care what you do but I refuse to play by this rule in particular, when it's a hard rule.
If you're a huge team with a slow release process then I guess you need that develop/master split, but it's costly. When I've worked in a small team we've had a single master branch and every feature branch gets released and deployed immediately after merge (with a "lock" so that you don't merge your feature until the previous person has signed off their deploy), with each feature branch ideally representing a user-visible agile feature (i.e. up to 2 weeks' work) - IMO you don't gain a lot by merging something that doesn't have a user-facing deliverable (how can you be sure the code you're merging is right or not?).
The main branch is always stable, releasable. Feature branches are branches off main, which are then merged back into main. No develop or release branches.
It is like trunk-based development, especially if branches are kept short and regularly rebased onto origin/main, but with a gate to run PR checks before merging.
Given the state of literally all software I use, this seems to be the default behaviour.
If you want to stretch it a little more, you could selectively do post-merge reviews for things that might be low risk (i.e. a UI change that's behind a feature flag that only your team sees), and keep riskier changes (like a big refactor, a data migration, etc) on the pre-merge review flow.
At least having another pair of eyes check text-only changes has caught so many typos, etc, and only takes a couple minutes.
1. Put all new code behind feature flags that are off by default in production.
2. Make rolling back easy.
3. Have extensive unit and integration tests.
Some of the deployment steps could be automated even further -- maybe the CI server automatically deploys Staging after a successful build. See the books Accelerate: The Science of Lean Software and DevOps, and The Toyota Way for more.
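Point 2 above can be made concrete cheaply: tag every deploy, and rollback becomes "check out the previous deploy tag and ship that build". A minimal sketch in a throwaway repo; the `deploy-NNN` tag names are my assumption:

```shell
# Deploy tags make rollback a one-liner (tag naming scheme assumed).
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email ci@example.com && git config user.name ci
echo "v1" > app.txt && git add app.txt && git commit -q -m "good release"
git tag deploy-001                       # tagged at deploy time
echo "v2" > app.txt && git commit -qam "bad release"
git tag deploy-002
# Something broke: roll back by rebuilding from the previous deploy tag
git -c advice.detachedHead=false checkout -q deploy-001
cat app.txt                              # back on the known-good build: "v1"
```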
How is that even possible? A comma change in a feature can break things, are you going to put that change behind a feature flag?
"New code" isn't just "new files/functions", so it's not always feasible to keep it behind flags, unless you use a "copy on write" methodology to all code.
I think the problem people have with trunk dev is they don't grok that some projects don't have the same deployment strategy as them. There is a thing called a code freeze. This is a common practice. Not everyone does it.
Just because you do trunk dev does not mean you can't also have a feature branch to try stuff out, or a release branch, or any number of other branches. What trunk dev means is: get your code out there to other devs quickly. Not necessarily get your code out to production or to QA or the customer quickly. Those decisions can be independent of branch strategy.
(Confession: At least I *think* this is an example of getting them confused. They are different things, right...? The same difference you mentioned?)
What is the cost of a bug getting into the wild vs what is the cost of keeping the bug rate very low?
If you're NASA or a high frequency trading company I'm guessing the cost of bugs can be very high. If you're making internal tools to automate admin tasks the cost of bugs is often very low.
I don’t think it's fair to pick something extraordinary to respond to the article. Obviously "no worries" doesn't apply if the stakes are high.
I've never used them but the thought of feature branches seems absurd versus simply merging small changes. Very rarely can a "feature" not be broken down into small self-contained changes.
As soon as your test suite grows to the point that users aren't likely to run ALL tests before pushing new changes, you can't.
Also, if you want to have code reviews at all, you want them pre-merge, especially for irreversible changes. For example: you have code that writes a serialized format. By the time the change is reviewed, people have already run it, even if only in testing/staging, so you now have data serialized in the bad wire format that you may need to be able to correct, and the correction code written after review will need to be maintained forever unless you accept the loss of the data.
I think: commit directly to master to scaffold and iterate quickly on a greenfield project with 1-3 devs. Then start doing feature branches and PR's to master.
If you want CD from some particular branch it would be silly to suggest the active dev branch.
Trunk dev is about how and when new code is merged.
It says nothing about when it is released.
It's incorrect to think that any people doing trunk dev deploy from trunk willy nilly.
Why?
I have run trunk for years, naturally you don't tag and deploy to prod if tests fail. But the world keeps turning if trunk has a test fail. Or a build breakage. Or a code style breakage. You just fix it. On trunk, everyone can see it and anyone can fix it.
Big fan of committing early.
The world stops for the other developers that were unfortunate enough to pull/rebase their work when there was a build breakage. Having people shout “don’t pull, master is broken” is more overhead than is lost on PRs.
A test failure isn’t as serious, since no one is completely blocked, but a CI build that has 5 test failures can easily pick up another 5 before 3 of the first 5 are fixed. Before the end of day 1 you have a dozen failing tests of 3-4 different ages, some of which aren’t attributed to those who broke them. Finding who should stop what they are doing and fix the tests is a job that needs to be done.
I guess some of this advice applies better to repos where a large number of people are working on it.
I couldn't imagine giving up the quality gate factor of PRs. Carving out the time to dissect changes catches so many bugs (although it can be received harshly sometimes compared to face to face).
Also, pushing to master vs. long-lived feature branches is a false dichotomy. You can have small PRs on short-lived branches that may not be a complete feature but can be merged without making the main branch unreleasable.
There is also the political factor to consider in companies where product and sales people control the selected work items. Once something is in a working state there is pressure to move on to the next thing. Fighting for quality before it is in a publishable state is a dev's best defence against later rework.
The CD community is overly obsessed with velocity. Of course removing obstacles can lead to a smoother faster workflow. Take it to the extreme and it becomes a dopamine hit activity, the goal is to merge changes fast and we become unable to take the time to think deeply and reflect since it is clear that we are valued by our rate of commits over smart decisions.
This insight is true and depressing. It's not the best way to focus effort. You fight for quality on a new feature that may or may not get use. By the time you know how well received the feature is it's too late to allocate resources to improving the code quality. So if you don't get it right the first time, this code might cost you months of wasted time when trying to make changes to it in the future.
It would be far better if time was allocated to going back and figuring out what parts of the code are causing problems and going back and fixing them. Spending time removing the features that don't get used.
How do you get the political buy in to do this? I have no idea.
She thinks modern society is so obsessed with doing the work that we skip over the step of finishing things properly. The result is that we don't emotionally or practically close the loop on our work. Everything is left in an "oh, maybe I'll come back to that" stage.
Finishing something should involve a moment of reflection where we notice and accept that we're no longer its steward. It's both a time for celebration (Yay! We did it!) and, often, a bit of mourning. ("Oh, that period in my life is over forever now. Huh.")
In the circles she moves in people think skipping over that step of closure is what causes burnout. For a dozen reasons we're just too keen to start the next thing, so we don't appreciate the work we've done. We don't celebrate. We don't move on. We don't clean up our code, even when we know we should. We end up feeling like we're juggling a dozen balls, because we're not really putting any of them down.
I don't know the solution to this in the workplace. But for my own work, I'm trying to find stopping points where I can take half a day off, go out for dinner and reflect on the passing of what I've accomplished. It feels really wholesome.
I think CD is about minimising the amount of code released in one go, which allows you to catch issues much faster and revert issues much quicker. Compare that to something most banks do, release once a quarter, and you'll get stuff like that UK bank that went down for days (can't remember which one it was).
I've yet to meet anyone saying you have to finish your features faster.
The tradeoff is that if you are pushing your code worldwide in an hour and you shipped a critical bug, your high velocity also creates outsized negative impact.
The points made by the author are confusing to me.
Quality Assurance was under-resourced. They had a huge job of checking and re-checking every feature to verify that there were no regressions. After merging a feature into develop, they had to check again to see if there were any new issues that were introduced by bad merges or conflicting feature requirements.
If this was the case and they were fine with QA testing just the `master` branch after moving to TBD, maybe QA shouldn't have been testing their feature branches in the original workflow. Just use branches for proper code review and have QA step in only after the branch is merged?

> The threshold of conflict was amplified by the time that passed between when a branch was cut from develop to the time when it was merged back. For bigger features, a branch's life could last one or even two weeks. The more time that passed, the greater divergence there would be from the other code.
Feature branches should be short-lived, as atomic as possible. And if you're working on a big feature, you have to update your branch frequently with upstream changes. Merges of Doom only happen if you're not following version control best practices. This also requires a little bit of planning upfront (especially if you're working in parallel on a single feature), but forcing that thought is a good thing.
It also seems like they attributed moving to Kanban as only being possible due to the move to TBD, but it's not like it's impossible with a proper branching workflow.
So, the author made the switch to TBD and attributed it to increased velocity and better _overall morale_, but I think they're just enjoying the seemingly greener grass across the fence for a while.
On what do you base this assertion? I would counter that people are switching because they’ve experienced significant difficulties and that your claims they “don’t understand” version control are unfounded.
You didn’t actually explain how these developers were misusing their version control. You just asserted that they are ignorant because they hold a different opinion.
> And if you're working on a big feature, you have to update your branch frequently with upstream changes. Merges of Doom only happen if you're not following version control best practices.
The problem with this glib statement is that it ignores the cost of these frequent merges. If you’re working on a small team, or your feature branch is only changing code that no one else is changing in the trunk, sure. Merge constantly. It’s probably pretty easy. If you’re working on a large team and others are making changes to the same code in the trunk, this rapidly becomes a massive tax. Merges become more and more difficult as others continue iterating on changes in the trunk unaware of the burden they are placing on you to manually merge on top of your conflicting changes.
I have never seen a long lived feature branch merged successfully back into the trunk without major friction. The typical resolution involves locking everyone else out of the trunk to get the merge in (or doing it over the weekend, for the same effect), and turning off half the tests because they all broke in the giant clusterfuck merge.
The validation risk to long lived feature branches is very high. Tests begin to break as you make deep changes in the feature branch. For simple tests, they might be easy to fix. For more complex tests, you might need the test owner to assist, but they don’t want to, because it’s not their problem. You broke it. Except of course you didn’t. The weeks of conflicting changes broke it.
The cost of divergence is high and grows rapidly with age of the branch. Developers begin cutting corners and plan to “deal with it later” as the merge tax rises, because dealing with the merges slows down the work, making the feature branch even longer lived.
For the merge pain part of your argument, I would tell you to have a look at any successful open source project. Does it use TBD? Does it still support a big community of developers, mostly working asynchronously and shipping working software?
Now, if you're working on an internal project and you add in a bit of planning (quick note: a developer's job should not consist only of writing code), you will be able to ship high-quality code without it slowing you down.
I'm irked by TBD because it is a shortcut. It trades deliberate collaboration for a velocity gain; if that is the tradeoff your team wants to make, maybe you should consider it. But I do not get why experienced engineering leaders suggest it for all teams.
I really wish he didn’t overload the term "Continuous Integration" to also mean "workflow without feature branches". It will surely cause a lot of confusion for those who aren’t fully down with the concepts already.
I can already foresee a small startup where the CTO-by-confidence/coincidence and one of the "senior" devs are having extremely heated circular arguments about the pros and cons of CI, not even talking about the same thing.
OPs "trunk-based development" seems like a more suitable term for what they’re describing.
It's true that as an industry we've overloaded the term Continuous Integration to mean build servers running automated tests (I just did so in a comment in this thread!), but that's where the overload is.
1) Consider main branch deployable any time, so don’t push changes that can’t be deployed
2) You can commit to main/master if it’s a reasonably small change not needing review
3) PRs used for more complex changes; easier to review
4) Deploys off main branch
5) Tests run on all branches
Works well 99% of the time; it can fall down when a large change is queued for deploy (just merged) and someone wants to push a minor change, but now it's got to be everything (unless you revert temporarily).
I earnestly do not believe there exists any change small enough to not need a review.
A minimum of two sets of eyeballs on every change. CI cannot detect intentional backdoors being introduced.
I’ve used both strategies on many different projects. Regardless of the development strategy, I’ve seen nasty merge parties. The way to avoid those merges is to reduce your batch size and keep your un-integrated changes to a narrow scope.
If you need to make changes outside of that scope: stash your work, create a new branch, make the change, and let your teammates know what you did before going back to your other branch. You can then integrate that fix back into your local copy. But the important thing is that your teammates can also sync that one-off change to their local copies.
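That stash / branch / fix / share / return dance translates directly into git commands. A sketch in a throwaway repo; the branch names are assumptions:

```shell
# Out-of-scope fix workflow: stash, branch, fix, share, return.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email dev@example.com && git config user.name dev
echo "base" > lib.txt && git add lib.txt && git commit -q -m "base"
git checkout -q -b feature/scoped-work
echo "wip" > wip.txt && git add wip.txt   # in-progress, not ready to commit
git stash -q                              # 1. stash your work
git checkout -q -b fix/one-off main       # 2. new branch for the fix
echo "fixed" > lib.txt && git commit -qam "one-off fix outside my scope"
git checkout -q main && git merge -q fix/one-off   # teammates can now sync this
git checkout -q feature/scoped-work
git merge -q main                         # 3. integrate the fix locally
git stash pop -q                          # back to where you were
```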
The worst thing is when two developers find the same bug and fix it simultaneously in different commits. Trunk-based and GitFlow both have this problem. Stick to the scope of work that was coordinated in your standup meeting for the day, and let your coworkers know if you need to go outside of that scope. Be conscientious.
(Complete aside: Try to do trunk-based development in a Perforce code base and you will learn a lot about reducing the batch size of your commits and communicating the scope of code changes. Perforce requires you to be team-oriented when developing.)
- The quality of these reviews is better than any code review I’ve seen on feature branch reviews
- We review the whole collection of changes going out, on rare occasions when two changes conflict, you catch these
- Everyone keeps up to date with what is changing in the code base and why
- If for whatever reason there’s an issue after the deployment, it’s easier to fix because everyone has fresh in their head what has changed in this release
Even suggesting waiting days or weeks to put something in production makes you feel like a caveman, in an age when we are benchmarked by the time from edit to production (or, heaven forbid, by the number of merges to production per day).
When I get a dump on my screen and have to communicate asynchronously and with in-line emojis (sigh) that leads to protectiveness. When we sit as a group it is much easier to get a common understanding of why and how we do things (it's not personal that you put your semicolons so differently from us). There's also at least the feeling that we have the possibility to learn from one another.
I still think it is a superior way of doing review: simultaneously, as a group. Of all the things we do, review is what gains most from high-bandwidth in-person communication.
I'm very curious. Curious enough to even try it.
Also, I’ve done this with smaller teams, 5-6 people. If you had three or four times as many that might make for a long meeting.
Also, I think these help replace what might have been other meetings as well. Sometimes product people like to sit in and listen so they know exactly what's being released--I don't think they'd ever participate or get much value from a normal code review.
Edit: I don't know about post merge code reviews, that seems like a risky idea to me at scale
1. For solo devs/founders, push directly to master. Iterate quickly to serve customers, focus on growth and being "not-dead" by default.[0]
2. The moment you have a 2nd dev working (most likely because you have some sense of product-market fit and some revenue growth), then create feature PRs off of master. Review apps on each feature branch (Heroku supports this easily).
3. 3-5 devs: Have a "develop" branch and PRs go into "develop". "develop" deploys to a staging app, which is tested, and if all is well, "develop" can be merged into master which deploys to prod.
4. > 5 devs: Then you can use the full Gitflow model, with develop/releases/master splits in your branches.
I find that the above works well to find the nice balance between productivity and risk-management. This also works nicely whether you are a consultant/services company working project by project with a client, or whether you're building a product startup. Doing the full Gitflow model as a solo dev is unproductive, and committing to master with a larger team is asking for disaster, especially if your app is critical to your customer business needs.
This keeps merge conflicts low and keeps screwing up master to a minimum.
Of course everything is context dependent (team size, codebase size, testing maturity, frequency of release), but I guess we found GitFlow seemed to be better aligned with modelling the "versioned software" approach - i.e. with releases determined by features, everybody knows/cares about what is in version 2.1 etc, you might have multiple releases all on the go at the same time with development happening for 2.1.1 bug fixes, 2.2 minor features, and maybe master has moved on to stuff that will ship in 3.x etc.
Definitely with less frequent releases (App Store Review taking 1+ weeks and being unpredictable probably had greater influence on release frequency) and a large team, we ended up with releases feeling quite painful - there was a lot of pressure to land features close to the deadline because the next release might not be for a while, meaning there was more churn on release branches as people tried to stabilise things that were borderline "ready".
Git Flow's release branching model added overhead - people might forget to merge back to develop, deal with conflicts or semantic brokenness if you had different targeted fixes on the release branch to mainline development (e.g. let's just disable this functionality for now on release branch and push it to the next version, fix forward on develop, merge release branch back to develop -> now it's disabled there too, have to unwind).
We switched many years ago (probably after reading https://barro.github.io/2016/02/a-succesful-git-branching-mo... ) to a time-based continuous release model - periodically cut a release from master (there are lots of teams shipping mobile apps on a monthly, 2 weekly or even weekly cadence these days), with trunk-based development and cactus branching for releases. If we need to hotfix just need to go to the most recent release tag/branch and ship a fix based off that point, no merging back between branches. If you need to fix a bug in the release, it's on you to make sure an appropriate fix is applied in both places, through e.g. cherry-picking, no need to worry about merging anything back anywhere.
Together with some other actions (cultural focus on reducing post-branch churn and investment in testing capability to gain confidence in release candidates faster, set in the context of a goal to increase release cadence; increased usage of feature flags; etc), we ended up significantly improving release frequency & predictability, which reduced the time for stakeholders to get changes shipped and visible to users from when they were "done" (especially valuable for small changes, of course).
Nobody ever really understood the utility of the "production" branch from GitFlow.
tl;dr GitFlow seemed to add overhead with little value; we switched to trunk based development and didn't look back.
Thanks for sharing. Just out of curiosity, how do you tag the release that is in production? Is it automated? I don't see how you can automate this step when you merge everything only to master.
The process we follow is actually very similar but having the master/develop branches we can automate things more easily, like if a commit is applied to master we know it needs to be tagged as it should come from a release branch, if a commit lands on develop we create an alpha release, if a commit lands to a rc branch a production release is submitted to testflight/playstore beta track for final tests before publishing. We automate all this with a jenkins pipeline. I am curious to see how this is done in a trunk based branching model. Care to share?
When a release is made you should merge those changes into other branches too. Then merge-conflicts or behavioural changes are found, and fixed, earlier.
YMMV
Instead I think we need module-ownership and tools supporting that. At any given time every module should be assigned to an owner-programmer or small owner-team. Only they can change their modules. Others can request changes, or create their own copy of that module to modify, but not modify code owned by someone else willy-nilly.
If programmers cannot modify modules owned by others there will be no merge-conflicts, right?
I wonder why this kind of code-ownership approach isn't more widely practiced and why there doesn't seem to be much tool-support for it?
Every ClearCase install I know of is an awful legacy system superseded in practice by git, even in environments that work on a need-to-know basis.
Lawyers and paralegals just ditched version control altogether, surviving on Office365 document sharing and distributed editing.
Super-user can grant and take away ownership to any module. Super-user should have a deputy or two as well.
Here's the metaphor: In New York City and all cities you have traffic lights and there are parking rules, and pedestrian crossings and bike-lanes and some streets are one-directional. What would happen if all the rules and traffic-lights were taken out? You could still get from A to B, but probably on average much slower because you would get stuck in traffic-jams much more often.
Perhaps counter-intuitively creating rules which restrict how you can drive and park your car do not make you move slower, they make traffic more efficient. Traffic jam is like a merge-conflict. Two cars merging on to the same narrow street from opposite directions. One of them has to back out. And then so do all cars behind it. Not fun.
Git etc. are a bit like city without traffic lights, one-directional streets and parking restrictions. Anybody can do whatever they want, branch and branch and merge and resolve conflicts. It gives you the impression of great flexibility and freedom, but so would a city without traffic rules. Yet all cities have realized they need to restrict what people do on their streets, to eliminate "traffic-merge-conflicts".
Now naturally you can say that your project has rules in place as to who can modify what code and when (do you?). But I think there should be tool-support for that and persistent module-ownership instead of "module communism" where everybody owns everything. When everybody owns everything no-one is responsible.
I'm not a big rules-guy but I think traffic lights do more good than bad.
The staging/prod candidate is built from the master/main branch as it was 21 days ago, plus potentially some (few) cherry-picks.
This lets us commit to master fearlessly, and the “T-21” then has 21 days of completely predictable future that can be changed only by cherry-picks.
So we can run intense testing on “T-0” aka main, find any issues and add them to be cherry-picked into the T-21.
The bonus is that when the fixes “arrive” to T-21, the cherry-picks become void and stop being applied.
Thus, absent the bugs, the staging/prod code automatically converges to main/master over time.
And yes, we do the reviews of the commits that go into master - but from the T-21 point of view they are 21 days in the future! So there is ample time for any reaction.
Would folks be interested in a more detailed write-up?
Branch feature from master/main.
Keep the feature branch in sync with master/main (merging from master/main every day). This minimizes conflicts when we finally get to merge the feature branch to master.
Any refactor made in feature branch may be cherrypicked to master any time. This reduces differences between feature branch and master/main, resulting in less code to review.
1. Do all your development and testing on one branch (develop). Vet its HEAD commit.
2. Once that's good to go, merge that into a different branch (master), and deploy a completely different commit to production!
The vast majority of devs that I have spoken to believe that, after merging develop into master, develop == master, and have no controls in place to actually guarantee that.
& in case you're thinking "I thought they were?"
* merge to master (master) # what shipped
|\
| * PR #184 (develop) # what was used by devs
Different commits == potentially different trees. (And, in a large enough company with enough commits and time, "potentially" drifts towards certainty.) A good shop will deploy master somewhere sane like a QA env first… but still. It's brain-dead. Truck-based development is simpler, less often crashes the minds of devs who can't be arsed to learn git, and what you test/dev == what you deploy.
Under normal conditions this should be trivially true. Develop forked off from main, no commits on main, therefore the merge back into main has no changes on the first-parent side and is equal to the second parent. The two ways to screw this up are:
1. You did a hotfix release and didn't merge the hotfix back into develop properly, or
2. You made changes on a release branch and forgot to merge that back into develop.
In both cases, the fact that the results of merging develop into main produced a different tree than what was in develop (or rather, in the current release branch) should be a signal that you screwed up somewhere.
Having said all that, I'm willing to bet that 98% of places that do Git Flow don't actually have any checks to ensure that the tree on main is identical to the tree on the release branch. And this is because the model is actually rather complex, most people don't understand Git properly, and the accessible documentation about Git Flow doesn't even bother to mention the possibility that the merge into main could produce a different tree.
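Such a check is cheap to add: compare the tree objects, not the commit hashes. A sketch of the guard in a throwaway repo (assuming git >= 2.28 for `init -b`):

```shell
# After merging develop into main, the two branches' *trees* must match;
# if they don't, a hotfix or release-branch change was never merged back.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email ci@example.com && git config user.name ci
git commit -q --allow-empty -m "initial"
git checkout -q -b develop
echo "feature" > f.txt && git add f.txt && git commit -q -m "feature"
git checkout -q main
git merge -q --no-ff develop -m "merge develop into main"
if [ "$(git rev-parse main^{tree})" = "$(git rev-parse develop^{tree})" ]; then
  echo "trees match: what you tested is what ships"
else
  echo "TREE DRIFT: main differs from develop" >&2
fi
```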
Trunk-based, that is.
For a moment I confused the 'truck' proposition with the 'bus factor' reasoning [1], which of course, is from a different context.
I am highly doubtful of the “commit first, test and code review later” model, though. Maybe this works for a very small, tight team who are all very highly skilled and care a ton about engineering quality. But this model can fall apart quickly. Bad code blocks the whole team and everyone pays the cost. You eventually end up with someone babysitting the build and test process to keep it moving and then some bright mind asks why you don’t put this stuff before checkin.
Certainly I think there are times when having multiple people pushing straight to a feature branch is good but I’d worry about a codebase where features touch so much of the same stuff that merges are so painful. But maybe OP’s environment is just alien to me and there are reasons for both of these things.
I'm a big fan of checking in small amounts of meaningful code and utilizing feature flags when necessary. Like others have mentioned tight feedback loops are key. I have never felt like I needed to ditch branches though. It honestly sounds like a nightmare to commit directly to main in a team setting. On the other hand, maybe it would force developers to think twice about what they're changing.
If you reading this do this, I'd be interested in hearing what the benefit of pushing to master is for you. Is it purely for velocity? And if so why is velocity that important?
I wonder how the author's code base will develop using a weekly reflection on the code committed instead of PRs. I'm not sure how well their process will handle someone's poor design decisions. My experience has been that poor decisions tend to stay in code once they are there, and especially as more things become dependent on those decisions. Not that reviews will catch everything, but that second set of eyes can go a long way.
After causing a few outages and issues that you have to hear about, one learns fairly quickly not to write code that needs endless PR reviews.
CI/CD is the best and most impactful thing in years and optimizing for it is the best you can do IMO. Anything that gets in the way of CI/CD you want to avoid. Anything that helps with CI/CD you want more of it.
Using that as a guide post, trunk-based development is better than feature branches. Kanban is better than Scrum. Monoliths vs Microservices is less clear cut, depends on how costly the monolith is to build vs how annoying all the services are to deploy.
That might be unusual, but even GitFlow in that case is very poor at handling those kinds of deliveries.
My process over the last 7 years working in game dev has slowly evolved to this where I'll ask a contractor to implement a feature on the main branch but make sure that it is completely toggleable via a #define NEW_FEATURE
I thought I was pretty clever until I read an article on HN a few years ago that this is exactly what Google does lol
The hairy part is when the new feature or change cuts across different parts of the code; then, given enough time, those 'ifs' pollute the codebase. The solution here is to be mindful of how the system grows, keeping things isolated and (perhaps counter-intuitively) favoring duplication over shared logic/libraries. That holds regardless of whether it's one big thing (i.e. a monolith) or multiple small things (even different projects).
I'm not saying it's necessarily a bad idea. Just that it isn't a replacement for abstraction. If you have more than a couple of #ifdef blocks per feature, you might be setting yourself up for future pain.
All developers develop features; we then cherry-pick the features we want in the next release, cut a release branch, merge in those features, test (and fix if necessary), then merge into main.
Any answer you could possibly get from this question will eventually boil down to project repos vs monorepo. There are pros and cons to each, which are more or less meaningless depending on the amount of developers working in parallel.