It should be hard to make breaking changes in common code. Even 'trivial' breaking changes have a way of breaking things when they shouldn't. If you need to make a breaking change to common code, the proper way to do it is to add the new functionality separately, deprecate the old functionality (e.g. with a Javadoc @deprecated tag so it gets called out explicitly by the IDE), and migrate consumers one by one until none are using the deprecated version anymore.
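A minimal sketch of that deprecate-then-migrate pattern in Java (the `StringUtils`/`slugify` names are hypothetical, just to illustrate): the new functionality is added alongside the old, and the old signature is marked deprecated so the IDE flags every remaining caller.

```java
public class StringUtils {
    /**
     * @deprecated Use {@link #slugify(String, char)} instead; this version
     * hard-codes '-' as the separator. Remove once all consumers migrate.
     */
    @Deprecated
    public static String slugify(String input) {
        // Old API delegates to the new one, so behavior stays identical.
        return slugify(input, '-');
    }

    /** New API, added separately: the separator is now configurable. */
    public static String slugify(String input, char separator) {
        return input.trim().toLowerCase()
                    .replaceAll("\\s+", String.valueOf(separator));
    }

    public static void main(String[] args) {
        // Un-migrated callers keep working (with a deprecation warning)...
        System.out.println(slugify("Hello World"));      // hello-world
        // ...while migrated callers use the new signature.
        System.out.println(slugify("Hello World", '_')); // hello_world
    }
}
```

Once the last `@Deprecated` call site is gone, the old overload can be deleted without breaking anyone.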
You should apply care when making breaking changes, but having it be hard is a separate issue - I'd say the distractions of multi-repo tooling introduce more risk overall. Having a unified CI system in a monorepo is really nice.
I haven't lost anything, I've gained the ability to make breaking changes because I don't have to update everything that breaks all at once. I don't have to do it at all because that's the job of the team responsible.
With a monorepo, what happens when there are 17 projects using the common code and I'm not familiar with 16 of them? Do I have to dive into the code of all 16 and fix them?
That is one viable workflow: Make a change to the common code and publish it as a new package version while allowing all existing code to continue to use the old package. Then, migrate other projects to the newer version of the dependency one by one.
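In npm terms that workflow looks something like this (the `@acme/common` package name and versions are hypothetical): the breaking change ships as a new major version, and each consumer keeps declaring the major it currently works against.

```json
{
  "name": "consumer-a",
  "dependencies": {
    "@acme/common": "^1.4.0"
  }
}
```

A migrated consumer would instead declare `"@acme/common": "^2.0.0"`. Because semver caret ranges never cross a major version, un-migrated consumers stay on the old release until each one is updated deliberately.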
Allowing multiple versions of the same code to exist in production at once adds complexity. It's a trade-off.
Also, if you're doing this with code that is ultimately webpacked to run in a web browser, and you don't pay attention to the full tree of dependencies you're working with, there's a chance you end up loading two versions of the same library into a single web page, increasing the page weight and possibly causing incompatibilities in event handling.
Google prefers to simply have one master version of the entire company's code at a time.
I've spent a lot of time wondering which solution is the best and I'm still not sure.
You probably should have an easy way to visualize bundle-size increases in PRs, so that this becomes obvious. Alternatively, some package managers like Yarn let you flatten the dependency tree, forcing you to pick one version of everything. Even with a monorepo, since you'll likely be using 3rd party dependencies, this is always an interesting exercise because of how hard NPM makes it: getting to a point where you only have one version of every 3rd party package can be very, very hard, as some combinations of libs are mutually exclusive.
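If you're on Yarn classic, the flattening mentioned above can be done with the `resolutions` field in `package.json` (the package names and versions here are just examples):

```json
{
  "resolutions": {
    "lodash": "4.17.21",
    "**/react": "17.0.2"
  }
}
```

This forces every transitive dependency onto the pinned version, which is exactly where the mutually-exclusive combinations surface: if some library genuinely requires a different major, the install or the build breaks instead of silently shipping two copies.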
I don't think there's a universal answer to your question.
The idea that clients can run on the old library forever is a nightmare, especially for security-relevant changes. When I see a binary built yesterday I want it to contain yesterday’s code, not a copy of libpng 0.1 from 1996.
You can send your pull request to the affected team leads and request that they approve it once they've made the corresponding changes on their end.
I mean, the alternative is that you have 17 different projects, each using one of five different versions of the common code. Heaven forbid one of them makes an incorrect assumption about another. Getting 17 different teams to dance together around a breaking change is always going to be hard.
This is an issue that needs to be managed, and in the systems I've seen it tends to be managed poorly - in both mono-ish repos and multi-repo setups, as well as when everyone is using third-party packages. I don't think committing everything to trunk is a good way to resolve it, though; the only upside to that approach is that it might force you to resolve it.
What I have to deal with much more frequently is the opposite problem: we have an urgent update that will break several things but has to be deployed for one dependent binary ASAP, and fixing the rest of the universe first is not an option.
Worst case, it creates security issues: something that should have been a clean breaking change gets kludged into a non-breaking one but is still broken underneath.