Suppose the binaries are build byproducts, and people just check this stuff in, like, whatever. Well, if somebody needs to sign off on the output, that's a problem - so that person then doesn't use what's in the repo, but instead builds the output from scratch, from the source code, hopefully with known build tools (see above!), and signs off on whatever comes out.
But, day to day, for your average build, which is going to be run on your own PC and nowhere else, nobody need sign off on anything. If you link with some random object file that was built on a colleague's machine, say, then that's probably absolutely fine - and even if it isn't, it's still probably fine enough to be getting on with for now. If you work for the sort of company that's worried about this stuff, there's a QA department, so any issues arising are not going to get very far.
Overall, this stuff sorts itself out over time. Things that are problems end up having procedures introduced to ensure that they stop happening. And things that are non-problems just... continue to happen.
For simple things, if the code in a directory changes then the CI system rebuilds that directory. You can have the CI system either validate that the checked-in binary matches or commit the binary itself. For more complicated things you'll want a build system such as Bazel, which figures out what changed.
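A minimal sketch of the "rebuild the directories whose files changed" idea. The function just maps changed file paths to their unique top-level directories; in real CI you'd feed it something like `git diff --name-only HEAD~1 HEAD` and then run each directory's build. The paths here are made up for illustration.

```shell
# changed_dirs: read changed file paths on stdin, print the unique
# top-level directories that need a rebuild.
changed_dirs() {
    cut -d/ -f1 | sort -u
}

# In CI this input would come from `git diff --name-only HEAD~1 HEAD`.
printf 'x-src/main.c\nx-src/util.c\ny-src/lib.c\n' | changed_dirs
# prints: x-src and y-src, each once
```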
Depends on whether you want to wait for the CI system to upload, and whether you want CI to have commit permissions.
>Never mind that having a bit-for-bit reproducible build is incredibly difficult.
Debian will be at something like 90% reproducible packages once they fix two outstanding things. Most languages now have settings and best practices that give reproducible builds.
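Here's the core of the reproducibility problem in miniature, without a compiler. A naive archive embeds timestamps and filesystem ordering, so two runs of the same command differ bit-for-bit; pinning mtime, sort order, and ownership (the same class of fix that `SOURCE_DATE_EPOCH` standardizes for real builds) makes the output stable. Requires GNU tar; file names are made up.

```shell
# Two archive runs over identical content, with nondeterministic
# inputs (timestamps, ordering, ownership) pinned to fixed values.
mkdir -p demo
echo 'hello' > demo/file.txt
for n in 1 2; do
    tar --sort=name --mtime='2020-01-01 00:00Z' --owner=0 --group=0 \
        -cf "demo$n.tar" demo
done
sha256sum demo1.tar demo2.tar   # both lines show the same hash
```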
>Anyway, such simple cases where a whole app lives under a single directory are vanishingly rare.
Then use Bazel once you get past that stage.
Look, to be blunt, it seems like you're trying to nitpick whatever anyone says while ignoring large parts of the answers. The fact is, many people at small and large companies use monorepos successfully. They work for those people; you can keep trying to argue that they don't, or you can try to learn why they do.
This doesn't work well for dependencies where you're expected to be using the latest version of something that changes 10 times a day.
The rest of your questions are fairly irrelevant, as they would be answered the same way as in the dependency-repo case: use official binaries.
...but this is closer to multi-repo than monorepo. If you're in a monorepo you might as well use the source.
By the CI. All major CI/CD tools support rules like:

- build binary x whenever a file under x-src/* changes;
- commit binary x when the ref matches /v[0-9.]+/;
- don't allow developers to manually push to these refs/paths;
- (run a script to) bump the dependent x of y whenever binary x changes;
- merge the bumped version if all tests still pass;
- etc.
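A sketch of the ref-matching rule, i.e. "commit binary x only when the ref looks like a release tag". In a real pipeline the ref comes from the CI environment (e.g. `GITHUB_REF`); it's hard-coded here for illustration.

```shell
# Gate the "commit the built binary" step on the ref name.
REF="refs/tags/v1.2.3"   # would come from the CI environment

case "$REF" in
    refs/tags/v[0-9]*)
        echo "release ref: commit built binary" ;;
    *)
        echo "non-release ref: build only, don't commit" ;;
esac
# prints: release ref: commit built binary
```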
A compiler is just a program that takes some input and creates some output. Both the compiler and the input can have a cryptographically secure hash. Putting both in a sealed box, like a docker image, with its own hash, gives you a program that takes no input and produces some output.
If the box changes, run it on a trusted machine and save the output together with a signed declaration of which box version produced it.
(See also: trusting trust)
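The sealed-box idea above can be sketched with plain hashing: one hash over the toolchain plus its input pins the whole zero-input "program", so "box hash X produced output Y" becomes a checkable, signable claim. The file contents here are stand-ins, not a real toolchain.

```shell
# Stand-ins for the toolchain and the source being compiled.
printf 'fake-compiler-bytes\n' > cc.bin
printf 'int main(){return 0;}\n' > input.c

# One hash over both files identifies the whole sealed box.
box_hash=$(cat cc.bin input.c | sha256sum | cut -d' ' -f1)
echo "box: $box_hash"
# Any change to either file changes box_hash, so a signed
# "box $box_hash produced output Y" declaration is verifiable.
```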
Edit: I think we're getting there though. With all the efforts going on with containers, WebAssembly, blockchains, IPFS and so forth, it's getting closer.