I'm not sure if I understand your argument.
> Branch test coverage is different from line coverage and in my opinion it should be the only metric used in this context for coverage.
It's not different so much as more thorough. Line and statement coverage are still useful metrics to track: they might not tell you whether you're testing all code paths, but they do tell you that you're at least testing some of them.
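To make the distinction concrete, here's a minimal sketch (the function and test names are hypothetical): a single test can execute every line of this function while exercising only one of its two branches.

```python
def apply_discount(price, is_member):
    # The `if` has no `else`, so every line can run
    # without the False path ever being taken.
    if is_member:
        price = price * 0.9
    return price

# This one test gives 100% line coverage: every line executes.
# Branch coverage is only 50%: the is_member=False path
# (falling through the `if`) is never exercised.
def test_member_discount():
    assert apply_discount(100, True) == 90.0

test_member_discount()
```

A tool like coverage.py reports this as full line coverage unless branch measurement is explicitly enabled, which is exactly why the two metrics can diverge.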
Very few projects take testing seriously enough to also track branch coverage, and even fewer go the extra mile of reaching 100% on that metric. SQLite is the only project I know of that does.
> 90-95% line coverage is exactly why many unit tests are garbage
Hard disagree. Line coverage is still a useful metric, and the only "garbage" unit tests are those that don't test the right thing. After all, you can technically cover a block of code without the test making any meaningful assertions. Or the test could make the right assertions but not actually reproduce the scenario correctly. And so on. Coverage only tracks whether the SUT was executed, not whether the test is correct or useful. Catching that is the reviewers' job.
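A contrived sketch of the "covered but not correct" failure mode (`clamp` and its test are hypothetical names, not from any real codebase):

```python
def clamp(value, low, high):
    # Bug: the arguments to min/max are swapped, so in-range
    # values get forced to the upper bound.
    return max(high, min(low, value))

# This test executes every line, so it contributes 100% line
# coverage for clamp. But its assertion is too weak to notice
# that clamp(50, 0, 100) returns 100 instead of 50.
def test_clamp():
    result = clamp(50, 0, 100)
    assert result is not None  # covers the code, catches nothing

test_clamp()
```

A proper assertion (`assert clamp(50, 0, 100) == 50`) would fail immediately; coverage alone can't tell these two tests apart.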
> and why many come up with the argument "I prefer integration tests, unit tests are not that useful".
No. Programmers who say that either haven't worked on teams with a strong testing mindset, haven't worked on codebases with high-quality unit tests, or are just being lazy. In any case, taking testing advice from such programmers would not be wise.