The core problem is that you're changing the semantics of that integer as you change types, and if that happens automatically then the compiler can't protect you from typos, vibe-coded defects, or any of the other ways kids are generating almost-correct code nowadays. You can mitigate that with other coding patterns (like requiring an explicit type parameter in any potentially unsafe arithmetic helper function and banning the builtins that aren't wrapped that way), but under the Swiss cheese model of error handling it still massively increases your risky surface area.
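As a minimal sketch of that mitigation pattern (the name `truncTo` and the convention itself are hypothetical; this assumes Zig's `@truncate` builtin):

    const std = @import("std");

    // Hypothetical convention: all narrowing goes through a helper that
    // demands the destination type at the call site, and bare @truncate
    // is banned outside this function.
    fn truncTo(comptime T: type, x: anytype) T {
        return @truncate(x);
    }

    test "narrowing always names its target" {
        const a: u64 = 42314; // 0xA54A
        const nib = truncTo(u4, a); // the u4 is visible at the call site
        try std.testing.expect(nib == 0b1010);
    }

Every narrowing site now names its destination type, so a reviewer (or a grep) can audit all of them.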
The issue is more obvious on the input side of that expression and with a different mask. E.g.:
    const a: u64 = 42314;
    const even_mask: u4 = 0b0101; // meant to select even bit indices
    const r = a & even_mask;
Should `a` be lowered to a u4 for the computation, should `even_mask` be promoted, or (however the internals are handled) should the result sometimes be lowered to a u4? Arguably none of the above. The mask was written to select even bit indices, but because it's only 4 bits wide the operation can only ever touch the low bits of `a`, which is almost certainly not what its author intended. The only safe instance of implicit conversion in this pattern is when you genuinely intend to extract just the low bits for some purpose.

What if `even_mask` is instead a comptime_int? You still have the same issue. That was a poor use of comptime ints, since now the implicit conversion will always happen, and you've lost the compiler errors that would catch a misuse of that constant.
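The same thing as a runnable sketch under current Zig semantics, where the untyped constant coerces to whatever it meets:

    const std = @import("std");

    test "a comptime_int mask coerces silently" {
        const a: u64 = 42314;
        const even_mask = 0b0101; // comptime_int, no fixed width
        const r = a & even_mask; // coerces to u64 with no complaint, ever
        try std.testing.expect(@TypeOf(r) == u64);
    }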
Back to your proposal of something that should always be safe: implicitly lowering `a & 15` to a u4. The danger is in using the result outside its intended context, and since we're working with primitive integers you'll likely have a lot of functions floating around capable of handling that result incorrectly, so you really want the value to carry the _right_ integer type and buy yourself at least a little type safety.
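For contrast, here's roughly how that reads under current semantics, where the narrowing has to be spelled out (`nibbleHelper` is a made-up stand-in for any u4-taking function):

    fn nibbleHelper(x: u4) u4 {
        return x +% 1; // stand-in for any u4-only logic
    }

    test "today the narrowing site is explicit" {
        const a: u64 = 42314;
        const r = a & 15; // r is still a u64 today
        // nibbleHelper(r); // compile error: u64 cannot coerce to u4
        _ = nibbleHelper(@intCast(r)); // the cast marks the decision point
    }

With the proposed implicit lowering, the commented-out line would compile, and nothing would flag that `r` was never meant to be a u4.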
For a concrete example, code like that (eligible for implicit lowering because the high bits are obviously zero to the compiler) is often used in fixed-point libraries. A fixed-point library, though, does those sorts of operations with the express purpose of keeping zeroed bits in a wide type so it can execute a chain of operations without loss of precision (what to do when those operations are finally coalesced and precision is lost is a meaningful design choice, but it's irrelevant right this second). If you're about to do any nontrivial arithmetic on the result of that masking, you don't want to accidentally hand it to a helper function with a u4 argument, but with implicit lowering there are no guardrails against exactly that. It requires the programmer to make zero mistakes.
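A minimal sketch of that fixed-point pattern, assuming a hypothetical 16.16 format (the `Q16` type and its layout are made up for illustration):

    const Q16 = struct {
        raw: u32, // 16 integer bits, 16 fractional bits

        fn mul(x: Q16, y: Q16) Q16 {
            // Widen first so the 32x32-bit product keeps every bit.
            const wide = @as(u64, x.raw) * @as(u64, y.raw);
            // Narrowing back is a deliberate, visible step: >> 16 drops the
            // extra fraction bits, @truncate drops overflow in the integer part.
            return .{ .raw = @truncate(wide >> 16) };
        }
    };

The one place precision is given up is that explicit final narrowing; with implicit lowering, the same narrowing could silently happen anywhere the intermediate value travels.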
That example might seem a little contrived, and this isn't something you'll run into every day, but every nontrivial project I've worked on has had _something_ like that, where implicit narrowing is extremely dangerous and also extremely easy to accidentally do.
What about the verbosity? IMO the point of verbosity is to draw your attention to code that deserves it. If you're in a module where implicit casting really would be totally fine, then make a local helper function with a short name to do the thing you want. Having an unsafe thing be noisy by default feels about right, though.
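Something like this, say (`low4` is a made-up module-local helper):

    // Module-local shorthand: the narrowing is still explicit at its
    // definition, but cheap to type at every call site.
    inline fn low4(x: u64) u4 {
        return @truncate(x);
    }

Inside the module, `low4(a)` is about as terse as an implicit cast would be, but it's still greppable when you want to audit narrowing sites.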