this would not help without changes in at least one of the other layers I mentioned (Compositor, Window Manager, Apps) to either ignore or handle these events

Yes, certainly. But it's basically one or two lines of code with zero configuration knobs:
/* Compositor/app side: drop synthetic events once focus has changed. */
if (ev->is_synthetic && focus_changed_since_last_nonsynthetic) return;
Compare this to the current state of affairs, where every toolkit/compositor/app has to reimplement momentum scrolling on its own. That is multiple pages of fairly tricky code and several configuration knobs -- all of them replicated (differently) in each different thing that sits on top of libinput.
why should it be implemented in libinput?
Because libinput is the "narrow waist" in the input stack.
Doing this in libinput puts all the new configuration knobs, and all but one or two lines of the code (the "if focus changed then drop" check), in the one place where they can live without having to be duplicated.
Any higher up the stack and you end up replicating this functionality.
Issues would spread over two projects, instead of one and getting pushed back and forth.
I don't think so. If the problem is that a marked-as-synthetic event is/isn't being dropped when it shouldn't/should, that's the problem of some layer above libinput. Any other problem is libinput's. I know nobody wants to own these problems, but we can't make them disappear -- somebody has to own them. If they are owned at or below the libinput layer we can fix each bug once and be done. If we fix them anywhere above the libinput layer we will end up having the same bugs occur in multiple independent implementations and they will have to get fixed independently.
in my opinion it would have been better to implement it everywhere (with optional disable) instead of nowhere
I don't agree with this either. This "multiple reimplementations of the exact same thing" is becoming entrenched! We could get stuck in this situation, permanently. This isn't a question like "what should a titlebar look like" where different toolkits can legitimately have different opinions about it.
In my opinion the App layer is the wrong place
I agree.
Some part of this has to be done somewhere at-or-above the compositor layer, since those are the only layers that are aware of focus changes. Unfortunately there seems to be no traction for e.g. Wayland to require that all compositors do this. So the replication gets amplified even further by pushing it up into every compositor or toolkit. That's why it's so important to limit the size of the bit that gets pushed up the stack to being two lines of code.
Perhaps a viable compromise would be to insert a new layer above libinput. For argument's sake, call it "libgesture" (*). It's responsible for consuming libinput events and emitting synthetic events like momentum scrolling. It would have exactly the same API as libinput, except events would have an additional `is_synthetic` field. I've actually built something similar to this (but at the kernel evdev level) for my own use.
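A rough sketch of what that event and the consumer-side check could look like. Every name here is hypothetical (`struct lg_event`, `should_deliver`); this is not a real libinput or libgesture API, just an illustration of how small the per-compositor burden becomes:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical libgesture event: same shape as a libinput scroll
 * event, plus one extra field.  Illustrative only. */
struct lg_event {
    uint32_t time_usec;
    double   scroll_dy;
    bool     is_synthetic;   /* true for gesture-generated events,
                              * e.g. momentum-scroll continuations */
};

/* The entire per-compositor burden: drop synthetic events that
 * arrive after the focus changed. */
static bool should_deliver(const struct lg_event *ev,
                           bool focus_changed_since_last_real_event)
{
    if (ev->is_synthetic && focus_changed_since_last_real_event)
        return false;
    return true;
}
```

Real input events always pass through; only the synthesized continuations get dropped on a focus change, so the gesture logic and all its knobs stay below the compositor.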
Perhaps that would make everybody happy.
(*) This is unrelated to `libei` -- I have this sinking feeling that many apps (especially chromium) will reject any event that smells "emulated" to them "because muh bots" or something stupid like that. Trying to use libei for this is likely to backfire once the chromium devs decide that "emulated" means "bad for Web Environment Integrity". The indicator for gesture-synthesized events like momentum scrolling really needs to be completely separate from any marker indicating that the event was in any way emulated.