The URL parser integration was a proof of concept. It doesn't really improve anything (aside from a slight security benefit from using Rust), so there wasn't pressure to land it; it was mostly a way of trying out the then-new Rust integration infrastructure, and of inspiring better infrastructure.
One of the folks on the network team started it, and I joined in later. But they got busy, and I moved on to Stylo. So the code exists and it works, but there's still work to be done to enable it, and not much impetus to do that work.
This work is mostly:
- Ferreting out where Gecko and Servo don't match so that we can pass all the tests. We've done most of this already; what's left is cases where Gecko doesn't match the spec, and we need to figure out how we want to fix those.
- Performance -- the integration currently does some naive things with serialization (among other areas) because it was a proof of concept. This will need to be polished up so we don't regress performance.
- Telemetry -- before shipping, we need to run it on Nightly in parallel with the existing parser and measure how often the two disagree.
It's not much work, but everyone is busy.
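The telemetry step is essentially differential testing: parse the same input with both parsers and record how often they disagree. A minimal sketch of the idea in Python, using `urllib.parse.urlsplit` and the generic URI regex from RFC 3986 Appendix B as stand-ins for the two parsers (the real comparison would run inside Gecko, between the C++ and Rust parsers):

```python
import re
from urllib.parse import urlsplit

# Generic URI-splitting regex from RFC 3986, Appendix B.
RFC3986_RE = re.compile(
    r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?"
)

def rfc3986_split(url):
    """Split into (scheme, authority, path, query, fragment) per the RFC regex."""
    m = RFC3986_RE.match(url)
    return (m.group(2), m.group(4), m.group(5), m.group(7), m.group(9))

def mismatch(url):
    """True when the two parsers disagree about any component."""
    s = urlsplit(url)
    stdlib = (s.scheme or None, s.netloc or None, s.path,
              s.query or None, s.fragment or None)
    return stdlib != rfc3986_split(url)

# urlsplit normalizes the scheme to lowercase; the raw regex does not,
# so the second input counts as a disagreement between the parsers.
urls = ["http://example.com/a?b=c#d", "HTTP://example.com/"]
mismatch_rate = sum(mismatch(u) for u in urls) / len(urls)  # 0.5 here
```

In the shipping version, each disagreement would be reported as a telemetry probe rather than computed over a fixed list, but the shape of the measurement is the same.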
It's complex because URLs are complex; I believe this is the correct RFC: https://tools.ietf.org/html/rfc3986 It's 60 pages long.
(That said, page length is only a proxy for complexity, of course)
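For a taste of the generic syntax, RFC 3986 §3 divides a URI into five components: scheme, authority, path, query, and fragment. Python's `urllib.parse.urlsplit`, which follows that generic syntax, shows the split on the RFC's own example URI:

```python
from urllib.parse import urlsplit

# The example URI from RFC 3986, Section 3.
parts = urlsplit("foo://example.com:8042/over/there?name=ferret#nose")

assert parts.scheme == "foo"
assert parts.netloc == "example.com:8042"  # the RFC's "authority"
assert parts.path == "/over/there"
assert parts.query == "name=ferret"
assert parts.fragment == "nose"
```

The 60 pages come from everything this simple split glosses over: percent-encoding, relative-reference resolution, normalization, and per-scheme rules.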
Never again: it's not just that the spec is 60 pages long, but that the actual behaviour out in the real world is miles away from the spec. The web is a complex place where standards are... rarely standard.
Old IE versions had a hard URL length limit and were very picky about which characters they allowed in domain names; both limitations shipped as "security fixes" (and both broke the standards).