The paper you link to counts as a publication, but its reputation stands on its own; it has nothing to do with arXiv as a venue. Ideally that would be true of all papers, but it isn't: simply by publishing in certain venues, your paper automatically picks up a certain amount of reputation, depending on the venue.
We require a method of filtering such that a given researcher doesn't have to personally vet, in excruciating detail, every paper they come across, because there simply isn't enough time in the day for that.
Ideally such a system would provide a reputable, multi-dimensional score for each individual paper. How could those scores be calculated in a manner such that they're reputable? Who knows; that exercise is left for the reader.
In practice, "well, it got published in Nature" makes for a pretty decent spam filter. Follow that with metrics such as how many times the paper has been cited since publication, whether the citing authors are independent researchers who actually built directly on top of the work, and how many of those citing authors are from a different field.
We do require such a method. Isn't that what AI is for? It would work strictly as a filter: you still need to personally vet, in excruciating detail, every paper you rely on for your own work.
PageRank was a decent solution for websites. Can't we treat citations as a graph, calculate per-author and per-paper trustworthiness scores, update when a paper gets retracted, and mix in a dash of HN-style community upvotes/downvotes and openly-viewable commentary and Q&A by a community of experts and nonexperts alike?
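A minimal sketch of the first half of that idea, treating each citation as an endorsement flowing from the citing paper to the cited one and running standard PageRank power iteration over it. The toy graph, the paper names, and the crude "retraction" step are illustrative assumptions, not a real dataset or a complete system:

```python
def pagerank(citations, damping=0.85, iters=50):
    """citations: {paper: [papers it cites]}. Rank flows along each
    citation edge from the citing paper to the cited paper."""
    papers = set(citations) | {p for cited in citations.values() for p in cited}
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for citing, cited in citations.items():
            if cited:
                share = damping * rank[citing] / len(cited)
                for p in cited:
                    new[p] += share
            else:
                # Dangling paper (cites nothing): spread its rank evenly.
                for p in papers:
                    new[p] += damping * rank[citing] / n
        rank = new
    return rank

def retract(citations, paper):
    """Model a retraction crudely: drop the paper's node and every
    citation edge pointing at it, then the caller recomputes ranks."""
    return {p: [q for q in cited if q != paper]
            for p, cited in citations.items() if p != paper}

graph = {
    "A": ["C"],  # A cites C
    "B": ["C"],  # B cites C
    "C": [],     # C cites nothing
}
ranks = pagerank(graph)
```

In this toy graph, "C" is cited by both "A" and "B" and so ends up with the highest score, while the symmetric "A" and "B" tie. The per-author score, upvotes, and commentary layers are left out entirely; they're where the hard problems live.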
Just one example off the top of my head: how do you handle negative citations, e.g. a reputable author citing a known-incorrect paper in order to refute it? You need more metadata than we currently have available.
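One way that missing metadata could look is typed citations, roughly in the spirit of the Citation Typing Ontology (CiTO). The field names, intent labels, and weights below are hypothetical illustrations, not an existing schema; a naive signed score then lets a refutation subtract reputation instead of adding it:

```python
from dataclasses import dataclass

@dataclass
class Citation:
    source: str  # citing paper
    target: str  # cited paper
    intent: str  # hypothetical labels: "supports", "refutes", "mentions"

# Hypothetical weights: a refutation counts against the cited paper.
WEIGHTS = {"supports": 1.0, "refutes": -1.0, "mentions": 0.25}

def signed_score(citations, paper):
    """Sum the (signed) weight of every citation targeting `paper`."""
    return sum(WEIGHTS.get(c.intent, 0.0)
               for c in citations if c.target == paper)

cites = [
    Citation("A", "X", "supports"),
    Citation("B", "X", "refutes"),
    Citation("C", "X", "mentions"),
]
score = signed_score(cites, "X")
```

The catch, of course, is that today's bibliographies don't carry intent labels at all, so someone (authors? reviewers? the community?) would have to supply them.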
tl;dr just draw the rest of the fucking owl.
Upvotes, downvotes, and commentary? That's extremely complicated. Long term data persistence? Moderation? Real names? Verification of lab affiliations? Who sets the rules? How do you cope with jurisdictional boundaries and related censorship requirements? The scientific literature is fundamentally an open and above all international collaboration. Any sort of closed, centralized, or proprietary implementation is likely to be a nonstarter.
Thus if your goal is a universal system then I'm fairly certain you need to solve the decentralized social networking problem as a more or less hard prerequisite to solving the decentralized scientific literature review problem. This is because you need to solve all the same problems but now with a much higher standard for data retention and replication.
Very topically, I assume you'd need a federated protocol, and it would need to be formally standardized. It would also need a good story for data replication and archival, which pretty much rules out ActivityPub and ATProto as they currently stand, so you're back to the drawing board.
A nontrivial part of the above likely involves also solving the decentralized petname system problem that GNS attempts to address.
I think a fully generalized scoring or ranking system is exceedingly unlikely to be a realistic undertaking. There's no problem with isolated private venues (i.e. journals); we just need to rethink how they work. Services such as arXiv provide a DOI, so nothing stops us from building "journals" that are really just lightweight review platforms and don't host any papers themselves.
No, it is not. Don't throw the baby out with the bathwater. Zenodo is centralized, and that is fine; a system hosted by CERN would be universal enough for most purposes.
The truth is, most papers cannot stand on their own; they need a reputable venue. And while it is difficult to get into Nature, it is much more difficult to actually contribute something substantial to science. That's why we don't have a system like that.