Her paper doesn't pass the sniff test for me whatsoever when it comes to security analysis. She spent close to no time analyzing the primitives she introduced (and with no proofs or rigor!), meanwhile the thing is 58 pages because she takes the time to explain what "determinism" and "seeds" are to her audience.
"Exposition" is, in my opinion, a fully valid reason to reject a paper. I'm not going to sit and read a 60-page paper that could have been compressed to 10 pages if the author just got to the point and assumed her audience understood the field well enough to assess the results. Rewrite it and resubmit without the assumption that your readers need to be re-taught everything they'd need to know to properly assess your result. It's not as though the committee rejected the paper on empirical grounds without a meritocratic review; they rejected it because they have a finite amount of time and (speaking as someone in the field) it's genuinely tedious to read after page 10.
I think academia frequently gets lost in the ivory tower and loses touch with what an accessible paper looks like; this paper is not an answer to that problem, it's a swing in the other direction. If this becomes the norm, papers with truly novel results will suddenly run to hundreds of pages, with tens of pages of setup.
We're already there with Inter-universal Teichmüller Theory (https://en.wikipedia.org/wiki/Inter-universal_Teichm%C3%BCll...), the entire mathematical field singlehandedly created by Mochizuki to prove the ABC Conjecture. Mochizuki worked in isolation and astounded the world by revealing all of this at once.
Now, the world's top mathematicians have been so impressed with whatever they've managed to understand from Mochizuki's papers that big efforts are being made to unravel it, getting Mochizuki to teach lectures, etc. As best I know, the entire thing still hasn't been independently verified -- it's that tall a stack of novel mathematics.
Now: we can chastise Mochizuki for not playing within the ordinary rules of math research (publish ongoing research, etc.), or acclaim him as a genius having produced fundamental, discontinuous advances in his field... or we can do both.
I wouldn't blame people for holding off on "using" Mochizuki's results (they're too abstract for that, but anyway) because the whole thing is still so obscure. What's more: I wouldn't hold up skepticism of Mochizuki as an example of academia being too self-referential. The academic rat race is supposed to keep these things from happening by imposing some structure on the production of knowledge.
In patents there's the concept of enablement: did the inventor enable/teach the idea? Yes she did. Did she flesh out every idea? No she didn't.
This was rejected on style grounds. Ignoring it then on 'security analysis' is not improving matters; it certainly isn't helping security. Do the security analysis.
Peer-reviewed journals and conferences have (typically explicit) style guides. The argument that peer review arbitrarily rejects papers on unsubstantive grounds is a strawman; review committees do not claim that meritocratic, results-driven analysis is the only barrier to entry.
>Ding it for not analyzing her primitives but don't then call that security analysis. Do the security analysis.
Authors introducing novel results with cryptographic considerations typically perform their own analysis and publish it in the paper alongside the result. I wasn't presenting my observation of her insufficient analysis as if it were a formal analysis on my part; I was observing (correctly) that she didn't do enough of her own formal analysis. There is a baseline of author-provided, proof-based assurance that 1) is considered in the peer review process and 2) forms the foundation for formal cryptanalysis by peers in the community. In other words, there isn't yet enough for cryptanalysts to attack (or, more precisely, the author has put the onus of analysis on other researchers instead of providing specific, rigorous claims which they can empirically refute).
It comes down to respecting time. In attempting to be accessible, the author is not being respectful of other researchers' time. We don't need to have the birthday paradox explained to us; we've understood that since Intro to Statistics. The paper is 58 pages because, "think of the unwashed masses who can't understand our work!", but in attempting to appeal to those beyond the ivory tower, it's just become circuitous and over-indulgent. She didn't have to write in this style to make it more accessible (and the relevant mathematics has a lower bound on how accessible it can be, anyway).
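For what it's worth, the entire birthday bound fits in a few lines; a quick Python sketch (function names are mine, not from the paper) makes the point that it doesn't need pages of exposition:

```python
import math

def birthday_collision_prob(k, n):
    """Exact probability that at least two of k samples drawn
    uniformly at random from n values collide."""
    p_unique = 1.0
    for i in range(k):
        p_unique *= (n - i) / n
    return 1.0 - p_unique

def birthday_approx(k, n):
    """Standard exponential approximation: 1 - exp(-k(k-1)/(2n))."""
    return 1.0 - math.exp(-k * (k - 1) / (2 * n))

# Classic instance: 23 people, 365 birthdays -> just over 50%.
print(birthday_collision_prob(23, 365))
print(birthday_approx(23, 365))
```

That's the whole idea; everything beyond this in an intro is exposition.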
It is not respectful of qualified cryptanalysts' time to provide no precise proofs for your claims. She uses phrases like "PCG is a middle ground between security and performance." We cannot analyze that, and it's not for cryptanalysts to review every single claim that comes across their desks. This is why the author of xorshift+ didn't even provide a cryptanalysis in his scathing review.
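And the object itself is tiny, which is what makes the vagueness frustrating; here's a Python transcription of the minimal pcg32 (XSH-RR) reference generator, constants as published -- hedged as my own transcription, not the author's code:

```python
MASK64 = (1 << 64) - 1
MULT = 6364136223846793005  # 64-bit LCG multiplier from the reference code

class PCG32:
    """Minimal pcg32: 64-bit LCG state, 32-bit XSH-RR output permutation."""

    def __init__(self, seed, seq):
        # Stream selector must be odd (reference seeding procedure).
        self.inc = ((seq << 1) | 1) & MASK64
        self.state = 0
        self.next()
        self.state = (self.state + seed) & MASK64
        self.next()

    def next(self):
        old = self.state
        # Plain LCG step advances the state.
        self.state = (old * MULT + self.inc) & MASK64
        # Output permutation: xorshift-high, then a random rotate
        # selected by the top 5 bits of the old state.
        xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF
        rot = old >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF
```

A "middle ground between security and performance" claim about something this small is exactly the kind of statement that needs a precise adversarial model attached before anyone can attack it.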
If you're not going to sufficiently specify your claims, don't be surprised when the academic community ignores them (even if they're valid!). You are arguing from first principles about meritocracy and the fairness of style-based assessment; I am arguing that, as a practical matter in the research community, there is a reason this is not how novel cryptographic primitives get introduced.