The system has to be brainless to use if it is going to succeed at all, and it has to have enough initial buy-in to be worth bothering with.
Wikipedia, as you have pointed out, has arcane policies that restrict user-generated content. It claims to be nothing more than a summary of already-published sources (the "no original research" rule). That isn't really true in practice, but it is a pain in the butt to actually get valuable content into it and not have it removed.
Usenet suffers from having no structured data beyond basic headers; posts are just chunks of text.
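For contrast, here is a minimal sketch of the kind of structured record such a system could store instead of a raw text blob. The field names are hypothetical, not from any existing standard:

    import json

    # Hypothetical structured record for one piece of content.
    # Field names are invented for illustration; the point is that
    # every item carries machine-readable metadata, unlike a Usenet post.
    record = {
        "title": "Public-domain scan of a 1910 engineering handbook",
        "category": "600",  # code from a shared taxonomy (sketched further down)
        "sha256": "placeholder-content-hash",  # lets mirrors verify the file
        "license": "public-domain",
        "size_bytes": 48103221,
        "mirrors": ["https://mirror.example.edu/handbook.pdf"],
    }
    print(json.dumps(record, indent=2))

With records like that, clients can search, filter, and verify content instead of scraping free text.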
GitHub suffers from discouraging binaries in repositories; it does host binaries attached to Releases (and large files via Git LFS), but it is built around source code, not general file hosting.
BitTorrent suffers from still being attached to tracker hostnames for the most part (I'm aware of the distributed alternative, the Mainline DHT, but you typically can't find many seeds through it alone).
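To make the distinction concrete, here is a sketch of the two addressing styles. The infohash and tracker URL are made-up placeholders:

    # A tracker-based magnet link names a specific host that can go
    # down or be seized; the "tr" parameter is a hypothetical tracker.
    tracked = ("magnet:?xt=urn:btih:0000000000000000000000000000000000000000"
               "&tr=http://tracker.example.com:6969/announce")

    # A trackerless magnet link carries only the infohash; clients fall
    # back to the DHT to find peers, so no hostname is involved at all.
    trackerless = "magnet:?xt=urn:btih:0000000000000000000000000000000000000000"

The second form is fully hostname-free, which is exactly the property the rest of the system would need.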
Most systems that allow user content end up hosting copyrighted data. The goal here would be for it to be publicly known that the system contains no copyrighted data, so that universities and similar institutions would be willing to run the distributed server, and censorship could be resisted by enough people running it globally.
I'm not focused on the funding of libraries so much as the fact that they have an established set of categories to file information under (Dewey Decimal, Library of Congress). There is no such standardized list of categories for websites to go into, and creating one is important to the future of the internet.
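As a rough illustration, a shared taxonomy could start as nothing more than a published table of codes, loosely modeled on how library classification schemes carve up knowledge. The codes and labels below are invented for illustration:

    # Hypothetical top-level category codes, in the spirit of Dewey's
    # ten main classes. A real scheme would be hammered out publicly.
    TAXONOMY = {
        "000": "general reference",
        "100": "philosophy and psychology",
        "300": "social sciences",
        "500": "natural sciences and mathematics",
        "600": "technology and applied sciences",
        "700": "arts and recreation",
    }

    def categorize(code: str) -> str:
        # Look up the label for a category code; unknown codes fall
        # back to "uncategorized" rather than being rejected.
        return TAXONOMY.get(code, "uncategorized")

    print(categorize("600"))  # -> technology and applied sciences

The hard part isn't the data structure; it's getting enough people to agree on one list, which is the same buy-in problem mentioned at the top.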