You can't get around the problem of manipulation if your trustworthiness metric for content is the same for all people, as it is on Reddit, Hacker News, or Amazon. Having moderators just concentrates the issue in a smaller number of people, and it still doesn't solve the central problem--manipulation is profitable.
But think about how we solve this problem in our personal interactions with other people; that should be a clue for how to solve it with computational help. We have a pretty good idea of which people are trustworthy (or capable, or dependable, or any other characteristic) in our daily lives, and based on our interactions with them we update these internal measures of trustworthiness. If we need information from someone we don't know, we form a judgment of their trustworthiness from the input of people we already trust--e.g. by asking for a reference. This is really just Bayesian inference at its core.
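To make that concrete, here's a minimal sketch of the kind of Bayesian updating I have in mind, modeling each person's reliability as a Beta distribution. All of the names (`Trust`, `update`, `trust_via_reference`) are made up for illustration, and the vouching formula is just one plausible way to combine references:

```python
from dataclasses import dataclass

@dataclass
class Trust:
    """Beta-distributed belief about how reliable a source is."""
    alpha: float = 1.0  # pseudo-count of good interactions (uniform prior)
    beta: float = 1.0   # pseudo-count of bad interactions

    def update(self, was_good: bool) -> None:
        # Bayesian update: treat each interaction as a Bernoulli observation.
        if was_good:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def mean(self) -> float:
        # Expected trustworthiness under the Beta posterior.
        return self.alpha / (self.alpha + self.beta)


def trust_via_reference(referrers: list[tuple[Trust, float]]) -> float:
    """Estimate trust in a stranger from references.

    referrers: pairs of (my trust in the referrer, the trust level the
    referrer reports for the stranger), weighted by how much I trust
    each referrer.
    """
    total_weight = sum(t.mean for t, _ in referrers)
    if total_weight == 0:
        return 0.5  # no usable references: fall back to an uninformative prior
    return sum(t.mean * vouched for t, vouched in referrers) / total_weight


alice = Trust()
for outcome in [True, True, False, True]:
    alice.update(outcome)          # 3 good interactions, 1 bad
print(alice.mean)                  # ~0.67

bob = Trust(alpha=2, beta=3)       # someone with a spottier track record
print(trust_via_reference([(alice, 0.9), (bob, 0.4)]))  # Alice's vouch dominates
```

The important design point is that the posterior is *mine*: two people with different interaction histories hold different distributions over the same source, which is exactly what a one-size-fits-all score can't express.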
We should be able to come up with a computational model of how this personal measure of trustworthiness works. It would act as a filter over the content we consume. Throw a search engine on top of it, sure, but in the end you still need trustworthiness weights attached to information if you want the system to be manipulation-resistant. That labeling is what I mean by manual curation. You can't leave it up to the search engine or the aggregator, because those can be gamed--as the examples you gave show for aggregators, and as SEO shows for search engines.
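As a sketch of what that filter layer might look like, assuming content arrives with endorsements from identifiable users (again, `rank_content`, `my_trust`, and `default_trust` are hypothetical names, not anything real):

```python
from typing import Dict, List, Tuple

def rank_content(
    items: List[Tuple[str, List[str]]],  # (content_id, ids of endorsing users)
    my_trust: Dict[str, float],          # my personal trust in each user, in [0, 1]
    default_trust: float = 0.1,          # prior for users I know nothing about
) -> List[Tuple[str, float]]:
    """Rank content by the viewer's personal trust in its endorsers."""
    scored = []
    for content_id, endorsers in items:
        # Sum of trust rather than raw vote count: a thousand sockpuppets
        # I don't trust are worth less than one endorsement from a friend.
        score = sum(my_trust.get(user, default_trust) for user in endorsers)
        scored.append((content_id, score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


feed = [
    ("article-A", ["alice", "bob"]),             # endorsed by people I trust
    ("article-B", ["spam1", "spam2", "spam3"]),  # more votes, all strangers
]
print(rank_content(feed, my_trust={"alice": 0.9, "bob": 0.7}))
# article-A (score 1.6) outranks article-B (0.3) despite B's higher raw count
```

The same feed ranks differently for a viewer with a different `my_trust` map, which is what makes gaming it expensive: a manipulator has to earn trust person by person rather than farm a single global score.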