The stack is Django + Gunicorn behind nginx, PostgreSQL, and some Intercooler.js and vanilla JS to make the experience smoother.
I've also been trying to learn Elixir + Phoenix, as I find some of the concepts (e.g. LiveView) very promising.
Take [1] for example: it lists the source as [2]. However, the actual source cited in [2] is reported as:
"Department of Health and Human Services confirmed".
You might not want to turn it into a "wikipedia", but it would be nice to offer the following:
1) Wrong reports and right reports - a list of sources suggesting whether the claim is factually sound or not (these could be news reports, e.g. from CNN). Many sources on a given topic could be pulled together to support a claim.
2) "Authority sources", for example, DoJ or other official information distributors that are claimed in an article.
3) "Linked news sources" - These days, many news stories are rehashes of rehashes; sometimes there are four or five links in the chain before you get to the "authority" source (reported by The Verge, which was detailed by TechCrunch, which was first outlined by AReallyCoolBlog). It would be nice to have a "trail" of where the news/source/information came from, and how many links there are in the chain.
[1]: https://ontherecord.live/17
[2]: https://edition.cnn.com/2020/03/31/politics/drive-thru-coron...
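The "trail" idea in 3) could be modeled as a simple linked chain of sources. A minimal sketch in Python (all names and URLs here are illustrative, not anything the site actually implements):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Source:
    """One link in a reporting chain (names/URLs are made up for illustration)."""
    name: str
    url: str
    cites: Optional["Source"] = None  # the source this report is based on

def trail(source: Source) -> List[str]:
    """Walk back to the primary source, returning names in citation order."""
    names = []
    while source is not None:
        names.append(source.name)
        source = source.cites
    return names

# Example chain: The Verge -> TechCrunch -> AReallyCoolBlog (the primary source)
blog = Source("AReallyCoolBlog", "https://example.com/post")
tc = Source("TechCrunch", "https://example.com/tc", cites=blog)
verge = Source("The Verge", "https://example.com/verge", cites=tc)

chain = trail(verge)
hops = len(chain) - 1  # number of links before reaching the primary source
```

Displaying `chain` next to a quote would show readers both the trail and its length at a glance.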
1) Fake quotes do concern me, but for now I've settled on requiring a single reputable source, preferably the primary source (so in your example the DoH press release, if available, would actually be a better source). I've also provided a 'Report quote' button to let users flag fake quotes. I am planning to allow users to validate and/or post additional sources for quotes.
2) I've considered adopting a whitelist approach, but given I can't possibly know all the relevant sources for all the topics which can be covered, I'm just manually accepting quotes for now (the 'does it look credible to me?' test). Things might get even more interesting if/once I start getting quotes/sources on a topic I know nothing about or even in a language I don't understand.
3) Tracking down the primary source is definitely an issue I've already run into when trying to post the first quotes. Ideally, I would like to only accept 'primary sources', even though I know it takes extra work to provide them. Maybe implementing what you suggest in 1) would help with that.
Another feature I'd really like to implement is to automatically archive (on the Internet archive's Wayback machine) the source once I validate it, so that quotes don't lose their sources over time.
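The Wayback Machine exposes a simple save endpoint (`https://web.archive.org/save/<url>`), so the archiving step could be sketched like this with just the stdlib. This is a rough sketch, not the site's code, and it assumes the save endpoint redirects to the snapshot page, which it has historically done:

```python
import urllib.request

WAYBACK_SAVE = "https://web.archive.org/save/"

def save_request_url(source_url: str) -> str:
    """Build the Wayback Machine save URL for a source."""
    return WAYBACK_SAVE + source_url

def archive(source_url: str, timeout: float = 60.0) -> str:
    """Ask the Wayback Machine to capture source_url.

    Assumption: the save endpoint redirects to the snapshot
    (/web/<timestamp>/<url>), so the final URL after redirects
    is the archived copy we want to store alongside the quote.
    """
    with urllib.request.urlopen(save_request_url(source_url), timeout=timeout) as resp:
        return resp.geturl()
```

Storing the returned snapshot URL next to the original source at validation time would keep quotes verifiable even after the original page disappears.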
I disagree about promises though, I believe if you promise to (not) do something and then break that promise, you should be held accountable for it. Otherwise, people will keep promising more and more outlandish things, because you eliminate the downside.
I've tried to address this by only accepting quotes with a 'due date', so I can easily bring them back into the spotlight (top of the home page in the Open section) once they can be assessed.
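The due-date mechanic boils down to a filter-and-sort over quotes. A minimal sketch with illustrative names (the site itself presumably does this with a Django queryset rather than plain Python):

```python
from dataclasses import dataclass
from datetime import date
from typing import Iterable, List

@dataclass
class Quote:
    """A quote with a due date after which it can be assessed (illustrative model)."""
    text: str
    due_date: date
    assessed: bool = False

def due_for_review(quotes: Iterable[Quote], today: date) -> List[Quote]:
    """Quotes whose due date has passed and that haven't been assessed yet,
    oldest first - candidates for the top of the Open section."""
    return sorted(
        (q for q in quotes if q.due_date <= today and not q.assessed),
        key=lambda q: q.due_date,
    )
```

Running this on each page load (or on a schedule) is enough to bring overdue predictions back into the spotlight automatically.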
These days it seems impossible to make sense of all the predictions from so-called experts. Tracking the accuracy of the talking heads is a great place to start measuring just how valuable their information is.
On the technical side, the logs show over 6600 unique visitors in the last 12 hours, with over 1100 of them (and 18k requests) during the busiest hour. I know that's not too impressive, but it's still good to see the basic VPS running the server didn't break a sweat (load average peaked at around 0.1).
In the meantime, feel free to post content you're interested in, and contact me (e-mail is in my profile) if you have any feedback. Thanks again!
https://pragmaticstudio.com/phoenix-liveview
Those are the videos I've been watching just to get a better idea of the concepts -- I agree, they are well made. I actually found them via HN about a week ago.
I'm thinking of reading this once I finish the videos: https://pragprog.com/book/phoenix14/programming-phoenix-1-4
Just yesterday I had a talk with a friend in the PR industry. We were discussing how to select the best information sources, and we arrived at the idea that the future will be some kind of rating system based on expert/field-of-knowledge pairs. That got me thinking about creating some kind of automatic system for fact-checking claims in articles, at least numeric ones (e.g. based on world population, company valuations, and other public data). Do you have anything like this in your backlog?
Automating that sounds very attractive, but I wouldn't even know where to start. NLP? I've heard algorithms are sometimes used to automatically make trades based on news stories, so I imagine such a system could initially help moderators by providing suggestions/assessments?
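For the numeric-claims case at least, a first pass needn't involve heavy NLP: extract a number (with its scale word) from the text and compare it against a reference value within a tolerance. A toy sketch, where the reference table and tolerance are entirely made-up assumptions:

```python
import re

# Illustrative reference values; a real system would pull these from public data
REFERENCE = {
    "world population": 7.8e9,
}

NUMBER = re.compile(r"(\d+(?:\.\d+)?)\s*(billion|million)?", re.IGNORECASE)
SCALE = {"billion": 1e9, "million": 1e6, None: 1.0}

def parse_quantity(text):
    """Extract the first number (with an optional scale word) from text."""
    m = NUMBER.search(text)
    if not m:
        return None
    value, unit = m.groups()
    return float(value) * SCALE[unit.lower() if unit else None]

def check_claim(claim_value, reference_value, tolerance=0.05):
    """True if the claim is within `tolerance` (relative) of the reference."""
    return abs(claim_value - reference_value) <= tolerance * reference_value
```

Something this crude could still flag suspicious figures for a human moderator to review, which matches the "suggestions/assessments" framing above.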
It would be good to have a block list of people I can safely ignore.
I've been thinking a lot about what makes a prediction/promise testable. So far I've summarized my conclusions in the 'How it works' page, but if anyone has any references on this I'd love to hear about it.
Like, they will go on cable news and say, "I know about this stuff, and I guarantee that X will happen to Trump by June." And the commentator never says, "But you predicted that he wouldn't actually run for office (wrong), wouldn't win the nomination (wrong), wouldn't get elected (wrong), would get impeached by January (wrong), etc. Maybe you shouldn't be so sure in your predictions?"
These overly optimistic/negative press releases can also be damaging to public understanding over time.
I've started work on Reddit integration (/u/OnTheRecordBot -- it will be a bot you can call, similar to RemindMe!).
Tags are also already present in the backend, but not yet exposed. I imagine they will become necessary once the number of quotes/topics increases.
- Wikipedia authors (because there's some kind of threshold for relevance there)
- Twitter (because I can automate the validation with a bot -- see @OnTheRecordBot).
Will follow with interest.
Edit: Also, we try to follow our guidelines on what a promise/prediction should be; you can find them on the 'How it works' page.
Also, there are no user accounts so all submissions/votes are anonymous, in case anyone wants to submit something they're interested in.