IIRC (and I fear I don't RC), the web industry hated this idea because they couldn't control and monetize the conversation around their content. Of course, we now do this exact thing with sites like Slashdot, Digg, and Hacker News, only with the extra step of going to the social site to see the popular links.
If you go to a page and wonder what HN has to say about it, you have to search for the URL yourself. (Perhaps there is a browser add-on that does this?)
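One way to do this lookup today is the public Algolia HN Search API, which can restrict a query to the story URL field. A minimal sketch (the example URL is a placeholder):

```python
import json
import urllib.parse
import urllib.request

ALGOLIA_SEARCH = "https://hn.algolia.com/api/v1/search"

def build_search_url(page_url: str) -> str:
    """Build an Algolia HN Search query restricted to the story URL field."""
    params = urllib.parse.urlencode({
        "query": page_url,
        "restrictSearchableAttributes": "url",
    })
    return f"{ALGOLIA_SEARCH}?{params}"

def hn_discussions_for(page_url: str):
    """Fetch matching stories; each hit's objectID corresponds to a
    discussion at https://news.ycombinator.com/item?id=<objectID>."""
    with urllib.request.urlopen(build_search_url(page_url)) as resp:
        data = json.load(resp)
    return [(hit["title"], hit["points"], hit["objectID"])
            for hit in data["hits"]]
```

A browser add-on could do exactly this on page load and badge the toolbar when discussions exist.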
Moving comments to a layer where you can annotate the document itself, not just discuss the page as a whole, is the next step for sites like HN. Of course, there is the pesky problem of sites hating this and using every legal tool in their arsenal to prevent you from presenting it as an interface.
Maybe with the current generation of browsers there is another way to achieve the same effect.
On websites with lots of visitors this would likely not work, but you could limit the visible entries to a slice of the population selected by certain criteria (location or other demographic data).
However rbutr isn't really what you've described here. What you've described is either
Oh, also, Hypothes.is had previously put together a spreadsheet on Google Docs listing all of the previous efforts to make web annotation applications. It hasn't been kept up to date, though, so I keep a local copy current now, just in case it is ever lost: https://docs.google.com/spreadsheet/ccc?key=0Aujm_HldNh4WdHJ...
So it says about 10,300 active plugin users.
A more appropriate plugin for the OP's question is Hypothes.is. They're open source, non-profit and have raised over $1.5 million. They have about 7,300 users according to the chrome webstore: https://chrome.google.com/webstore/detail/hypothesis-web-pdf...
Medium has annotations too, and IIRC newer versions of IE have this functionality built-in so that it works for any site.
That said, as the OP points out, web pages (and your annotations with them) change or disappear all the time, so if you need to keep something as a reference in your "memex", you may want to scrape it (Evernote etc).
I think this post takes a narrow view of what constitutes a trail and link. From listicles, Pinterest boards, and even your Facebook feed, people use the internet to connect different links and articles. This process of curation occurs at the personal level (Pinterest) and at scale (Wikipedia).
There are, indeed, people who make a living from annotating and associating articles and information (see the HN front page and brainpickings).
This is a hint. People interested in this topic should study the work of Ted Nelson and Project Xanadu. Ted's visionary ideas permeate our thinking today.
Yes and no. A mutated version of them that turns out to work better, perhaps.
Taking Wikipedia as an example, editors build up an article piece by piece to create a long text article. However, much of the information inside the article can be better represented as data. Articles are rigid, and the text inside them cannot be manipulated easily. For example, instead of a long article, a biography can be represented as a timeline of events. That timeline of events (as data) can then be manipulated (filtered and sorted) by the end user to give whatever view they want. It's not just a matter of following a trail (as the Bush text says), but of collecting the information as you go.
Instead of acting as a database of facts or events, Wikipedia acts like a book (a paper encyclopedia). Sure it has interlinked pages, but that's where it stops. Because it acts like a book it seems acceptable to have its external links represented as footnotes in a reference section under the main text. Federated wiki runs into the same problem too because its focus is also on articles -- the result of collaboration is a page that cannot act as data.
But the web is not a book and both articles and footnotes (and lack of other multimedia features) are not native to the way it functions. I think there are many better solutions to this problem than going back to footnotes. The medium is the message and solutions need to stop trying to make the web work like a book, but to make it work for the web.
I have been working on much of the above on my site. I got round the footnotes issue by placing the source link on the verbs in the text, while internal linking is handled by nouns. http://newslines.org/blog/wikipedias-broken-links/
I'm also struggling to see how the Newslines implementation would be a better solution in the case of the Tom Hanks article referenced in your blog. I mean, would linking the word "immigrated" to a seven minute YouTube clip mostly not on the subject of Tom Hanks' ancestry reduce confusion? I think it would make it worse. The problem isn't the extra click, the problem is that the source is a fraction of a seven minute chatshow clip.
I agree with your points on timelines and data, but in fairness Wikipedia isn't and shouldn't try to be the whole internet. People who have the programming/UX skills to give a specific subset of content appropriate form and functionality are always going to be a step ahead of random strangers tweaking text.
I don't agree that the footnotes system is good for the reader or editor, because it obfuscates the link between the text and the source. It is very easy to add biased information to any wiki page. Let's say someone writes "Donald Trump calls McCain a war hero" but the reader thinks, "Hmm, that's not what I heard". On Wikipedia they have to go to the footnotes and then click through, losing track of where they were in the main article. How many people will do that? Not many. So the reader gets misinformation, loses trust in the site, and the bias remains.
Instead if a link direct to the video is added then the person will know straight away. Even better if the video is embedded directly, but that's another story.
I'll echo eponeponepon's comment: very sincerely, it's an elegant approach, but what happens with more than 2 links? In particular, (and this is already a concern with only the two links), do you have any UI cues that those links are separate? Because I'd be concerned (I'm paranoid about this on reddit) that, without additional UI cues, it seems like they're the same link. Amusingly I had this exact problem on your blog post, even though I'd already read the explanation: in the final "Barack Obama signs Minimum Wage executive order" I was expecting the orange link highlight on "Barack Obama signs" to be one link, and the black "Minimum Wage executive order" to be another, despite having just read subject vs verb linking.
Might I suggest two link classes with slightly different CSS colors? If you're already using automatic page formatting and link generation, you could just alternate between the two classes, thereby providing an immediate visual cue that they're different links.
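A sketch of what that alternation could look like in the page generator; the class names `link-a` and `link-b` are placeholders for whatever two styles (e.g. slightly different colors) the stylesheet defines:

```python
from itertools import cycle

def render_links(links):
    """Render adjacent links with alternating CSS classes so that
    consecutive links are visually distinct even when they abut.
    `links` is a sequence of (href, text) pairs."""
    classes = cycle(["link-a", "link-b"])
    return " ".join(
        f'<a class="{cls}" href="{href}">{text}</a>'
        for cls, (href, text) in zip(classes, links)
    )
```

Because the classes simply alternate, two adjacent links can never share a style, which gives the reader the boundary cue without any extra markup decisions by the author.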
It also only helps if the reader knows of this (and in my case, that's pretty much been me).
That doesn't mean such efforts aren't interesting - see eg:
But I think it is important to realize that both can be useful. Perhaps there is room for something that blends traditional hypertext/hypermedia and articles better. Fundamentally, I think such a system will be more like a complex (object) programming system than text. It would need to be somewhat self-organizing (e.g.: how would you like to present a list of published works: on a timeline? with short reviews? on a map?), and it would probably have issues similar to those of a large multi-user codebase.
I suppose lively-kernel.org might be one approach to making something like that (smalltalk-like js run-time in the browser with webdav for code/data storage).
I got round the footnotes issue by placing the source link on the verbs in the text, while internal linking is handled by nouns.
That's a very elegant idea; I congratulate you very genuinely. I wonder where you'd head if you had a third category of links, though? The single biggest problem that gnaws at me is what to do when a single (hotspot|link|icon) needs to point the reader to two completely unrelated remote resources (e.g. a scholarly edition of some text where notes on the original manuscript and editorial additions refer to the same segment of text, or worse, overlapping segments).
In the print world, the only options are this kind of thing[1][2], or eschewing markers in the text in favour of references in the notes - but carrying either over when so many more options are available electronically feels lumpen to me. Drop-down menus are an obvious solution, but have huge dependency on the display agent to behave properly.
[edit: layout, phrasing]
Person A claims that Person B did something, but Person C claims otherwise.
We use a different source link on each italicized verb. This works for us, due to the way we write posts, although there are some edge cases. It'd be interesting to see if it works for other sites with different requirements. The principle is to avoid footnotes and match each link's meaning directly with the text. As the other poster mentioned, there should perhaps be a color distinction, or some other identifier.
A file separate from the content could contain links and each one could be anchored to the top, bottom, or a particular phrase using ordinary regular expressions.
/ordinary regular expressions/href=https://en.wikipedia.org/wiki/Regular_expression
/$/text="This link is at the bottom";href=http://news.ycombinator.com
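This sidecar file format is hypothetical, but a parser for the two example lines above is straightforward: each line is /pattern/ followed by semicolon-separated key=value attributes, where the pattern is a regular expression locating the anchor phrase ($ meaning the end of the document). A minimal sketch:

```python
import re

# /pattern/attributes -- pattern is non-greedy so slashes inside
# attribute values (e.g. URLs) are not mistaken for the delimiter
LINE_RE = re.compile(r'^/(.*?)/(.+)$')

def parse_link_file(text):
    """Parse a hypothetical sidecar link file into a list of
    (pattern, attributes) pairs.  Blank lines and lines starting
    with '#' are skipped."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = LINE_RE.match(line)
        if not m:
            continue
        pattern, rest = m.groups()
        attrs = {}
        for pair in rest.split(";"):
            key, _, value = pair.partition("=")
            attrs[key] = value.strip('"')
        entries.append((pattern, attrs))
    return entries
```

An overlay client would then run each pattern against the document text and attach the link (or the literal `text` attribute, for anchors like `$` that match an empty position) at the first match.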
An observation: promoting 3rd party links to first-class status need not deny the author the ability to create hotspot links. I, for one, would configure my memex to always show the author's own links (of any style, as an overlay) and also any links that my friends and family might have created. I'm not sure how my memex could locate links made by people I don't know. But, I imagine that the federated wiki people or the DHT people or the BTC people might have some ideas.
For locating links/comments from people you don't know, I'd imagine a service like Google. Something that indexes the entire web and answers queries about which links overlay this document. I'd think they'd have to be ranked somehow. (Technically, Google already does this with "link:http://..." queries, but standard HTML links don't reference document fragments.)
[edit: you're right though; personally I'd have never bothered with the original title, and it really is a very thought-provoking piece - as so often anything is that talks about Engelbart, Bush &c.]