Of course, that's not to say this isn't impressive work. Predicting which topics Twitter's proprietary algorithm will select as trending, without direct knowledge of the algorithm, before it selects them, and before all the tweets that cause them to be selected have even been posted, is impressive, and no doubt no easier than predicting more natural phenomena or emergent behaviours.
Could you point to any resources on time series analysis? While I am well acquainted with supervised/unsupervised learning methods for tasks like classification, anomaly detection, etc., analyzing time series is a different beast. And most machine learning literature (eosl?) doesn't seem to address time series data either.
But I think what you are asking is how such a method would come up with its own trends, given just a stream of tweets. This is a supervised approach (http://en.wikipedia.org/wiki/Supervised_learning), so for now, you would need to train it (possibly online) by giving it examples of what should be a trend and what shouldn't. It would be interesting to make it semi-supervised (http://en.wikipedia.org/wiki/Semi-supervised_learning) so that you would only need to provide a small number of labels.
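To make the supervised setup concrete, here's a minimal sketch (not the paper's actual method; the data, features, and weighting are all hypothetical). Each labeled example is a short window of per-hour tweet counts tagged 1 (became a trend) or 0 (didn't), and a new window is classified by letting nearby labeled examples vote more strongly:

```python
# Hypothetical supervised trend detector: nearest-neighbour voting over
# labeled tweet-count windows. All data and parameters are made up for
# illustration.
import math

def distance(a, b):
    # Euclidean distance between two equal-length count windows.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_trend(window, examples, gamma=1.0):
    # examples: list of (window, label) pairs with label in {0, 1}.
    # Each example votes with weight exp(-gamma * distance), so closer
    # examples dominate; returns True if the "trend" votes win.
    trend_vote = 0.0
    non_trend_vote = 0.0
    for ex_window, label in examples:
        w = math.exp(-gamma * distance(window, ex_window))
        if label == 1:
            trend_vote += w
        else:
            non_trend_vote += w
    return trend_vote > non_trend_vote

# Toy training set: sharply rising windows were trends, flat ones weren't.
examples = [
    ([1, 2, 5, 12, 30], 1),
    ([2, 3, 8, 20, 45], 1),
    ([3, 3, 4, 3, 4], 0),
    ([5, 4, 5, 5, 4], 0),
]

print(predict_trend([1, 3, 6, 15, 33], examples))  # rising window -> True
print(predict_trend([4, 4, 3, 4, 5], examples))    # flat window -> False
```

A semi-supervised variant would extend the `examples` list automatically from confident predictions on unlabeled windows, so only a small seed set of labels would need to be provided by hand.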
It sort of comes down to the question of what's really being learned here. Are they modeling some inherent process of topics becoming popular (or memes spreading in a population) that could be used in other situations, or are they just modeling some arbitrary algorithm that Twitter uses to mark some topics as "trends"? If they're just modeling Twitter's existing algorithm, then it's less interesting, because that algorithm already exists. Since they're able to detect the trend before Twitter does (well, before Twitter announces it, anyway), it seems like they're probably onto something more fundamental.
I would never guess that the pattern, when cut off just after 12, is indicative of a topic that's about to trend.