Actually, the TV Guide example uses data that updates constantly. Every single interaction a user has is recorded and used by other systems, and the guide itself changes based on user-specific views and preferences. Another example is Netflix's famous user-specific recommendations, which change constantly, are driven by a regularly fine-tuned algorithm, and are a strategic feature. Even just playing a single show requires a dozen different calls to authorize playback, based on a number of considerations.
Finding the number of retweets is also more difficult, because there's other data that gets recorded too. Not only do you have your own data, you now have the data of everyone who retweeted you. Is it your data, or theirs? Who is caching it, and for how long? How does refreshing the cache affect the consistency of each user's view? With decentralized applications you have to choose what kind of functionality you will support.
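To make the caching question concrete, here's a minimal sketch (the service names and TTLs are hypothetical, not anything Twitter actually does) of how two services caching the same retweet count end up showing different numbers to different users:

```python
import time

class CachedCount:
    """A toy TTL cache holding one number, e.g. a retweet count."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.value = None
        self.fetched_at = 0.0

    def get(self, fetch_from_origin):
        # Serve the cached value until the TTL lapses, then refetch.
        if self.value is None or time.time() - self.fetched_at > self.ttl:
            self.value = fetch_from_origin()
            self.fetched_at = time.time()
        return self.value

origin_count = 10                          # the "true" count at the source
service_a = CachedCount(ttl_seconds=60)    # hypothetical: refreshes every minute
service_b = CachedCount(ttl_seconds=3600)  # hypothetical: refreshes every hour
fetch = lambda: origin_count

print(service_a.get(fetch), service_b.get(fetch))  # 10 10

origin_count = 11  # someone retweets
# Service A will show 11 within a minute; service B keeps serving 10 for up
# to an hour, so users of the two services disagree about the same tweet
# until every cache has been refreshed.
```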
But, yes, in theory, if you allowed only one service provider to use some given data, you could rely on caching (read: holding a copy of the data indefinitely) to a good extent. But as soon as you have multiple providers using it, you enter the extremely hairy world of multi-master, high-availability, strongly consistent replication. AKA, absolute hell. And this isn't even the most difficult problem, to my mind.
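For contrast, here's a toy sketch of the multi-writer case (purely illustrative; no real replication protocol is this naive): once two providers can both accept writes to the same record, somebody has to resolve conflicts, and the easy resolutions quietly lose data:

```python
# Two "master" replicas each accept a write to the same profile field
# before they have a chance to sync with each other.
replica_a = {"display_name": "Alice"}
replica_b = {"display_name": "Alice"}

replica_a["display_name"] = "Alice Smith"   # write accepted by master A
replica_b["display_name"] = "A. Smith"      # concurrent write accepted by master B

def last_write_wins(a, b, a_ts, b_ts):
    # A common cop-out: pick whichever write carries the later timestamp.
    return a if a_ts >= b_ts else b

merged = last_write_wins(replica_a["display_name"],
                         replica_b["display_name"],
                         a_ts=100, b_ts=101)
print(merged)  # "A. Smith" -- the write to master A is simply discarded

# Avoiding that silent data loss means coordinating every write across
# providers (consensus, locks), which is exactly the latency/availability
# cost that makes multi-master strong consistency so painful.
```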
We already had some good data access standards. The question is, why weren't sites using them to allow data interoperability and mobility? Answer: they didn't want to. So even if you create a technical solution for all of this, the best you will get is the Facebooks of the world publishing a read-only calendar feed, some clunky, slow export tools, and single-feature, one-way application integrations. Like we have now.
I don't see an ethical or social reason to decouple the data from the services I use, and I don't think the majority of the world's population does, either. The only ethical/social concern I have is with the very existence of the service, which is a separate question.