I know the UX is terrible right now (as in so unusable I don't even dare touch the server I set up myself). I know centralized solutions work out of the box, while distributed solutions barely work at all.
Of course bandwidth wouldn't solve our problems overnight. But if our ISPs were suddenly giving us the bare minimum, meaning symmetric bandwidth and a fixed public IPv6 /64 with no firewall, then at long last we'd have a business model for distributed stuff. It would get more people working on that UX problem, and that would solve the problem.
Eventually.
That said, I don't see how we can demand decent internet connectivity in the first place, since there is no usage to justify it. Looks like a chicken-and-egg situation.
In theory, upload traffic can be precisely prioritized, so that a long-running file upload uses any spare bandwidth yet adds no latency to other traffic (beyond the unavoidable packet-size/line-rate serialization delay). But easy, standard configs only give you best-effort shaping, even if you go out of your way to make them distinguish interactive ssh from an ssh bulk transfer. (And never mind download shaping, which really requires application-level involvement to get any predictability.)
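To put a number on that "pktsize/linerate" term, here is a back-of-the-envelope sketch with illustrative link and packet sizes (all values invented for the example, not measurements):

```python
# Toy numbers for strict-priority shaping on a consumer uplink (all values
# are illustrative). If interactive packets always jump the bulk queue, the
# worst extra delay they can see is one bulk packet already on the wire --
# the unavoidable packet-size / line-rate term.

LINK_BPS = 1_000_000          # 1 Mbit/s uplink
BULK_PKT = 1500 * 8           # full-size bulk upload packet, in bits
INTERACTIVE_PKT = 100 * 8     # small ssh keystroke packet, in bits

def serialize(bits):
    """Time (s) to put `bits` on the wire at LINK_BPS."""
    return bits / LINK_BPS

# Worst case: an interactive packet arrives just after a bulk packet
# started transmitting. It waits for that one packet, then goes ahead
# of the entire bulk queue.
worst_extra_delay = serialize(BULK_PKT)
own_delay = serialize(INTERACTIVE_PKT)

print(f"worst-case added latency for interactive: {worst_extra_delay*1000:.1f} ms")
print(f"interactive packet's own serialization:   {own_delay*1000:.1f} ms")
```

On a 1 Mbit/s uplink the bound is around 12 ms per full-size packet, which is why ideal shaping can make a saturated upload nearly invisible to interactive traffic.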
Static IPs could help as a nice crutch, in that they would let us start with current server-authoritative protocols and patch/configure them to cope with one-nine uptime. But conditions here are what they are, and static IPs are no panacea: I have the option for $6/mo, and don't take it. And I run into the same fundamental undistributed-service problems between my own machines on tinc.
So I don't see much point lobbying ISPs for larger upload without the software to make it useful, especially when it goes against their technical constraints. (I do have to wonder how the world would be different if digital computers had been invented before radio, making p2p dominant over broadcast.) And I think that software has to be developed under current conditions, with the understanding that most people don't want caring for a home server to be a prominent part of their lives.
(Now, maybe my tune would change if instead of 12M/1M DSL, I were on GigE where my upload throttling would be done by "the cloud" ;))
To be a peer on the internet, you need to be able to act as a server as well as a client. With current protocols, this means a public static IP (or a range thereof), symmetric bandwidth, and no spurious restrictions. The network must also be "clean": no deep packet inspection, no transformation of payloads, no NAT, no nothing. Just a dumb pipe that treats packets equally, for some definition of "equal". And if you have difficulty handling a particular kind of data, then just install bigger pipes. Congestion should be few and far between.
Some people would go even further, arguing that you also need an Autonomous System number. I tend to agree, though I believe it is less important. People should be their own ISP if possible (hint: it's difficult). One way to do it is to set up a non-profit and be a member.
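The "server as well as client" point can be shown in a few lines: a peer is just a process that both listens and dials out. This sketch uses loopback as a stand-in for a reachable public address (which is exactly the part NAT and dynamic IPs break in practice):

```python
import socket
import threading

# A peer listens (server role) and dials out (client role) from the same
# process. Loopback stands in for a public address here; on a real consumer
# link, the listening half is what NAT and firewalls tend to break.

def serve(listener):
    conn, _ = listener.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo:" + data)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# The same process also acts as a client -- here it simply dials itself.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"hello")
    reply = c.recv(1024)
print(reply)  # b'echo:hello'
```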
---
Now let's get more technical. The main reason why consumer upload should match consumer download is this:
Worldwide upload = worldwide download
(Modulo lost packets and multicast.)
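This identity is just conservation of bytes in transit: every byte someone downloads was uploaded by someone else. A toy accounting sketch (hosts and sizes invented) makes the point, and also shows that per-host totals can be wildly unbalanced even though the global sums match:

```python
import random

# Every transfer adds the same byte count to the sender's upload total and
# the receiver's download total, so summed over all hosts the two totals
# are equal (ignoring packet loss and multicast). Hosts and sizes invented.

random.seed(1)
upload = {h: 0 for h in "ABCD"}
download = {h: 0 for h in "ABCD"}

for _ in range(1000):
    src, dst = random.sample("ABCD", 2)    # pick a distinct sender/receiver
    size = random.randint(40, 1500)        # bytes in this transfer
    upload[src] += size
    download[dst] += size

print(sum(upload.values()), sum(download.values()))   # always equal
print(upload)      # individual hosts can still be heavy uploaders...
print(download)    # ...or heavy downloaders
```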
> I don't think there will be any demand for greater upload bandwidth until using that bandwidth isn't an event that totally swamps your connection and requires manual coordination with other users.

Actually, there's already a huge demand for upload right now: YouTube. Every time someone watches a video, Google must spend that much upload, without requiring the attention of the account owner. This makes the network unbalanced: lots and lots of data pour out of Google's network to the consumer networks, and little ever goes back.
This has economic repercussions. Basically, ISPs connect with each other in two ways. The first is a peering agreement: Alice can send data to Bob, Bob can send data to Alice, and no one charges for anything. Then you have transit ISPs, which charge for the data you send through them: that would be Eve charging Bob for transporting a packet he wants to send to Alice (when Alice and Bob aren't directly connected).
Normally, an ISP wouldn't charge for data it is the recipient of, because that's data it wants, after all. But that's not quite the case for consumer ISPs, whose purpose is to transport data to the consumers, so they charge the consumers themselves. Now here's the problem. If bandwidth were evenly distributed, a consumer ISP would send roughly as much data as it receives, which is perfect for peering agreements. Instead, they keep receiving more and more data. So they're tempted to act like a transit ISP and charge for the data they receive. Except that won't fly, since their own network is the destination of that data! Which is probably why they try the "middle ground" of merely slowing down data that isn't paid for, destroying any illusion of net neutrality in the process.
There is also a technical reason: a really peer-to-peer network would have data travel shorter distances, instead of, say, going back and forth through the capital even when two neighbours are talking to each other. That means fewer terabyte-meters (TB·m, terabytes multiplied by meters carried), which is ultimately cheaper.
---
> I have a static IP option for $6/mo
Your ISP is a crook. Static IPs cost them nothing: most people are connected at all times anyway.
> So I don't see much point trying to lobby ISPs for larger upload (esp when it goes against their technical constraints. […])
It doesn't go against their technical constraints. If it ever does, it is because of constraints they themselves put in place years ago with the advent of this infamously asymmetric DSL. It's their fault, and the onus is on them to update.
---
Overall, try to step back from your own situation and look at the big picture. This often yields different answers. (Example: if Joe Random wants more money, one way to get it is to become "more competitive". But if everyone gets more competitive, Joe Random will rank just as he did before.)
So, regarding the chicken-and-egg problem… Sure, if you suddenly had a static IP and smoking-fast upload, you might not behave any differently. But if everyone got them, you might have a market, and that might change things. If anything, it would make overlay networks more practical, and goodness knows what innovation could come out of that.
I don't see the way forward being based on IP addressing (plus DNS) as identity, which is ultimately what you're talking about. First, the end-to-end principle arose out of engineering concerns, and IP does nothing to preserve data opaqueness against a network that wishes to categorize traffic. And given that there is little money in transporting commodity bits, yet some of those bits are quite valuable (a work VPN session, say), there is an ever-present economic incentive for discrimination.
Referencing UL=DL doesn't really make sense. Even with an ideal buildout of multi-homed homes mesh-connected through each other, there is still going to be a network "core" with more long-haul bandwidth than the outskirts. If I wish to publish a file to many people, it makes more sense to send that data once to the core and fan out from there (whether via a server, multicast, or some new method).
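The fan-out argument is easy to quantify. With assumed numbers (a 1 GB file, 100 recipients, a 1 Mbit/s consumer uplink, and a core relay that can redistribute far faster than we can send), serving everyone directly takes a hundred times longer than uploading once:

```python
# Assumed numbers: 1 GB file, 100 recipients, 1 Mbit/s consumer uplink.
# The core relay's own fan-out time is neglected, since long-haul
# bandwidth there dwarfs the consumer uplink.

FILE_BITS = 1e9 * 8
UPLINK_BPS = 1e6
PEERS = 100

direct_hours = FILE_BITS * PEERS / UPLINK_BPS / 3600   # send 100 copies ourselves
via_core_hours = FILE_BITS / UPLINK_BPS / 3600         # send one copy to the core

print(f"serving every peer directly: {direct_hours:.0f} h")
print(f"one upload, core fans out:   {via_core_hours:.1f} h")
```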
My ISP is Sonic.net. I wouldn't call them crooks, and given the competition I wouldn't begrudge them an administration fee on a static IP. I mentioned it to point out that it isn't even worth $6/mo to me; combined with their deletion of logs after two weeks, having a fixed address is actually a net negative from my perspective.
So back to the real topic… I'm definitely trying to analyze the big picture, and I've come to the conclusion that IP-as-identity is a complete red herring. I don't see how it would encourage overlay networks, when the whole idea of an overlay network is to deprecate the underlying network protocol and layer something better on top. Overlay networks work just fine over dynamic IPs, and only need a few underlying long-lived identities for rendezvous.
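A minimal sketch of that rendezvous idea, with all names and the key format invented for illustration: peers carry a stable identity (here, a hash of a made-up public key), and one long-lived, findable host keeps nothing but a mutable ID-to-current-address mapping, updated whenever a peer's dynamic IP changes:

```python
import hashlib

# Sketch of rendezvous over dynamic IPs (all names illustrative). Peers are
# named by a stable identity derived from a public key; the rendezvous node
# stores only who-is-currently-where, never any payload data.

def peer_id(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()[:16]

class Rendezvous:
    """The one long-lived, findable host in the system."""
    def __init__(self):
        self.addresses = {}

    def announce(self, pid, addr):
        # Called by a peer whenever its dynamic IP changes.
        self.addresses[pid] = addr

    def lookup(self, pid):
        return self.addresses.get(pid)

rv = Rendezvous()
alice = peer_id(b"alice-public-key")

rv.announce(alice, ("203.0.113.7", 9000))     # Alice's address today
rv.announce(alice, ("198.51.100.4", 9000))    # ...her DHCP lease changed
print(rv.lookup(alice))  # peers still find her by the stable ID
```

Only the rendezvous host needs a long-lived, well-known location; everything else can churn addresses freely, which is the sense in which overlays don't need IP-as-identity.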
The way I see it, the real root of the problem is protocols based on authoritative servers, which place undue importance on the reliability of individual hosts, and therefore on their network links and administrators. As long as we rely on those, the benefits of locating them closer to the core and having them cared for by a third party will outweigh the downsides.