If you don't mind educating me a little: what you have described is the exact opposite of what I have heard, both from people inside the telecom industry and outside it. Feel free to correct me at any point, but this is how I understand the current state of things:
There are two observable forms of "internet speed": latency and throughput. Latency is the time it takes for a packet to go from your computer to the server and back, and is (usually) measured in milliseconds. Throughput is the speed ISPs usually advertise, measured in mega(bits|bytes) per second.
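One small trap with those advertised numbers: ISPs quote mega*bits* per second, while download dialogs usually show mega*bytes*. A quick sketch of the conversion (the 100 Mbps figure below is just an example plan, not anything from the discussion):

```python
# ISPs advertise megaBITS per second; file sizes are shown in megaBYTES.
# The number you actually see while downloading is roughly 1/8 of the
# advertised rate (ignoring protocol overhead).
def mbps_to_megabytes_per_sec(mbps: float) -> float:
    """Convert an advertised megabits-per-second rate to megabytes per second."""
    return mbps / 8

print(mbps_to_megabytes_per_sec(100))  # → 12.5, i.e. a "100 Mbps" plan moves ~12.5 MB/s
```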
From my understanding, there is little that can be done about latency. You can minimize it, but doing so is prohibitively expensive for not much of a gain, especially when you consider that a good portion of the observable latency comes from the server processing the request. Instead, ISPs (and end users) focus on throughput as the primary measurement of speed.
Throughput is relatively easy to increase. The bigger the pipe, the more requests per second your computer can make, and the more data can come down the pipe at a time. Every few years some new advance is made, and you can lay down a new pipe (albeit at a rather high cost) and double your throughput.
So, with the two types of speed, there are two types of traffic. One is loading webpages and regular internet surfing: you make a request for a relatively small set of files (usually the website you are trying to view), and the magic fairies in the tubes find it and bring it back to you. On the other hand, you have throughput-intensive traffic: things like VoIP, streaming music, or any kind of large download. P2P falls into this category too, though it's a little different. With large transfers like that, latency becomes insignificant. Once the transfer begins, it is going to run more or less uninterrupted until it finishes. The only real effect the network has on it is the throughput, which determines how much of that data can come down the pipe at a time.
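A back-of-the-envelope calculation shows why latency stops mattering for big transfers: total time is roughly one round trip to get things started, plus size divided by throughput. The numbers below (a 50 Mbit/s link, 80 ms round trip) are hypothetical, picked just to illustrate the point:

```python
# Rough model: total transfer time ≈ one round-trip of latency to start
# the transfer, plus payload size divided by link throughput.
def transfer_time_sec(size_mb: float, throughput_mbps: float, latency_ms: float) -> float:
    """Size in megabytes, throughput in megabits/s, latency in milliseconds."""
    return latency_ms / 1000 + (size_mb * 8) / throughput_mbps

# 1 GB download on a 50 Mbit/s link: ~160 s of transfer, 0.08 s of latency.
print(transfer_time_sec(1000, 50, 80))   # latency is noise here
# A 50 KB webpage on the same link: latency is most of the wait.
print(transfer_time_sec(0.05, 50, 80))
```

For the big download, the 80 ms round trip is a rounding error; for the small page, it is the dominant cost, which is exactly the split between the two traffic types above.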
Now, from my understanding (as someone not in the field), the problems arise from traffic trying to cross the backbone. While the routers and such supplying the backbone are more than sufficient to keep latency down, the limitations appear when you get a whole lot of people streaming videos (for instance) at the same time. The backbone, while it can switch packets quickly, has (again, from my understanding) a relatively limited throughput. It is more than sufficient most of the time, but when everyone wants to see that new hamster-dancing video on YouTube, the backbone just does not have the throughput to serve more than a fraction of the users at the full capacity of their connections at once.
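That oversubscription point can be made concrete with some made-up numbers: if the sum of everyone's edge connections far exceeds the backbone link they share, only a fraction of them can run flat-out at the same time. The capacities below are hypothetical, chosen only to show the shape of the math:

```python
# If aggregate edge demand exceeds shared backbone capacity, only a
# fraction of users can use their full connection speed simultaneously.
def fraction_servable(backbone_gbps: float, users: int, edge_mbps: float) -> float:
    """What fraction of users can run at full edge speed at once."""
    demand_gbps = users * edge_mbps / 1000
    return min(1.0, backbone_gbps / demand_gbps)

# 100 Gbit/s of backbone shared by 50,000 users on 25 Mbit/s connections:
print(fraction_servable(100, 50_000, 25))  # ~0.08, i.e. about 8% at full speed
```

The rest of the users aren't cut off, of course; in practice everyone's stream just slows down as the link is shared.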
That, as I understand it, is the root of the problem. It would be simple to fix, if anything could possibly be simple to fix when it comes to networks. The trouble comes from the fact that there isn't one big mystical Backbone, owned by AT&T or Comcast, that everyone else connects to. Instead, it is made up of companies like Level3 and Cogent, which have peering agreements with each other. Since they're both companies, they both have bottom lines to meet, and they're primarily out to make a profit, not to make a nice internet for the users.
That brings me to the other point. I was under the impression that other places in the world had received far FEWER network subsidies, and that prices were kept down, and speeds up, because there was far more competition in those markets. Admittedly, that is largely because the connected areas in Asia and Europe are considerably smaller, and cheaper for the first investments. Because it was cheaper to get started, more companies did, so those regions ended up with considerably larger and more complex peering networks. Each network had to have an edge, so they were either faster or cheaper.
While the network companies want to expand their networks to more areas and serve more people, what the users want is a faster connection. This leads to the whole problem with net neutrality. The only ways for the consumer ISPs to get their networks faster for the users are to lay a lot more fiber of their own, spend a lot more on faster peering agreements, or simply pressure the users who use the most bandwidth to cut it out. As far as I can tell, the primary argument against net neutrality is something along the lines of: the peering centers do it to us, so why can't we do it to the users?
So, to summarize: my understanding is that the big problems arise from peering agreements. Heavily upgrading the backbone connections is in no one's best interest except the end user's. It has no advertising potential like increasing speed at the edges does, and it has little profit potential, since most of the large peering companies have a near monopoly over their regions/portions of the internet. They can simply say: pay us for our slow speeds, or go lay your own wires. There is no real reason to upgrade the network, because no one else has; you can't get an increase if only one segment is faster.
Did I get at least fairly close to it?