The nature of something like this is that the cost to run it naturally goes down over time. Old links get clicked less, so the hardware costs would be basically nothing.
As for the actual software security, it's a URL shortener. A single dev could rewrite the entire thing in almost no time, especially since it's strictly hosting static links at this point.
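To put the scale of that rewrite in perspective, here's a minimal sketch of a read-only shortener: a static mapping plus a redirect handler. The mapping, paths, and port are made-up examples, not anything from goo.gl.

    # Minimal sketch of a read-only URL shortener: a static mapping plus a
    # 301-redirect handler. The entries below are hypothetical examples.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LINKS = {
        "/abc123": "https://example.com/some/long/path",
        "/xyz789": "https://example.org/another/page",
    }

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            target = LINKS.get(self.path)
            if target:
                self.send_response(301)
                self.send_header("Location", target)
                self.end_headers()
            else:
                self.send_error(404, "Unknown short link")

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectHandler).serve_forever()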
It probably took them more time and money to find inactive links than it'd take to keep the entire thing running for a couple of years.
My understanding, from conversations I've seen about Google Reader, is that every few years Google rolls out a new wave of internal infrastructure, which necessitates upgrading a bunch of things across all of their products.
I guess that might be things like some new version of BigTable or whatever coming along, so you need to migrate everything from the previous versions.
If a product has an active team maintaining it they can handle the upgrade. If a product has no team assigned there's nobody to do that work.
Best analogy I can think of is log-rolling (as in the lumberjack competition).
What does happen is APIs are constantly upgraded and rewritten and deprecated. Eventually projects using the deprecated APIs need to be upgraded or dropped. I don't really understand why developers LOVE to deprecate shit that has users but it's a fact of life.
Second-hand info about Google only, so take it with a grain of salt.
As such, developing a new API gets more brownie points than rebuilding a service to do a better job of providing an existing API.
To be more charitable: a new API might incorporate the lessons learned from an existing one and do a better job serving various needs. At some point it stops making sense to support older versions of an API, since multiple versions with multiple sets of documentation can be really confusing.
I'm personally cynical enough to believe more in the less charitable version, but it's not impossible.
Arrival of new does not necessitate migration.
Only departure of old does.
But it's worse than that, because they'll bring up whole new datacenters without ever bringing the deprecated service up, and they also retire datacenters with some regularity. So if you run a service that depends on deprecated services, you can quickly find yourself having to migrate to maintain N+2 redundancy while hardly any datacenter has capacity left for the deprecated service you depend on.
Also, how many man-years of engineering do you want to spend on keeping goo.gl running? If you were an engineer, would you want to be assigned this project? What are you going to put in your perf packet? "Spent 6 months of my time and also bothered engineers in other teams to keep this service that makes us no money running"?
If you're high-flying, trying to be the next Urs or Jeff Dean or Ian Goodfellow, you wouldn't, but I'm sure there are many thousands of people who are able to do the job and would just love to work for Google, collect a paycheck on a $150k/yr job, and do that for the rest of their lives.
This seems like a good eval case for autonomous coding agents.
You know how Google deprecating stuff externally is a (deserved) meme? Things get deprecated internally even more frequently and someone has to migrate to the new thing. It's a huge pain in the ass to keep up with for teams that are fully funded. If something doesn't have a team dedicated to it eventually someone will decide it's no longer worth that burden and shut it down instead.
How? Barring a database leak, I don't see a way for someone to simply scan all the links. Putting something like Cloudflare in front of the shortener with a rate limit would prevent brute-force scanning. I assume Google built the shortener semi-competently (using a random number generator), which would make it pretty hard to find links in the first place.
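For a concrete sense of what a random generator buys here, a sketch under assumed parameters (6-character base-62 codes and a 100 req/s rate limit are my assumptions, not goo.gl's documented scheme): codes drawn from a CSPRNG leave no sequence to walk, so a scanner is guessing blindly across the whole keyspace.

    # Sketch of random short-code assignment (assumed 6-char base-62 codes).
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits   # 62 characters
    CODE_LEN = 6                                       # assumed length

    def new_code() -> str:
        """Draw a random code, e.g. 'h3ZQa9'; there's no order to enumerate."""
        return "".join(secrets.choice(ALPHABET) for _ in range(CODE_LEN))

    keyspace = len(ALPHABET) ** CODE_LEN               # 62**6 ~= 5.7e10 codes
    rate_limited = 100 * 86_400                        # 100 req/s, per day
    print(new_code())
    print(f"{keyspace:,} codes; ~{keyspace / rate_limited / 365:.0f} years to scan at 100 req/s")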
Removing inactive links also doesn't solve this problem. You can still have active links to secret docs.
Back when it was made, shorteners were competing to see who could make the shortest URL, so I bet a brute force scan would find everything.
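Rough numbers (the code lengths and the unthrottled 1,000 requests/second scan rate are assumptions) show why that would matter:

    # Back-of-the-envelope: time to enumerate every code at a given length.
    # The lengths and the 1,000 req/s scan rate are assumptions.
    for length in (4, 5, 6):
        keyspace = 62 ** length
        hours = keyspace / 1_000 / 3_600
        print(f"{length}-char codes: {keyspace:,} possibilities, ~{hours:,.0f} hours to scan")

That works out to roughly 4 hours for 4-character codes, ~250 hours for 5, and ~16,000 hours for 6, so codes from the shortest-URL era really could be scannable if nothing throttles the requests.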
If they have a (passwordless) URL, they're not secret.