I'm talking about new projects at much smaller companies, or much smaller departments within the company, that don't yet know if the product they're building is even going to work.
Not to mention: I have the same freedom to try things out with a $10/mo DigitalOcean droplet as I do with EC2/Lambda/ELB/RDS/S3/etc... If something doesn't work, delete the droplet and start over. By that measure, it's even easier, cheaper, and freer to just test something out in Docker containers on my laptop.
So, respectfully, I'm not buying the "freedom" argument. That's not even the selling point AWS is pitching.
For my last gig I was hosting the web infrastructure for a $5B/yr enterprise with a low-5-digit Alexa rank on roughly $100/mo in EC2 instances. That was after converting the property to a static site built with Jekyll (you can do pretty amazing, magical-looking things with front-end JS calling out to APIs and nginx SSI while still serving static content) and a hosted WordPress ($25/mo) for the content editors (and CircleCI's free tier for builds -- woop woop). And we were insanely over-provisioned with those t2.smalls.
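For the curious, the nginx side of that can be tiny -- a minimal sketch, with made-up paths and hostnames:

    server {
        listen 80;
        server_name example.com;

        root /var/www/site;            # the Jekyll build output

        location / {
            ssi on;                    # expand <!--#include virtual="..." --> in static pages
            try_files $uri $uri/ =404;
        }

        # SSI fragments (and the front-end JS) can call an API behind the same host
        location /api/ {
            proxy_pass https://internal-api.example.com/;
        }
    }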
Procurement sucks. Having to test out new systems when you need to buy new hardware 2 years later also sucks. Having to wait behind much larger customers when Intel has a Xeon supply shortage sucks (2018 sucked if you had to buy servers).
The slightly more charitable answer is that watching your infrastructure automatically scale and adapt to the workload as the product takes off is a really fun and exciting moment, and people want to experience that. In the rare event that it does happen and you suddenly need to handle 10-100x more traffic, having things auto-scale up and then back down means that (in an ideal world) you get to watch excitedly, instead of furiously running around spinning things up and putting out the fires that arise when you start operating at larger scales.
Is it overkill to build your business like that from the get-go? Almost certainly (barring the situation where you have some kind of guarantee of incoming load), but people (want to) do it anyway.
What you're missing is that cost/complexity-wise, Lightsail and EC2 are equivalent. The only difference between them is the interface you get. Lightsail doesn't give you some pretty necessary knobs to kick things back into a working state when the underlying EC2 instances are having a bad day. In fact, the last time I used Lightsail and had a problem with unavailable instances, I just ran the EC2 API commands against the Lightsail instance IDs to solve my problem.
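Something like this (IDs/names are placeholders, and feeding a Lightsail ID to the EC2 API is exactly the unofficial part):

    # unofficial workaround described above: hit the EC2 API directly
    aws ec2 reboot-instances --instance-ids i-0123456789abcdef0

    # the supported route, for comparison, is the Lightsail API
    aws lightsail reboot-instance --instance-name my-instance-1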
DigitalOcean has some networking properties that make it extremely undesirable for some use cases, and Linode is frequently the target of massive global DDoS attacks. I remember the Christmas Eve Linode DDoS from a few years ago well, because I had to work 20 hours that day.
Thankfully it was only 20 hours because we were already in the process of moving off of Linode and we just decided to flip all of the switches to serve out of AWS. Most of the time was spent waiting on DNS TTLs.
Instance costs across all of the cloud providers are pretty competitive. Where AWS, Google Cloud, and Azure get more expensive is in those "extra" services where you'd otherwise be paying people to run the infrastructure (Elasticsearch, SQL databases, etc.). DO, Linode, etc. don't give you that option, so it's not an apples-to-apples comparison... and in most cases you shouldn't use some of these things anyway -- definitely not any service you couldn't pick up and move to other hosting tomorrow. Cloud vendor lock-in is real.
(DigitalOcean, to its credit, has offerings in those spaces -- they have an object store, they have hosted databases, they have a container service. And I use DigitalOcean for some personal projects. But you can actually run a Postgres DB on AWS Aurora Serverless for cheaper than you can run one on DigitalOcean, depending on your workload. It's not obvious that DigitalOcean is a better choice there.)
When you have a real finance department that vets each third party, or a governmental requirement to get three quotes for each new service (part of the rules meant to minimize grift in sourcing), having a single bill that can expand to cover new projects and services makes things a little simpler.
The flip side shows up during cost cutting: unless someone makes it a priority, that single bill will spiral out of control.
It's my understanding that you're describing how Amazon worked in the 90s. As I understand it, Amazon then moved to an internal cloud provider, and later spun off AWS, which ran in parallel with that internal provider.
Not at all. Amazon's transformation took years. Up until last year, amazon.com was Oracle's biggest customer. AWS development far outpaced their actual use of it.
Create an app with a Dockerfile in the root of the project, and it's a git push away from being deployed (a minimal sketch below). The Let's Encrypt (LE) extension adds HTTPS, and it pretty much just works. Connecting to DO hosted databases and object storage is pretty smooth as well.
It doesn't auto-scale like FaaS options, or offer a lot of the other bits one may want/need, but it does fine for a lot of the one-off things I play with.
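For reference, the Dockerfile for that kind of one-off app can be this small (a hypothetical Node app; the entrypoint and port are made up):

    # hypothetical one-off app; adjust the base image and entrypoint to taste
    FROM node:20-alpine
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci --omit=dev
    COPY . .
    EXPOSE 8080
    CMD ["node", "server.js"]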
You can generalize the infrastructure that covers 99% of businesses' needs and write that Terraform code only once (a tiny sketch below). You can bring it with you everywhere, take care of the basics in seconds, and sleep easily at night without a pager going off.
It enabled the branding/marketing teams to launch sites without waiting months on the tech backlog. It was beautiful.
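The "write once" flavor looks roughly like this -- resource names and region are made-up examples:

    provider "aws" {
      region = "us-east-1"
    }

    # the kind of static-site bucket a marketing team can launch from
    resource "aws_s3_bucket" "site" {
      bucket = "example-marketing-landing"
    }

    resource "aws_s3_bucket_website_configuration" "site" {
      bucket = aws_s3_bucket.site.id

      index_document {
        suffix = "index.html"
      }
    }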
You have a mature product, with a proven market and known scalability demands.
The exact opposite of how this thread started, and somehow this is supposed to be a good anecdote refuting it!