1. My workload is an early-stage enterprise SaaS where traffic is not the limiting factor for our growth. If you’re planning to push a lot of bandwidth, you probably want to use something else.
2. Like I said, it’s that I don’t have to spend even a minute thinking about how I’m going to deploy my app. Vercel just listens on our git repo and runs the standard npm build and start commands, so I don’t need any vendor-specific configuration. We use Next.js as our web framework, so we just write plain web frontend/backend code and everything is automatically served on serverless infra (I never have to care about scaling or machine resources), behind a global CDN that caches the API responses we return if we just attach a Cache-Control header, which is very transparent. On top of that, Vercel creates preview deploys for all of our git branches, so I can see what my teammates are doing directly in their PRs, once again with no configuration. And if the pricing ever becomes an issue, our code just follows web standards and next to no vendor-specific code exists in the app, so I can move off it any time; but really I don’t see that happening even if our SaaS 100x’d in size (which is the aim).
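To make the Cache-Control bit concrete, here’s a minimal sketch of what a Next.js API route looks like; the route path and response body are made up for illustration, not our actual code. The structural types stand in for Next’s request/response types so the sketch is self-contained:

```typescript
// Hypothetical Next.js API route, e.g. pages/api/plans.ts (illustrative only).
// Minimal structural types standing in for NextApiRequest/NextApiResponse.
type Req = { url?: string };
type Res = {
  setHeader(name: string, value: string): void;
  status(code: number): Res;
  json(body: unknown): void;
};

export default function handler(_req: Req, res: Res): void {
  // s-maxage tells shared caches (the CDN edge) to keep the response for 60s;
  // stale-while-revalidate lets the edge serve the stale copy while it
  // refetches in the background, so users rarely wait on the origin.
  res.setHeader("Cache-Control", "s-maxage=60, stale-while-revalidate=300");
  res.status(200).json({ plans: ["free", "pro", "enterprise"] });
}
```

That one header is the entire caching “configuration”: the function itself stays ordinary web code, which is what keeps the lock-in low.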
I really have trouble seeing how we could do less work or be less locked into a specific infra than this. I’m sure it’s not ideal for resource-intensive workloads, but for ours, optimizing for resource efficiency by running our own stack of servers is a case of YAGNI; the simplicity of the DX is totally in our team’s favor.
Not really sure why this argument wouldn’t make sense by now: Heroku has always been expensive, and yet it has always been popular, because it’s so much simpler than dealing with the choice paralysis and complexity of either the full AWS system or running your own servers.