So well put, my good sir, this describes exactly my feelings with k8s. It always starts off all good with just managing a couple of containers to run your web app. Then before you know it, the devops folks have decided that they need to put a gazillion other services and an entire software-defined networking layer on top of it.
After spending a lot of time "optimizing" or "hardening" the cluster, cloud spend has doubled or tripled. Incidents have also doubled or tripled, as has downtime. Debugging effort has doubled or tripled as well.
I ended up saying goodbye to those devops folks, nuking the cluster, booting up a single Debian VM, enabling the firewall, and deploying the app with Kamal and Docker. Despite having only a single VM rather than a cluster, things have never been more stable and reliable from an infrastructure point of view. Costs have plummeted as well; it's so much cheaper to run. It's also so much easier and more fun to debug.
And yes, a single VM really is fine: you can get REALLY big VMs, which is plenty for most business applications like the ones we run. Most business applications only have hundreds to thousands of users. The cloud provider (Google in our case) handles hardware failures. If we need to upgrade with downtime, we spin up a second VM next to it, provision it, and update the IP address in Cloudflare. No need for a load balancer.
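The Cloudflare cutover really is just one API call. A hedged sketch of that step (the zone/record IDs and token are placeholders you'd look up in your own Cloudflare dashboard, and the IP is illustrative):

```shell
# repoint the A record at the freshly provisioned VM
NEW_IP=203.0.113.7   # illustrative address of the new VM
payload=$(printf '{"type":"A","name":"app.example.com","content":"%s","ttl":60}' "$NEW_IP")
echo "$payload"
# then PUT it to Cloudflare (ZONE_ID/RECORD_ID/CF_API_TOKEN are assumptions):
# curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
#      -H "Authorization: Bearer $CF_API_TOKEN" \
#      -H "Content-Type: application/json" --data "$payload"
```

A low TTL on the record beforehand keeps the switchover window short.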
People use Kubernetes for way too small things, and it sounds like you don't have the scale for actually running Kubernetes.
My app is a fairly simple Node process with some sidecar worker processes. k8s lets me deploy it 30 times for 30 PRs, trivially, in a standard way, with standard cleanup.
Can I do that without k8s? Yes. To the same standard with the same amount of effort? Probably not. Here, I'd argue the k8s APIs and interfaces are better than trying to do this on AWS ( or your preferred cloud provider ).
Where things get complicated is that k8s itself is borderline cloud-provider software. So teams who were previously fine using a managed service are now owning more of the stack, and these random devops heroes aren't necessarily making good decisions everywhere.
So you really have three obvious use cases:
a) You're doing something interesting with the k8s APIs that isn't easy to do on a cloud provider. Essentially, you're a power user. b) You want a cloud abstraction layer because you're multi-cloud or you want a lock-in bargaining chip. c) You want cloud semantics without being on a cloud provider.
However, if you're a single developer with a single machine, or a very small team happy working through contended static environments, you can pretty much just put a process on a box and call it done. k8s is overkill here, though not as much as people claim until the devops heroes start their work.
So having everyone use the same deployment model (and that’s typically k8s) saves effort. I don’t like it for sure
This is certainly the case in all the third-person accounts I hear. Online. I've never actually met a single person like that; if anything, those same people are the first to tell me about their Hetzner setups.
If you knew Kubernetes, you'd know not to use it. I say that as someone who used to do consulting for it.
The reality is that yet again "making money" completely collides with efficient, quality, sane productive work.
For me, one of the main reasons to leave that space was that I couldn't really deal with the fact that my work collided with a client's success. That said, I have helped clients get off that stuff, and off other things they thought they needed that just wasted time and money. It feels odd going into a company that hired you to consult on a topic only to end up telling them "the best approach for you is not doing this at all." Some people thought "well, what if we have hundreds of thousands or even millions of users," but even in those scenarios, once you moved away from the abstract and discussed a hypothetical based on their actual product, they realized they'd still be better off without it. Besides, that hypothetical usually lay so far in the future that they admitted they'd likely have a completely different setup by then, so preparing for it didn't even make sense.
I think a big thing related to that was (and is) the microservice craze, where people move to a complex architecture without many good reasons and then increase complexity way faster than what they actually deliver in terms of the product, because it somehow feels good. I know it does; I've been there. In reality the outcome is often just a complex mess where a relatively simple monolith would have done. And these monoliths do work. In the vast majority of cases they are easy to scale, because your problem shifts from "how do we best allocate this huge number of very different services across our infrastructure" to (for the most part) "how do we spin up our monolith on one more server," which tends to be a much easier problem to tackle.
And nothing stops you from still using everything else if you want. Just because it's a monolith doesn't mean you need to skip any of the cloud offerings, etc. For some reason there seems to be this idea that if you write a monolith you are somehow barred from using modern tooling, infrastructure, services, etc. Not sure where that comes from.
I'm not surprised even in the slightest that DevOps workers will slap k8s on everything, to show "real industry experience" in a job market where the resume matches the tools.
I mean, I worked with people who were surprised that you can run more than one application inside an EC2 VM.
But if its use were confined to this use case, pretty much nobody would be using it (except as a customer of an organization's infra) and barely anyone would be talking about it (much like there isn't much talk about Borg).
The reason k8s is a thing in the first place is because it's being used by way too many people for their own good. (Most people who have worked in startups have met too many architecture astronauts in their lives.)
If I had to bet, I'd wager that 99% of k8s users are in the "spin up a few containers to run your web app" category (for the simple reason that for every billion-dollar tech business using it for legit reasons, there are many thousands of early startups who do not).
This is why you get many folks over-thinking the solution and picking the most hyped technologies and using them to solve the wrong problems without thinking about what they are selling.
You don't need K8s + AWS EC2 + S3 just to host a web app. That tells me they like lighting money on fire and bankrupting the company and moving to the next one.
Maybe those devops folks only pay attention to k8s clusters and you're flying under their radar with your single Debian VM + Kamal. But the same thinking that results in an overly complex, impossible-to-debug, expensive-to-run k8s cluster can absolutely produce the same thing with regular VMs unless, again, you are just left to your own devices because their policies don't apply to VMs, yet.
The problem usually is that you're one mistake away from someone shoving their nose in it. "What are you doing again? What about HA and redundancy? Slow rollout and rollback? You must have at least 3 VMs (ideally 5), and of course you can't expose all VMs to the internet. You must define a virtual network with policies we can control, and no, WireGuard isn't approved. You must split the internet-facing load balancer from the backend resources and assign different identities with proper scoping to them. Install these 4 different security scanners, these 2 log processors, this watchdog, and this network monitor. Are you doing mTLS between the VMs on the private network? What if an attacker gains access to your network? What if your proxy is compromised? Do you have visibility into all traffic on the network? Everything must flow through this appliance."
And I’m building and happily using Uncloud (https://github.com/psviderski/uncloud) for this (inspired by Kamal). It makes multi-machine setups as simple as a single VM. Creates a zero-config WireGuard overlay network and uses the standard Docker Compose spec to deploy to multiple VMs. There is no orchestrator or control plane complexity. Start with one VM, then add another when needed, can even mix cloud VMs and on-prem.
Radboud University recently announced they're rolling it out for managing containers across the faculty, which is the most "serious install" I know about, but there could be others: https://cncz.science.ru.nl/en/news/2026-04-15_uncloud/
Scale vertically until you can't because you're unlikely to hit a limit and if you do you'll have enough money to pay someone else to solve it.
Docker is amazing development tooling but it makes for horrible production infrastructure.
Docker Compose is good for running things on a single server as well.
Docker Swarm and Hashicorp Nomad are good for multi-server setups.
Kubernetes is... enterprise and I guess there's a scale where it makes sense. K3s and similar sort of fill the gap, but I guess it's a matter of what you know and prefer at that point.
Throw Portainer on a server and the DX is pretty casual (when it works and doesn't have weird networking issues).
Of course, there's also other options for OCI containers, like Podman.
I use k3s/Rancher with Ansible and dedicated VMs from various providers. Flannel with WireGuard connects them all together.
This, I think, is a reasonable solution, as the main problem with cloud providers is that they are just price gouging.
All of this just adds so much extra complexity. If I'm running Amazon.com then sure, but your average app is just fine on a single VM.
If you actually need to deploy a few dozen services all talking to each other, k8s isn't a bad way to do it. It has its problems, but it lets your devs mostly self-service their infrastructure needs instead of filing a ticket for every VM and firewall rule they need. I'm saying that from the perspective of having migrated from "the old way" to a 14-node actual-hardware k8s cluster.
It does make debugging harder, as you pretty much need a central logging solution, but at that scale you want one anyway, so it isn't a big jump, and developers like it.
The main problem with k8s is frankly nothing technical, just the "ooh shiny" problem developers have, where they see tech and want to use it regardless of whether it fits.
There are situations where a single VM, no matter how powerful it is, can't do the job.
I'm not familiar with kubernetes, but doesn't it already do SDN out of the box?
Yes and no. Kubernetes defines a specification for network behavior (in the form of CNI), but it contains no actual implementation. You have to install a network plugin (Flannel, Calico, Cilium, etc.) basically as the first setup step.
It's obvious to you, me, and the other 2 presumably techie people who've responded within 15 mins that you shouldn't have been using Kubernetes. But you probably work in a company full of techie people, who ended up using Kubernetes.
We have HN, an environment full of techie people, who immediately recognise not to use k8s in 99% of cases; yet in actual paid professional environments, in 99% of cases, the same techie people will tolerate, support, and converge on the idea that they should use k8s.
I feel like there's an element of the emperor's new clothes here.
Do you pair it with some orchestration (to spin up the necessary VM)?
Most companies aren't "web scale" ™ and don't need an orchestrator built for google level elasticity, they need a vm autoscaling group if anything.
Most apps don't need such granular control over fs access, network policies, root access, etc, they need `ufw allow 80 && ufw enable`
Most apps don't need a 15 stage, docker layer caching optimized, archive promotion build pipeline that takes 30 minutes to get a copy change shipped to prod, they need a `git clone me@github.com:me/mine.git release_01 && ln -s release_01 /var/www/me/mine/current`
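That two-liner really can be the whole "pipeline." A minimal sketch of the symlink-flip deploy pattern (paths are illustrative; a temp dir stands in for /var/www/me/mine):

```shell
# symlink-flip deploy: each release lives in its own dir,
# "current" is just a symlink the web server follows
site=$(mktemp -d)                            # stand-in for /var/www/me/mine
mkdir "$site/release_01"                     # in real life: git clone ... release_01
ln -sfn "$site/release_01" "$site/current"   # near-atomic cutover
readlink "$site/current"                     # verify where "current" points
```

Rollback is just re-pointing the symlink at the previous release directory.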
This is coming from someone who has had roles both as a backend product engineer and as a devops/platform engineer, and who has been around long enough to remember when "deploy to prod" meant Eclipse FTPing PHP files straight to the prod server on file save. I manage clusters for a living for companies that went full k8s and never should have gone full k8s. ECS would have worked for 99% of these apps, if they even needed that.
Just like the js ecosystem went bat shit insane until things started to swing back towards sanity and people started to trim the needless bloat, the same is coming or due for the overcomplexity of devops/backend deployments
That is not what kube is designed for.
As a devops/cloud engineer coming from a pure sysadmin background (you've got a cluster of n machines running RHEL and that's it) i feel this.
The issues i see however are of different nature:
1. résumé-driven development (people get a higher-paying job if they have the buzzwords in their CV)
2. a general lack of core-linux skills. people don't actually understand how linux and kubernetes work, so they can't build the things they need, so they install off-the-shelf products that do 1000 things including the single one they need.
3. marketing, trendy stuff and FOMO... that tell you that you absolutely can't live without product X or that you must absolutely be doing Y
to give you an example of 3: fluxcd/argocd. they're large and clunky, and we're getting pushed to adopt that for managing the services that we run inside the cluster (not developer workloads, but mostly-static stuff like the LGTM stack and a few more things - core services, basically). they're messy, they add another layer of complexity, other software to run and troubleshoot, more cognitive load.
i'm pushing back on that, and frankly for our needs i'm fairly sure we're better off using terraform to manage kubernetes stuff via the kubernetes and helm provider. i've done some tests and frankly it works beautifully.
it's also the same tool we use to manage infrastructure, so we get to reuse a lot of skills we already have.
also it's fairly easy to inspect... I'm doing some tests using https://pkg.go.dev/github.com/hashicorp/hcl/v2/hclparse and i'm building some internal tooling to do static analysis of our terraform code and automated refactoring.
i still think kubernetes is worth the hassle, though (i mostly run EKS, which by the way has been working very well for me)
> Traditional Cloud 1.0 companies sell you a VM with a default of 3000 IOPS, while your laptop has 500k. Getting the defaults right (and the cost of those defaults right) requires careful thinking through the stack.
I wish them a lot of luck! I admire the vision and am definitely a target customer, I'm just afraid this goes the way things always go: start with great ideals, but as success grows, so must profit.
Cloud vendor pricing often isn't based on cost. Some services they lose money on, others they profit heavily from. These things are often carefully chosen: the type of costs that only go up when customers are heavily committed—bandwidth, NAT gateway, etc.
But I'm fairly certain OP knows this.
Using fio
Hetzner (CX23, 2 vCPU, 4 GB): ~3900 IOPS (read/write), ~15.3 MB/s, avg latency ~2.1 ms, 99.9th percentile ≈ 5 ms, max ≈ 7 ms
DigitalOcean (SFO1, 2 GB RAM, 30 GB disk): ~3900 IOPS (same!), ~15.7 MB/s (same!), avg latency ~2.1 ms (same!), 99.9th percentile ≈ 18 ms, max ≈ 85 ms (!!)
using sequential dd
Hetzner: 1.9 GB/s DO: 850 MB/s
Using the low-end plan on both, but the Hetzner is €4 and the DO instance is $18.
RS 1000 G12: AMD EPYC™ 9645, 4 dedicated cores, 8 GB DDR5 RAM (ECC), 256 GB NVMe. Costs €12.79.
Results with the following command:
fio --name=randreadwrite \
    --filename=testfile \
    --size=5G \
    --bs=4k \
    --rw=randrw \
    --rwmixread=70 \
    --iodepth=32 \
    --ioengine=libaio \
    --direct=1 \
    --numjobs=4 \
    --runtime=60 \
    --time_based \
    --group_reporting
IOPS: read 70.1k, write 30.1k (~100k total)
Throughput: read 274 MiB/s, write 117 MiB/s
Latency: read avg 1.66 ms, P99.9 2.61 ms, max 5.64 ms; write avg 0.39 ms, P99.9 2.97 ms, max 15.31 ms
IOPS: read 325k, write 139k
Throughput: read 1271MB/s, write 545MB/s
Latency: read avg 0.3ms, P99.9 2.7ms, max 20ms; write: 0.14ms, P99.9 0.35ms max 3.3ms
So roughly 100× the IOPS and throughput of the cloud VMs.
Using a Netcup VPS 1000 G12 is more comparable.
read: IOPS=18.7k, BW=73.1MiB/s
write: IOPS=8053, BW=31.5MiB/s
Latency Read avg: 5.39 ms, P99.9: 85.4 ms, max 482.6 ms
Write avg: 3.36 ms, P99.9: 86.5 ms, max 488.7 ms
Edit: I posted this before reading, and these two are the same ones he points out.
And yes, IO typically happens in 4 KiB blocks, so you need a decent amount of IOPS to get the full bandwidth.
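The back-of-the-envelope math matches the benchmarks above: at 4 KiB blocks, ~3900 IOPS caps out around 15 MB/s, which is exactly the cloud VM numbers posted earlier.

```shell
# max sequential-equivalent bandwidth = IOPS * block size
iops=3900
bs_kib=4
bw_mib=$(( iops * bs_kib / 1024 ))   # 3900 * 4 KiB = 15600 KiB/s
echo "${bw_mib} MiB/s"               # → 15 MiB/s
```

Conversely, to saturate a 1.9 GB/s NVMe at 4 KiB you'd need on the order of 500k IOPS, which is why the default IOPS cap is what actually throttles you.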
If that's true, I wonder if this is a deliberate decision by cloud providers to push users towards microservice architectures with proprietary cloud storage like S3, so you can't do on-machine dbs even for simple servers.
Instead they make the default "meager IOPS" and then charge more to the people who need more.
Business 101 teaches us that pricing isn't based on cost. Call it top-down vs bottom-up pricing, but the first-principles "it costs me $X to make a widget, so sell the product for $Y = 1.y * $X" is not how pricing works in practice.
The price is what the customer will pay, regardless of your costs.
It kinda is, but obscured by GP's formula.
More simply; if it costs you $X to produce a product and the market is willing to pay $Y (which has no relation to $X), why would you price it as a function of $X?
If it costs me $10 to make a widget and the market is happy to pay $100, why would I base my pricing on $10 * 1.$MARGIN?
I see the same thing happen with Kubernetes. I've run clusters of various sizes for about half a decade now. I've never once had an incident that wasn't caused by the product itself. I recall one particular incident where we had a complete blackout for about an hour. The people predisposed to hating Kubernetes did everything they could to blame it all on that "shitty k8s system." Turns out the service in question had simply DOS'd itself by opening tens of thousands of ports in a matter of seconds when a particular scenario occurred.
I'm neither in the "k8s is the future" camp nor the "k8s is total trash" camp. It's a good system for when you genuinely need it. I've never understood the other two sides of the equation.
Usually they go hand in hand.
By the time I left, the developers didn't really know anything about how the underlying infrastructure worked. They wrote their Dockerfiles, a tiny little file to declare their deployment needs, and then they opened a platform webpage to watch the full lifecycle.
If you're a single service shop, then yeah, put Docker Compose on it and run an Ansible playbook via GitHub Actions. Done. But for a larger org moving off cloud to bare-metal, I really couldn't see not having k8s there to help buffer some of the pain.
I agree that Kubernetes can help simplify the deployment model for large organizations with a mature DevOps team. It is also a model that many organizations share, and so you can hire for talent already familiar with it. But it's not the only viable deployment model, and it's very possible to build a deployment system that behaves similarly without bringing in Kubernetes. Yes, including automatic preview deployments. This doesn't mean I'm provided a VM and told to figure it out. There are still paved-path deployment patterns.
As a developer, I do need to understand the environment my code runs in, whether it is bare-metal, Kubernetes, Docker Swarm, or a single-node Docker host. It impacts how config is deployed and how services communicate with each other. The fact that developers wrote Dockerfiles is proof that they needed to understand the environment. This is purely a tradeoff (abstracting one system, but now you need to learn a new one.)
It's up to the individual to choose how much knowledge they want to trade away for convenience. All the containers are just forms of that trade.
You surely meant "much less efficient than"
There also seems to be confusion about what I meant by "bare-metal." I wasn't intending to refer to the server ownership model, but rather the deployment model where you deploy software directly onto an operating system.
When all you have is a hammer, every problem starts to look like a nail. And the people with axes are wondering how (or indeed even why) so many people are trying to chop wood with a hammer. Further, some axewielders are wondering why they are losing their jobs to people with hammers when an axe is the right tool for the job. Easy to hate the hammer in this case.
And the end result is often that you have two tribes with a totally incorrect idea of even what tools they themselves are using and how, as if you'd swapped in an intentionally wrong dictionary like in a Monty Python sketch.
We run k8s with several VMs in a couple different cloud providers. I’d love it if I could forget about the VMs entirely.
Is there a simpler thing than k8s that gets you all that? Probably. But if you don’t use k8s, aren’t you doomed to reimplement half of it?
Like these things:
- Service discovery or ingress/routing (“what port was the auth service deployed on again?”)
- Declarative configuration across the board, including for scale-out
- Each service gets its own service account for interacting with external systems
- Blue/green deployments, readiness checks, health checks
- Strong auditing of what was deployed and mutated, when, and by whom
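You do end up reimplementing some of that list, but on a single host a few of the bullets are surprisingly small. For example, service discovery and readiness/health checks fall out of plain Docker Compose; a hedged sketch (the image name, port, and endpoint are made up):

```yaml
services:
  auth:
    image: myorg/auth:1.2.3              # hypothetical image
    ports:
      - "8081:8080"                      # fixed answer to "what port was auth on?"
    healthcheck:                         # readiness gate before sending traffic
      test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]
      interval: 10s
      timeout: 3s
      retries: 3
```

Other services on the same Compose network reach this one simply as `auth:8080` via built-in DNS. The per-service identities and strong audit trail are the bullets with no cheap single-host substitute, and that's a fair argument for k8s at that scale.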
I ended up buying a cheap auctioned Hetzner server and using my self-hostable Firecracker orchestrator on top of it (https://github.com/sahil-shubham/bhatti, https://bhatti.sh) specifically because I wanted the thing he’s describing — buy some hardware, carve it into as many VMs as I want, and not think about provisioning or their lifecycle. Idle VMs snapshot to disk and free all RAM automatically. The hardware is mine, the VMs are disposable, and idle costs nothing.
The thing that, although obvious, surprised me most is that once you have memory-state snapshots, everything becomes resumable. I make a browser sandbox, get Chromium to a logged-in state, snapshot it, and resume copies of that session on demand. My agents work inside sandboxes, I run docker compose in them for preview environments, and when nothing’s active the server is basically idle. One $100/month box does all of it.
Out of interest, what sandboxing solution do you use?
There is already so much software out there that isn't used by anyone; just take a look at any app store. I don't understand why we are so obsessed with cranking out even more, when the obvious use case for LLMs should be writing better software. Let's hope the focus shifts from code generation to something else. There are many ways LLMs can assist in writing better code.
I believe right now we are still in the phase of “how can AI help engineers write better software”, but are slowly shifting to “how can engineers help AI write better software.” This will bring in a new herd of engineers with completely different views on what software is, and how to best go about building computer interactions.
Jevons paradox would be if, despite software becoming cheaper to produce, the total spend on producing software increased because the growth in production outran the savings.
Jevons paradox applies when demand is very elastic, i.e. small changes in price cause large changes in quantity demanded. It's a property of the market.
I honestly think this is ideal. Video games aside, I think one day we'll look back and realize just how insane it was that we built software for millions or even billions of users to use. People can now finally build the software that does exactly what they've wanted their software to do without competing priorities and misaligned revenue models working against them. One could argue this kind of software, by definition, is higher quality.
I could see maybe more customization of said software, but not totally fresh. I do agree that people will invent more one-off throwaway software, though.
My view is actually the opposite. Software is now cattle, not pets. We should use one-offs. We should use micro-scale snippets. Speaking a language should be equivalent to programming. (I know, it's a bit of a pipe dream.)
In that sense, exe.dev (and tailscale) is a bit like pet-driven projects.
As for the average quality: it’s unclear.
My intuition is that agents lift up the floor to some degree, but at the same time will lead to more software being produced that’s of mediocre quality, with outliers of higher quality emerging at a higher rate than before.
If you're doing anything complicated, Excel just doesn't make sense anymore. It'll still be the data exchange format (at least, something more advanced than CSV), but it's no longer the only frontend.
"No one uses" is no longer the insult it once was. I don't need or want to make software for every last person on the world to use. I have a very very small list of users (aka me) that I serve very well with most of the software that I generate these days outside of work.
It certainly is for lots of businesses, otherwise they go out of business.
There is something called 'revenue' which they need to make from customers which are their 'users', and that revenue pays for the 'operating costs' which includes payroll, office rent, infrastructure etc.
This just means it is more important than ever to know what to build, not just how to build it. It is unrealistic for a business to disregard that, build whatever they want, and end up with zero users.
No users, No revenue. No revenue, No business.
I agree there is opportunity in making LLM development flows smooth, paired with the flexibility of root-on-a-Linux-machine.
> Time and again I have said “this is the one” only to be betrayed by some half-assed, half-implemented, or half-thought-through abstraction. No thank you.
The irony is that this is my experience of Tailscale.
Finally, networking made easy. Oh god, why is my battery doing so poorly. Oh god, it's modified my firewall rules in a way that's incompatible with some other tool, and the bug tracker is silent. Now I have to understand their implementation, oh dear.
No thank you.
I hope this wasn't interpreted towards exe.dev. That really is a cool service!
Could you rephrase that / elaborate on that? Isn't Tailscale's selling point precisely that they do identity-based networking?
EDIT: Never mind, now I see the sibling comment to which you also responded – I should have reloaded the page. Let's continue there!
Tags permanently erase the user identity from a device and disable things like Taildrop. When I tried assigning a tag for ACLs, I found that I then could not remove it and had to endure a very laborious process to re-register a Tailscale device that I had added for the express purpose of remote access.
I think that's startup-thinking, at least in my experience. Maybe in a small company the DevOps guy does all infra.
In my experience, especially in financial services, who runs the show are platform engineering MDs - these people want maximum control for their software engineers, who they split up into a thousand little groups who all want to manage their own repos, their own deployments, their own everything. It's believed that microservices gives them that power.
I guarantee you devops people hate complexity, they're the ones getting called at night and on the weekend, because it's supposedly always an "infrastructure issue" until proven otherwise.
Also the deployment logs end up in a log aggregation system, and god forbid software developers troubleshoot their own deployments by checking logs. It's an Incident.
Are microservices a past fad yet?
Everything cloud companies provide just costs so much. My own Postgres, running in an HA setup with backups, cost me 1/10th the price of RDS or CloudSQL, running in production over 10 years with no downtime.
i autoscale instances directly off the metrics harvested from Grafana. it works fine for us; we've got the autoscaler configured via webhooks. very simple and it has never failed us.
i don't know why would i even ever use GCP or AWS anymore.
All my services are fully HA and backups work like a charm every day.
Does a regular 20-something software engineer still know how to turn some eBay servers & routers into a platform for hosting a high-traffic web application? Because that is still a thing you can do! (I did it last year to build a 50PiB+ data store.) I'm genuinely curious how popular it is for medium-to-big projects.
And Hetzner gives you almost all of that economic upside while taking away much of the physical hassle! Why are they not kings of the hosting world, rather than turning over a modest €367M (2021)?
I find it hard to believe that the knowledge to manage a bunch of dedicated servers is that arcane that people wouldn't choose it for this kind of gigantic saving.
Managing servers is fine. Managing servers well is hard for the average person. Many hand-rolled hosting setups I've encountered include fun gems such as:
- undocumented config drift.
- one unit of availability (downtime required for offline upgrades, resizing or maintenance)
- very out of date OS/libraries (usually due to the first two issues)
- generally awful security configurations. The easiest configuration being open ports for SSH and/or database connections, which probably have passwords (if they didn't you'd immediately be pwned)
Cloud architecture might be annoying and complex for many use cases, but if you've ever been the person who had to pick up someone else's "pet" and start making changes or just maintaining it, you'll know why it can be nice to have cloud arch put some constraints on how infra is provisioned, and to be willing to pay for that.
Whether or not cloud is viable for a company is very individual. It's very hard to pinpoint a size or a use case that will always make cloud the "correct" choice.
OP is not saying they push new versions at such a high frequency they need checks every one minute.
The choice of one minute vs 15 minutes is an implementation detail and, architected like this, costs nothing.
I hope that helps. Again this is my own take.
But I came across Mythic Beasts (https://www.mythic-beasts.com/) yesterday, similar idea, UK based. Not used them yet but made the account for the next VPS.
You can use block storage if data matters to you.
Many services do not need to care about data reliability or can use multiple nodes, network storage or many other HA setups.
But there is middleground in form of VPS, where hardware is managed by the provider. It's still way way cheaper than some cloud magic service.
I am sure it's luck, but we have a few Hetzner VPSes in both German locations and in the last 5 years, afaik, they've never been down. On our HTTP monitoring service they show hundreds of days of uptime, broken only when we restarted them ourselves.
An employee is going to cost anywhere between 8k and 50k per month. Hiring an employee to save 200/month on servers by using a shitty VPS provider is not saving you any money.
If you're looking to invest, I'm fine with only $5M :)
Running Shellbox 24/7 is ~25% cheaper than Exe, with 2x storage but 50% of RAM. Exe seems to provide additional features (which I don't need). Not presenting this information upfront and in an easily digestible format makes me suspicious.
I dig the overall aesthetic and may give Shellbox v2 a try.
I don't want to make that public; it's my own isolated dev environment and it runs on my private Raspberry Pi behind my TV. Costs me nothing.
I hope you have a good success with your service.
`ssh you/repo/branch@box.clawk.work` → jump directly into Claude Code (or Codex) with your repo cloned and credentials injected. Firecracker VMs, 19€/mo.
POC, please be kind.
If you want to try it: code `HNPRELAUNCH` on checkout, first month free, then 19€/mo (cancel anytime from your Stripe receipt). Limited to the first 20 redemptions, expires in a week.
Honest feedback on what breaks would mean a lot.
At 19€/mo, are you subsidizing it, given the sharp rise of LLM costs lately?
Or are you heavily restricting model access? Surely there is no Opus?
Just shows I'm the Dropbox commenter. I have what exe provides on my own and am shocked by the value these abstractions provide everyone else!! One-off containers on my own hardware that spin up and down, run async agents, etc.; Tailscale auth; the team can share or connect easily by name.
The technology itself in its current form is not valuable
https://github.com/hetzneronline/community-content/blob/mast...
It also has a CLI, hcloud. Am I getting any value with exe.dev I couldn't get with an 80 line hcloud wrapper?
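For what it's worth, the wrapper in question could be very small indeed. A minimal sketch in Python rather than shell; the server type (cx22), image slug, and dry-run behaviour are my own assumptions, not anything exe.dev or Hetzner prescribe:

```python
import shlex
import subprocess

def hcloud(*args, dry_run=True):
    """Run an hcloud CLI command, or just show what would run."""
    cmd = ["hcloud", *args]
    if dry_run:
        return "would run: " + shlex.join(cmd)
    # With dry_run=False this shells out to the real `hcloud` binary.
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def create_devbox(name="devbox", server_type="cx22",
                  image="ubuntu-24.04", dry_run=True):
    return hcloud("server", "create",
                  "--name", name, "--type", server_type, "--image", image,
                  dry_run=dry_run)

print(create_devbox("mybox"))
# → would run: hcloud server create --name mybox --type cx22 --image ubuntu-24.04
```

The real version would grow SSH-key injection and teardown, but it stays well under 80 lines.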
For agents, declarative plans are still valuable because they are reviewable. The interesting question is whether exe.dev changes the primitive: resource pools for many isolated VM-like processes, or just nicer VPS provisioning.
Not sure we can move away from CPU/memory/IO budgeting towards total metal saturation, because code isn't what it used to be: no one handles malloc failure any more, we just crash OOM.
The key point is the partner companies. Almost nobody is actually running their own clouds the way they would with various 365 products, AWS, or Azure. They buy the cloud from partners, similar to how they used to (and still do) buy solutions from Microsoft partners. So if you want to "sell cloud" you're probably going to struggle unless you get some of these onboard. Which again would probably be hard, because I imagine a lot of what they sell is a package that basically runs on VMs set up as part of the package they already have.
International visitors might tell us more about the benefits of companies with no EU, US, or UK nexus, legally and rights-wise.
Lean software -> missing features users want -> add features over time -> bloated mess -> we need a smaller rewrite -> Lean software -> ...
> Finally, clouds have painful APIs. This is where projects like K8S come in, papering over the pain so engineers suffer a bit less from using the cloud. But VMs are hard with Kubernetes because the cloud makes you do it all yourself with lumpy nested virtualization. Disk is hard because back when they were designing K8S Google didn’t really even do usable remote block devices, and even if you can find a common pattern among clouds today to paper over, it will be slow. Networking is hard because if it were easy you would private link in a few systems from a neighboring open DC and drop a zero from your cloud spend. It is tempting to dismiss Kubernetes as a scam, artificial make work designed to avoid doing real product work, but the truth is worse: it is a product attempting to solve an impossible problem: make clouds portable and usable. It cannot be done.
Please learn from Unix's mistakes. Learn from Nix. Support create-before-destroy patterns everywhere. Forego all global namespaces you can. Support rollbacks everywhere.
If any cloud provider can do that, cloud IaC will finally stop feeling so fake/empty compared to a sane system like NixOS.
Fine, their UI is different, but I don't see any real difference from other providers.
On that machine you can (easily) make an arbitrary number of VMs.
Each VM has its own URL that you can share (or make private).
See features: https://exe.dev/docs/customization
Checking the current offering, it's just prepaid cloud capacity with rather low flexibility. It's cheap though, so that's nice, I guess. But does this solve anything new? Anything fly.io or the like doesn't solve?
What is the new idea here? Or is it just the vibes?
As another user notes in this thread, exe.dev isn't that cheap. Their bandwidth pricing is £7/100GB. The lowest compute tier is £20/mo (Fly.io machines/sprites can go for less than £2/mo).
> Anything fly.io also doesn't solve?
exe.dev is comparable to sprites.dev Fly.io launched recently; but with a different pricing model.
David, by way of Tailscale, was himself among the early users of Fly.io. I read some of David's commentary on "Cloud 1.0" as taking a dig at their friends at Fly.io, too. This is going to be interesting...
We're thinking about switching to this pricing model for our own startup[1] (we run sandboxed coding agents for dev teams). We run on Daytona right now for sandboxes. Sometimes I spin up a sandboxed agent to make changes to an app, and then I leave it running so my teammate can poke around and test the running app in the VM, but each second it's running we (and our users) incur costs.
We can either build a bunch of complicated tech to hibernate running sandboxes (there's a lot of tricky edge cases for detecting when a sandbox is active vs. should be hibernated) or we can just provision fixed blocks of compute. I think I prefer the latter.
The shell command to start a new VM has a --prompt flag to get an LLM to configure the VM for you.
VMs have no public IPv4 address, and the IPv6 address doesn't seem to allow incoming connections.
The only supported inbound connections are via their HTTP proxy.
There is no private networking.
At first I interpreted the complaint about cloud providers not offering nested virtualization as something he intends to address by offering it as a feature, but no: he means that exe.dev's VM abstraction eschews the need for it.
I'm very curious how they deal with subscription levels/noisy neighbors.
Oh, that’s too kind. More like 100x to 1000x. Raw bandwidth is cheap.
Another one could be Bitwarden, although I don't host my own password manager personally. Or netbird. You get the point
VMs have a built-in gateway to cloud providers with a fixed URL and no auth. You can top that up via the service itself. No need for your own keys.
So likely a good tool for managing AI agents. And "cloud" is a bit of a stretch, the service is very narrow.
The complete lack of any detailed description of the regions beyond a city name makes it really only suitable for ephemeral/temporary deployments. We don't know what the datacenters are or what redundancy is in place; no backups or anything like that.
Running a cloud data center could be a business like operating a self-storage facility or a car wash. Small investors love this kind of operation.
You can see their base docker image here - https://github.com/boldsoftware/exeuntu
52.35.87.134 <- Amazon Technologies Inc. (AT-88-Z)
Hey wait a minute!
In my experience, K8s is a million times better than legacy shit it is usually replacing. The Herokus, the Ansible soup, the Chef/Puppet soup before that etc. The legacy infra that was held together by glue and sweat that everybody was afraid to touch.
Human nature, really.
* Insistence on adding costly abstractions to overcome the limitations of non-fungible resources
* Deliberate creation of over or under-sized resource "pieces" instead of letting folks consume what they need
* Deliberate incompatibility with other vendors to enforce lock-in
I pitched a "Universal Cloud" abstraction layer years ago that never got any traction, and honestly this sounds like a much better solution anyhow. When modern virtualization is baked into OS kernels, it doesn't make a whole lot of sense to enforce arbitrary resource sizes or limits other than to inflate consumption.
Kubernetes without all the stuff that makes it a bugbear to administrate, in other words. Let me buy/rent a pool of stuff and use it how I see fit, be it containers or VMs or what-have-you.
dedicated servers, as hinted by others here, address the vast majority of issues one may face for any non-enterprise needs. if you know about IOPS and care about them, odds are that running a simple open-source project [1] on top of one is all you need to do to move on with your day.
need redundancy, etc.? can complement with another one in another provider/region or put CF in front of your box. this is clearly working well enough for some of the commenters who are able to sell their own service on top of this approach.
I don’t care about how the backend works. Supabase requires magical luck to self-host.
A lot of cloud providers have very generous free tiers to hook you, and then the moment things take off, it’s a small fortune to keep the servers on.
One thing I'm confused by is how to create a shared resource, e.g. a Redis server, and connect to it from other VMs? It looks quite cumbersome to set up Tailscale or connect via SSH between VMs. Also, what about egress? My guess is that all traffic is billed at $0.07 per GB. It looks like this cloud is made to run stateful agents and personal isolated projects, and distributed systems or horizontal scaling aren't a good fit for it?
Also I'm curious: why not a Railway-like billing-per-resource-utilization pricing model? It’s very convenient, and I would argue it's made for the agent era.
I set up for my friends and family a Railway project that spawns a VM with a disk (stateful service) via a Telegram bot and runs an openclaw-like agent; it costs me something like $2 to run 9 VMs like this.
I've found the quality and simplicity to be an attractive solution for lazy devops when I need to reach for a second computer
The main reason clouds offer network block devices is abstraction.
Starting a DigitalOcean droplet is a single curl call. Starting a Hetzner server is as well. Their APIs are completely fine and known to LLMs.
Why would agents learn exe’s way of setting up / deploying / binding to ports / auth, rather than just ssh’ing into a vm..?
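For context, the "single call" really is about that small. A sketch of the request a DigitalOcean droplet creation takes, built as a request description rather than actually sent; the region, size, and image slugs here are example values and the token is a placeholder:

```python
import json

def droplet_request(name, token="YOUR_TOKEN"):
    """Describe (but do not send) a droplet-creation API call."""
    return {
        "method": "POST",
        "url": "https://api.digitalocean.com/v2/droplets",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        # Example slugs; any region/size/image pair the account allows works.
        "body": json.dumps({
            "name": name,
            "region": "fra1",
            "size": "s-1vcpu-1gb",
            "image": "ubuntu-24-04-x64",
        }),
    }

req = droplet_request("web-1")
print(req["url"])  # → https://api.digitalocean.com/v2/droplets
```

Pipe that through curl (or any HTTP client) with a real token and you have a server; there is nothing provider-specific for an agent to learn beyond the slugs.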
> Finally, clouds have painful APIs. This is where projects like K8S come in, papering over the pain so engineers suffer a bit less from using the cloud.
K8s's main function isn't to paper over existing cloud APIs; that is just a necessity when you deploy it in the cloud. On normal hardware it's just an orchestration layer, and often just a way to pass config from one app to another in a structured format.
> But VMs are hard with Kubernetes because the cloud makes you do it all yourself with lumpy nested virtualization.
Man discovered system designed for containers is good with containers, not VMs. More news at 10
> Disk is hard because back when they were designing K8S Google didn’t really even do usable remote block devices, and even if you can find a common pattern among clouds today to paper over, it will be slow.
Ignorance. k8s has abstractions over a bunch of types of storage; for example, using Ceph as a backend will just use KVM's Ceph backend, with no extra overhead. It also supports "oldschool" protocols used for VM storage like NFS or iSCSI. It might be slow in some cases in the cloud, if the cloud doesn't provide enough control, but that's not k8s's fault.
> Networking is hard because if it were easy you would private link in a few systems from a neighboring open DC and drop a zero from your cloud spend.
He mistakes cloud problems for k8s problems (again). All k8s needs is visibility between nodes. There are multiple providers to achieve that, some with zero tunnelling, just routing. It's still complex, but no more than "run a routing daemon".
I expect his project to slowly reinvent cloud APIs and copy what k8s and other projects did, once he starts hitting the problems those solutions solved. And do it worse, because instead of researching the whys and why-nots, that person seems to want to throw everything out while learning no lessons.
Do not give him money
EC2 provides the *d VMs that have SSDs with high IOPS at much lower cost than network SSDs. They are ephemeral, but so are a laptop and its SSD: it can lose the data. From the AWS docs: "If you stop, hibernate, or terminate an instance, data on instance store volumes is lost."
(Percentages cited above are tongue-in-cheek, actual numbers are probably different)
A service offering VMs for $20 is a long way from AWS, but I see how it makes sense as a first step. AWS also started with EC2, but in a completely different environment with no competition.
But I don't want to be either of those customers. It means the whole system has an extra layer of abstraction, so they can juggle VMs around. It's why you need slow EBS instead of just getting a flash drive in the same case as the CPU, with 0.01x the latency.
The key to scaling up is to have big-enough hardware on the backend. If Hetzner is renting out bare metal instances then they can only rent out the sizes that they have. If a cloud provider invests in really big single systems, they can offer fractions of those systems to multiple tenants, some of whom scale up to use the entire system, and some who don't. I think that is a win-win.
A fractional VM is also a fungible VM. If the tenant calls to spin up a certain size VM, then the backend can find suitable hardware for it from a menu of sizes. Smaller VMs can slot in anywhere there is room, not just on a designated bare-metal system.
A cloud provider is always going to want to maximize their rack space, wattage/heat, and resource usage. So they will invest in high-density systems at every chance. On the other hand, cloud tenants will have diverse needs, including some fraction of those big computers.
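The bin-packing intuition above can be sketched in a few lines. A toy first-fit placer; host and VM sizes (in vCPUs) are made-up numbers, not any provider's real inventory:

```python
# Two big backend hosts; fractional VMs slot in wherever there is room.
hosts = [{"cap": 64, "used": 0}, {"cap": 64, "used": 0}]

def place(vm_vcpus):
    """Put a VM on the first host with enough free capacity; None if full."""
    for i, h in enumerate(hosts):
        if h["cap"] - h["used"] >= vm_vcpus:
            h["used"] += vm_vcpus
            return i
    return None

# Mixed sizes pack tightly: a 48-vCPU tenant and small VMs share hosts.
print([place(v) for v in [32, 32, 48, 16]])  # → [0, 0, 1, 1]
```

With fixed bare-metal sizes the 48-vCPU request would need its own dedicated box; with fungible fractions both hosts end up fully utilized.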
"That must be worst website ever made"
Made me love the site and style even more
Cloud is bad?
every time i've had an issue or question, it's been the same sympathetic people helping me out. over email, in plain text.
Then I started to realize most people who complain are rolling their own, which is also not bad, since there are products like k3s that are very simple to use.
It seems things start to fall apart when they try to stuff it with all kinds of crazy idiotic controllers and the flavor-of-the-month CNI and CSI. I always shake my head when I see people building sand castles by setting up stuff like Ceph from within the cluster.
If you want to play with it keep things simple and have all the persistent data outside of the cluster. Use good old NFS instead of the latest longceph horngluster version. Keep databases and the container registry out. Treat it like a compute pool not a virtual datacenter. Stop recursing chickens inside eggs.
> The standard price for a GB of egress from a cloud provider is 10x what you pay racking a server in a normal data center.
From the exe.dev pricing page:
> additional data transfer $0.07/GB/month
So at least on the network price promise they don't seem to deliver, still costs an arm and a leg like your neighbourhood hyperscaler.
Overall service looks interesting, I like simplicity with convenience, something which packet.net deliberately decided not to offer at the time.
if we go back to the principle that modern computers are really fast, SSDs are crazy fast
and we remove the extra cruft of abstractions - software will be easier to develop - and we wouldn't have people shilling 'agents' as a way for faster development.
ultimately the bottleneck is our own thinking.
simple primitives, simpler thinking.
One of my friends was told to come to a sex party that was all male and he is straight. It soured his relationship with the firm so much he ended up winding down the business.
but i know nothing about what the comment says, just answering your question.
Jokes aside:
- k8s is an insane piece of software. The right tool for a big problem. Not for your toys. Yes, it is crazy difficult to set up and manage. Then what?
- the cloud has bad and slow disks? BS. They have perfectly fast NVMe.
Something else? That’s it.
Why am I so confident? I used to set up and manage Kubernetes for 2 years. I have some experience. Do I still use it? Nope. Not the right tool for me. Ansible with some custom Linux tools fits me better.
I also built my own cloud. Or, said less loudly: hosting, to host websites for https://playcode.io. Yes, it is hard and full of compromises. Like networking: yes, I want VMs in any region to communicate with each other. Or disks and reliability. What about snapshots? And many bare-metal renters give only 1 Gbit/s, which is not fine, or they ask way more for a 10 Gbit uplink. So it is easy to end up building something limited and unreliable, or non-scalable.
>One price, no surprises. You get 2 CPUs, 8 GB of RAM, and 25 GB of disk—shared across up to 25 VMs.
This might sound like a good thing compared to the current state of clouds, but what’s better than that is having your own. The other day I got a used OptiPlex for $20; it had a 2 TB HDD, a 256 GB SSD, 16 GB of RAM, and a Core i7. That is a one-time payment, not monthly. You can set up Proxmox, have dozens of LXCs and VMs, and even nest more LXCs inside them: your hardware, physically with you, backed up by you, monitored by you, and accessed only by you. If you have stable internet and electricity, there’s really no excuse not to invest in your own hardware. A small business can invest in that as well, not just a personal one. Go to rackrat.net and grab a used server if you are a business, or a good workstation for personal use.
> That must be worst website ever made.
the level of confidence (this is a second-time founder, after all) to put that on their website gives me confidence that they can make this work
"In some tech circles, that is an unusual statement. (“In this house, we curse computers!”) I get it, computers can be really frustrating. But I like computers. I always have. It is really fun getting computers to do things. Painful, sure, but the results are worth it. Small microcontrollers are fun, desktops are fun, phones are fun, and servers are fun, whether racked in your basement or in a data center across the world. I like them all."
The reality: Everyone reading his blog or this HN entry loves computers.
- I'm building a server farm in my homelab.
- I'm doing a small startup to see if this idea works.
- We're taking on AWS by being more cost effective. Funding secured.
I like the way you can tell it what you want and it makes it. Very cool.
Perhaps the VM idea is old. The unit is a worker encapsulated in some deployable container.
In the world of Cloudflare Workers, and especially Durable Objects, you are guaranteed to have exactly one instance running anywhere in the world, with a tightly bound database.
The way I think of apps has changed.
My take is devs want a way to say “run this code, persist this info, microsecond latency, never go down, scale within this $ budget”
It’s crazy how good a deal $5/mo cloudflare standard plan is.
Obviously many startups raise millions and they gotta spend millions.
However the new age of scale to zero, wake up in millisecond, process the request and go back to sleep is a new paradigm.
Vs old school of over provision for max capacity you will ever need.
Google has a similar scale-to-zero container story, but their cold startup time is in seconds. Too slow.
And what does it have to do with the "cloud"? Cloud means one uses cloud-provided services (security, queues, managed databases, etc.), and that's their selling point. exe.dev is a bare server where I can install what I want, which is fine, but this is not a cloud and, frankly speaking, nothing new.
Is there a name for this style of writing? I come across it regularly.
I'd describe it as forcefully modest, "I'm just a simple guy" kind of thing. With a dash of "still a child on the inside". I always picture it as if the guy from the King of Queens meme wrote it.
"I guess I'm just really into books, heh" - Bezos (obviously non-real, hypothetical quote, meant to illustrate the concept)
This style is also very prevalent in Twitter bios.
Since it's a "literary" style that is quite common, I'm sure it has been characterized and named.
GPT says it's "aw-shucks", but I think that's a different thing.
If you want to run a website in the cloud, you start with an API, right? A CRUD API with commands like "make me a VPC with subnet 1.2.3.4/24", "make me a VM with 2GB RAM and 1 vCPU", "allow tcp port 80 and 443 to my VM", etc. Over time you create and change more things; things work, everybody's happy. At some point, one of the things changes, and now the website is broken. You could use Terraform or Ansible to try to fix this, by first creating all the configs to hopefully be in the right state, then re-running the IaC to re-apply the right set of parameters. But your website is already down and you don't really want to maintain a complex config and tool.
You can't avoid this problem because the cloud's design is bad. The CRUD method works at first to get things going. But eventually VMs stop, things get deleted, parameters of resources get changed. K8s was (partly) made to address this, with a declarative config and server which constantly "fixes" the resources back to the declared state. But K8s is hell because it uses a million abstractions to do a simple thing: ensure my stuff stays working. I should be able to point and click to set it up, and the cloud should remember it. Then if I try to change something like the security group, it should error saying "my dude, if you remove port 443 from the security group, your website will go down". Of course the cloud can't really know what will break what, unless the user defines their application's architecture. So the cloud should let the user define that architecture, have a server component that keeps ensuring everything's there and works, and stops people from footgunning themselves.
Everything that affects the user is a distributed system with mutable state. When that state changes, it can break something. So the system should continuously manage itself to fix issues that could break it. Part of that requires tracking dependencies, with guardrails to determine if a change might break something. Another part requires versioning the changes, so the user (or system) can easily roll back the whole system state to before it broke. This abstraction is complicated, but it's a solution to a complex problem: keeping the system working.
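The "server that constantly fixes resources back to the declared state" idea described above is small at its core. A minimal sketch; the resource names and their shapes are invented for illustration, and a real controller would diff against a cloud API rather than a dict:

```python
# The user's declared architecture: what should always exist.
declared = {"firewall": {"ports": [80, 443]}, "vm": {"ram_gb": 2}}

def reconcile(actual, declared):
    """Mutate `actual` back to `declared`; return the actions taken."""
    actions = []
    for name, want in declared.items():
        if actual.get(name) != want:
            actions.append(("fix", name, want))
            actual[name] = want          # re-apply the declared spec
    for name in set(actual) - set(declared):
        actions.append(("delete", name))
        del actual[name]                 # drift: resource nobody declared
    return actions

# Someone removed port 443 by hand; one loop iteration puts it back.
drifted = {"firewall": {"ports": [80]}, "vm": {"ram_gb": 2}}
print(reconcile(drifted, declared))  # → [('fix', 'firewall', {'ports': [80, 443]})]
print(drifted == declared)           # → True
```

The dependency tracking and versioned rollbacks described above would sit on top of this loop; the loop itself is the easy part, which is why it is frustrating that clouds don't ship it.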
No cloud deals with this because it's too hard. But your cloud is extremely simple, so it might work. Ideally, every resource in your cloud (exe.dev) should work this way. From your team membership settings, to whether a proxy is public, the state of your VM, your DNS settings, the ssh keys allowed, email settings, http proxy integration / repo integration settings / their attachments, VM tags & disk sizes, etc. Over time your system will add more pieces and get more complex, to the point that implementing these system protections will be too complex and you won't even consider it. But your system is small right now, so you might be able to get it working. The end result should be less pain for the user because the system protects them from pain (fixing broken things, preventing breaking things), and more money for you because people like systems that don't break. But it's also possible nobody cares about this stuff until the system gets really big, so maybe your users won't care. It would be nice to have a cloud that fixes this tho.
> $160/month
> 50 VM
> 25 GB disk+
> 100 GB data transfer+
100 GB/mo is <1 Mbps sustained
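For the record, the figure checks out; the back-of-envelope arithmetic, assuming a 30-day month and decimal gigabytes:

```python
# 100 GB/month spread evenly over the month, in megabits per second.
bits = 100e9 * 8          # 100 decimal GB, in bits
seconds = 30 * 24 * 3600  # one 30-day month
mbps = bits / seconds / 1e6
print(round(mbps, 2))  # → 0.31
```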
lmao

These are nice declarative statements, but they have almost no meaningful substance.
> Setup scripts have a maximum size. Use indirection.

What's the maximum size?

> Shelley is a coding agent. It is web-based, works on mobile.

Cool model bro. Any details you want to share?
> $20 a month
2025 or 2005, what's the difference?
For that money I can get 5 big bare metal boxes on OVH with fast SSDs, put k0s on them, fast deploy with kluctl, cloudflare tunnels for egress. Backups to a cheap S3 bucket somewhere. I'll never look at another cloud provider.