It’s so cheap[1] to start and stop servers on demand that I’ve decided to give “away” servers for free. I wrote a little proxy in Go that detects Minecraft login requests and starts a server with the requested world. After a dropped connection, I stop it.
[1] For 15€/month you can have ~30 servers running in parallel and thousands of powered down worlds. https://contabo.com/en/vps/
1) Initial world creation is quite slow: 10-15 seconds on moderately powerful hardware. Since I want first joins to be as fast as possible, I keep 1000 pre-generated worlds, and one of them is chosen at random as the template on your first login.
2) In addition to login packets, Minecraft clients send a ping packet to check whether the server is online. I forge a valid response because I don't want to start a server just so you can see "server is up, 0 players online".
You could take this a step further by "decaying" the bases in some way (remove torches, remove some large percent of the items in chests, add vines, weather rock, move blocks from the ceiling to the floor, etc)
But that's my skepticism talking.
As for the skins - servers run in "offline mode", which means no communication with the Microsoft authentication API responsible for validating accounts and (I believe) giving skins to players.
So 16GB of RAM should be enough for 20-60 servers.
Then there's an audit, you're found non-compliant, and now they own your house.
Oracle, 2021.
Kids played for a couple of hours last night without issue.
https://hitrov.medium.com/resolving-oracle-cloud-out-of-capa...
Yes, there is no point to this, but if it's free... ?
Here's the Anandtech review of the Ampere Altra, which is what Oracle is serving these VMs from: https://www.anandtech.com/show/16315/the-ampere-altra-review...
The TLDR:
"The Altra’s strengths lie in compute-bound workloads where having 25% more cores is an advantage. The Neoverse-N1 cores clocked at 3.3GHz can more than match the per-core performance of Zen2 inside the EPYC CPUs.
There are still workloads in which the Altra doesn’t do as well: anything that puts higher cache pressure on the cores will heavily favour the EPYC, as while 1MB of L2 per core is nice to have, 32MB of L3 shared among 80 cores isn’t very much cache to go around."
I'm pretty sure the server GCP offers in its Always Free tier only has 0.5GB of RAM.
Opinion: They're losing badly in the Cloud Wars and need to scrape together some sort of customer base in any way they can, even if it means burning money.
I had never heard of this architecture before; a pretty creative way of doing Heroku-like scale-to-zero at nearly no cost on AWS.
> Fargate launches two containers, Minecraft and a watchdog
I'd love to see a cost analysis between running the "watchdog" as a Fargate container versus another lambda function. Even having a lambda function run once every 5 minutes 24/7 would only trigger ~9,000 invocations a month, which is in the realm of "near free".
If there were some way to trigger the scale-down event from there, it would reduce the expensive part of this setup (Fargate) even further. Though, granted, given both containers are packed into the same Fargate task, it would really only mean freeing up some additional resources for the Minecraft server.
It looks like the watchdog is simply checking for connections on a port, which is probably too low-level to handle with lambda. But an architecture like this could work in a ton of services, and if you had, e.g., an ALB set up in front of them, you could use the lambda to scan incoming request metrics and scale down based on those.
Not at all. You could easily check that with any of Lambda's supported languages.
The person who set this up got an amazing education in the use of real-world AWS services.
A lot of IT people aren't aware that things like this exist. They think moving to the cloud means sending all your virtual servers to your provider of choice and running them 24x7 like you did on-prem. In my opinion, it's more about architecting solutions so that resources pop into existence for the exact number of milliseconds they're needed and are then released. This is a clever step along that path.
The vast majority of use cases are better off with variable resource availability. Unless you're doing something akin to mining cryptocurrency 24x7x365 most workloads are variable to some degree.
So maybe instead of one giant server that processes requests you use a single small server that is available 24x7x365. Then if your workload increases at 8 am you use an autoscaling group to spin up 3 more. Then at 5 pm it goes back down to 1. And maybe you have a batch process that kicks off at 2 am every night so you spin up 4 servers to process requests. This is just one example so it's important not to focus on it and respond with, "Well what about x!" AWS has many ways to fulfill the promise of accomplishing tasks with minimal resources.
And all of this is just a step on the path to serverless computing with things like Lambda and DynamoDB or serverless RDS.
https://observablehq.com/@tomlarkworthy/minecraft-servers-be
It never really took off, so I mothballed it. However, I do use it at home for our personal server, and it has saved me a ton of money! It makes perfect sense, since you can have quite a good-spec machine when you're paying by the hour. You just disconnect the disk from the VM and pay for disk storage, which is very cheap.
It was based on the following Terraform recipe (which I wrote):
https://github.com/futurice/terraform-examples/blob/master/g...
Edit: Just saw that the GitHub includes a link to an AWS calculator. Looks like a month of continuous usage caps out at $40-ish. Not too bad, since my realistic worst case is probably more like 8 hours per day rather than the full 24.
They do but, afaik, there's no "spin up and down" ones that charge you for usage; they're all "$X per month" fixed cost.
(Although looking at the costs these days, they're not that much higher than this would cost you for even a medium-sized world.)
This is PHENOMENALLY DOCUMENTED. I am thoroughly impressed, @doctorray. Clear and easy to follow walkthrough and explanation of how it works, amazing troubleshooting tips, suggestions for managing it... This is an exemplar of a well-made README for a service. Bravo!
Do you just stop playing when it happens?
You just turn that message into an in-game countdown.
I always wanted to go after an auto-switch style system but never got that far.
https://www.shogan.co.uk/gaming/cheap-minecraft-server-in-aw...
Wondering if services like Google or Shodan may have tried querying it, causing your server to turn on?
In the 2 months I've been using this method before deciding to write it all down, I've not run into any issues with anyone else or any bots triggering the container to start, at least not yet...
I just wish there weren't so many steps to get this kind of thing running! Even with automation it's still a LOT - getting this running myself would take me a few hours, and I have prior relevant experience.
A regular non-software-industry-professional parent has little chance.
I really wish there were better ways to make AWS stuff like this available for people to use without requiring them to have deep knowledge of how to work with different aspects of AWS.
I wish AWS would provide some kind of interface where I could redirect a regular human being to easy-deploy.aws.com/?cloudformation=url-to-my-cloud-formation and they would be presented with a human-readable form that tells them what it will do, lets them set a hard limit on how much money it will be able to burn through (for protection against crypto-currency mining scams), and lets them enter their credit card details and click "Deploy" to start using it.
But in the background, it's run on a set of Amazon services. You don't have to rent a specific server for a given time period, like monthly server rental.
You just use Amazon's on-demand services (that use whichever server resources are required at the time).
Considering all the hours I spent looking for ways to do exactly this when I was 12-15... I don't doubt I would've gone through all the trouble and even learned some AWS along the way.
Back in those days the only way I could get a free server was by hosting a phpBB forum on 000webhost and somehow convincing a VPS provider to "sponsor our forum". They'd get a massive banner ad and I'd get a free server to play around with. The good days!
But the difference between a couple bucks a month and $5 once you actually have the ability to pay for stuff online does seem pretty negligible.
The cheap VPSes absolutely do not allow you to pin the CPU at 100% usage for a significant amount of time, since that messes up the provisioning. A Minecraft server will definitely pin the CPU at 100%.
What happens is that your process will be killed repeatedly.
A $5 VPS is great for simple site hosting and a small amount of CPU workload. They do not work at all for any type of game server.
>As long as you don’t go to 100% CPU usage for a long period of time, everything will be okay. DigitalOcean are doing proactive monitoring and will see if your droplet is having 100% CPU usage all the time, and may limit the CPU capacity of droplets displaying this behavior. Since each droplet shares physical hardware with other droplets, constant 100% CPU use degrades the service quality for other users on the same node.
Note that a game server will go to 100%. It will be killed.
https://www.digitalocean.com/community/questions/cpu-usage-l...
What you describe has never happened to me. Have I just been lucky?
I did this with a Minecraft plugin that would schedule a systemd shutdown in 30 minutes when the last player disconnects, and cancel the shutdown if a player connects.
Then a simple webpage that sent an EC2 API request to power on the instance, and a simple plugin that sends a Telegram message when the server is ready for connections.
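The shutdown half of that is tiny if the plugin just shells out to systemd. Here's a sketch of the two hooks, assuming the Minecraft user may run `shutdown` via sudo (the hook wiring is hypothetical, not the plugin's real API):

```go
package main

import (
	"fmt"
	"os/exec"
)

// scheduleShutdown builds the command a last-player-left hook would run:
// power the instance off in `minutes` minutes unless cancelled first.
func scheduleShutdown(minutes int) *exec.Cmd {
	return exec.Command("sudo", "shutdown", fmt.Sprintf("+%d", minutes))
}

// cancelShutdown builds the command a player-joined hook would run to
// cancel a pending shutdown.
func cancelShutdown() *exec.Cmd {
	return exec.Command("sudo", "shutdown", "-c")
}

func main() {
	// In the plugin: onLastPlayerQuit -> scheduleShutdown(30).Run()
	//                onPlayerJoin     -> cancelShutdown().Run()
	fmt.Println(scheduleShutdown(30).Args)
}
```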
You send the EC2 API request directly from a public facing website?
This qualifies as "serverless" now?
A lot less chance of me spending $$ that way.
Overall I personally prefer a VPS or dedicated server but I don't think comparing it like you are is 100% fair.
I don't even bother with playit.gg - just forward a couple ports on the router and pass out my ip. Only time my dynamic ip changes is when I lose power, and if I've lost power the server is down for "Maintenance" anyways.
Concerned about cost overruns?
Set up a Billing Alert! You can get an email if your bill exceeds a certain amount. Set it at $5 maybe?
It's 2021 and the biggest cloud platforms still don't have hard limits on spending. However, I suspect the reason this doesn't exist is twofold. First, it's bad business: all the cloud providers make a lot of money from mistakes and small things sapping cash. Second, it's hard to rationalise what to do when the budget runs out. What do they nuke?
If threshold (x) hit then do:
- Email me
- Stop Servers XYZ
- Leave Servers ABC running.
If threshold (y) hit then do:
- Email me / Call me
- Shut everything down.
So switch off all VMs, but don’t delete the disks. Disable S3 read/write, but don’t delete the data. Etc…
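That two-threshold policy is trivial to express in code; a toy Go sketch of what such a budget engine could evaluate (entirely hypothetical, since AWS offers nothing like it):

```go
package main

import "fmt"

// actionsFor maps month-to-date spend onto the escalation steps described
// above: the soft limit stops the expendable servers, the hard limit
// suspends everything while keeping disks and data intact.
func actionsFor(spend, softLimit, hardLimit float64) []string {
	switch {
	case spend >= hardLimit:
		return []string{"email + call owner", "stop all VMs (keep disks)", "disable S3 read/write (keep data)"}
	case spend >= softLimit:
		return []string{"email owner", "stop servers XYZ", "leave servers ABC running"}
	default:
		return nil
	}
}

func main() {
	fmt.Println(actionsFor(120, 100, 500)) // soft threshold crossed
	fmt.Println(actionsFor(600, 100, 500)) // hard threshold crossed
}
```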
Doing nothing is generally better from a legal liability point of view. The customer should be liable for turning services on and off.
We hear about this all the time from AWS customers, and it's a large reason why people connect their account to Vantage, which will help alert you if costs change intra-month. The first $2,500 in AWS costs per month are tracked for free, so I thought I'd mention this here as potentially helpful to the community.
If you don't want to remember to set up billing alerts, we provide basically a turn-key experience around this that takes less than a few minutes to set up: http://vantage.sh/
To everyone claiming "ohhh that's illegal/unethical" I say to you: take it in your favor for once. For every 100 clients AWS bills unexpectedly, with no controls in place to mitigate it, you can be the 1 who gets a free month of service. They will not pursue you for $5. Imagine making the welfare argument on behalf of a company that is worth a trillion dollars.
The rationale against doing this is as much practical as it is moral --- unless you're just doing this once for a single month and don't care if your account gets banned. AWS isn't like an auto-renewing subscription, where if the card declines, your service is cut off. They won't charge the card with a $5 limit until the end of the billing period. If you rack up more than $5 in charges in a billing period, you will be in debt to Amazon. They will certainly ban your account, so you'd have to make a new throwaway account with a new disposable CC each month.
You are not delivering some sort of poetic justice; you are just showing your lack of self-preservation instinct. For your own welfare, just don't poke the bear. You don't want to get blacklisted for doing some dumb crap that will come back to bite you someday.
There are enough stories running around of people getting their job accounts banned by association for pulling idiotic stunts like these, and we don't know what crap Amazon will be running in the future.
AWS bills work a lot like postpaid phone bills. When you use the service you agree to pay the bill for usage.
Your suggestion is kind of like saying “If your card declines you don’t have to pay for your meal.” Not really true.
In my experience AWS support has been good about reversing accidental/fraudulent usage charges and helping to prevent them in the future.
I was thinking it would be useful if Organizations could pre-authorize users up to $X before preventing them from doing more. Of course, the better solution is to manage releases through a pipeline that checks what gets created, does code scanning, and... whatever.
In the end, we use cost monitoring, but no AWS billing alerts.
I think it is.
A few jobs ago, the boss of my boss got fired for a cloud service overage. Not a huge amount; the number on the grapevine was around $10,000. But it was enough.
For many (numerically "most," probably) companies, the IT department is a black box to upper management, and any unexpected budget overages are a serious problem.
People do ask for alerting and monitoring but that's not a hard stop.
Then you get into complex issues with things like S3 and EBS: as long as there is data, you keep paying, so what do you do? Have a hard limit that isn't really hard because it doesn't cover them? Delete people's data?
The real reason is that if you give companies a budget feature, they will inevitably, you know, use it. They'll set a budget that seems 'reasonable', and then freak out when everything turns off when it's exceeded, and then go raise it a little bit, and repeat the cycle.
Compare that to now, where every place I've ever worked basically seems to forget that cloud hosting costs even exist, based on how much most companies balk at paying for simple SaaS tools for developers but will happily let hosting costs grow to astronomical amounts. They're happy to do it because they just see a line item and accept it. If you give them budgets, that won't happen any more.
https://docs.aws.amazon.com/cli/latest/reference/ce/get-cost...
I prefer to be billed whatever it costs but have my service up all the time.
It's just not that relevant ...
https://docs.microsoft.com/en-us/azure/azure-resource-manage...
Look... just be happy that they made it PAINFULLY obvious when you make S3 buckets public
Those guys in finance know there are people who will pay any bill.
When I was younger I would just pay even for mistakes, because I was concerned about my credit score.
AWS is for businesses and hard limits on spending is a liability for their pricing structure. Imagine you run a small business built on AWS and you hit your limit -- you're basically asking AWS to dismantle your business. They'd have to null-route traffic directed to you, shut down your servers, delete your data, de-allocate your IP addresses, etc. Your business won't be any better off than if you went bankrupt from a huge AWS bill.