We chose Nomad there because it's a business requirement to be able to self-host from an empty building due to the data we process - that's scary with K8s. And K8s is essentially too much for the company. It's like 8 steps into the future from where most teams in product development and operations are. Some teams haven't even arrived at the problems Ansible solves, never mind K8s.
The HashiCorp stack with Consul/Nomad/Vault/Terraform allows us to split this into 4-8 smaller changes and iterations, which lets everyone adjust and stay on board (though each change is still massive for some teams). This results in buy-in even from the most skeptical operations team, who are now rolling out Vault because it's secure and solves their problems.
Something that really impressed me overall: one of our development teams has a PoC using Nomad to schedule and run a 20-year-old application with Windows domain requirements and C++ bindings to the COM API. Sure, it's not pretty, and it's not ready to fail over - Nomad mostly copies the binaries and starts the system on a prepared, domain-joined Windows host... but still, that's impressive. And it brings minor update times down from days or weeks to minutes.
Being able to handle that workhorse on the one hand, and flexibly handle container deployments for our new systems in the very same orchestrator on the other, is highly impressive.
Let me know if you have any questions. I plan to write an article/kind-of-tutorial about the setup.
EDIT: Downvoters, I'm really curious what you're objecting to above, specifically.
Bare metal Nomad - use it with Consul and hook up Traefik with the Consul backend. This would be the simplest, most "zero conf" way to go.
I've used this setup for a few years of heavy production use (e-commerce & 50 devs).
As Consul presents SRV records, you can hook up an LB using those, or use Nomad/Consul templating to configure one.
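To make that concrete (assuming default ports and a hypothetical service registered as "web"), you can see the SRV records straight off Consul's DNS interface:

```sh
# Consul serves DNS on port 8600 by default; any SRV-aware LB can consume this
dig @127.0.0.1 -p 8600 web.service.consul SRV
```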
Service mesh with mTLS is actually rather approachable, and we've deployed it on selected services where we need to track access and have stricter security. (This however had us move off Traefik and on to nginx + OpenResty.)
Now if you want secrets management on steroids you'll want Vault. It's really in many ways at the heart of things. It raises complexity, but the things that you can do with the Nomad/Consul/Vault stack are fantastic.
Currently we use Vault for SSL PKI, secrets management for services & CI/CD, and SSH cert PKI.
These things really form a coherent whole, and each component is useful on its own.
Compared to k8s it’s a much more versatile stack although not as much of a “framework” and more like individual “libs”.
I always come back to the description: “more in line with the unix philosophy”.
In a mixed environment where you have some legacy and/or servers to manage I think using the hashicorp stack is a no brainer - consul and vault are tools I wouldn’t want to be without.
> load balancer provider
consul connect handles this, how you get traffic to the ingresses is still DIY... kinda. you can also use consul catalog + traefik (I've actually put in some PRs myself to make traefik work with a really huge consul catalog so you can scale it to fronting thousands of services at once). there's also fabio. you can also get bgp ip injection with consul via https://github.com/mayuresh82/gocast run as a system job to get traffic to any LB (or any workload) if that's an option.
i've also run haproxy and openresty without any problems, getting stuff from the consul catalog via nomad's template stanza and just signaling them on catalog changes.
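as a sketch of the shape of that (service name, paths, and the haproxy fragment are illustrative, not our production config):

```hcl
task "haproxy" {
  driver = "exec"

  config {
    command = "/usr/sbin/haproxy"
    args    = ["-W", "-f", "local/haproxy.cfg"]  # -W = master-worker mode
  }

  # re-render the backend list from the consul catalog, signal on change
  template {
    destination   = "local/haproxy.cfg"
    change_mode   = "signal"
    change_signal = "SIGUSR2"  # haproxy master-worker reload; nginx takes SIGHUP
    data          = <<EOT
backend web
{{- range service "web" }}
  server {{ .Node }} {{ .Address }}:{{ .Port }} check
{{- end }}
EOT
  }
}
```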
> storage provider
anything CSI that doesn't have a 100% reliance on k8s works. if you're also just running docker underneath you can use anything compatible with docker volumes, like Portworx.
> ingress controller
consul connect ingress! or traefik, both kinda serve double duty here.
> external DNS
no good story here -- with one exception, if by "external" you mean "in the same DC but not the same host," consul provides a full DNS interface that we get a lot of mileage out of.
if you're managing everything with terraform though, there's no reason you can't tie tf applies to route53/ns1/dyn or anything else!
> monitoring
open up consul/nomad's prometheus settings and schedule vmagent on each node as a system job to scrape and dump somewhere. :)
we also use/have used/will use telegraf in some situations -- victoriametrics outright accepts influx protocol so you can do telegraf/vector => victoriametrics if you want to do that instead.
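if it helps, here's roughly what the vmagent path looks like (image, datacenter, and remote-write endpoint are illustrative). first turn on prometheus metrics on the agents:

```hcl
# nomad agent config
telemetry {
  prometheus_metrics         = true
  publish_allocation_metrics = true
  publish_node_metrics       = true
}
```

then a system job puts one vmagent on every node, like a DaemonSet:

```hcl
job "vmagent" {
  datacenters = ["dc1"]
  type        = "system"  # runs on every client node

  group "scrape" {
    task "vmagent" {
      driver = "docker"

      config {
        image        = "victoriametrics/vmagent"
        network_mode = "host"  # so it can scrape the local agent
        args = [
          "-promscrape.config=/local/scrape.yml",
          "-remoteWrite.url=http://metrics.internal:8428/api/v1/write",
        ]
      }

      # minimal scrape config for the local nomad agent
      template {
        destination = "local/scrape.yml"
        data        = <<EOT
scrape_configs:
  - job_name: nomad
    metrics_path: /v1/metrics
    params: { format: ["prometheus"] }
    static_configs:
      - targets: ["localhost:4646"]
EOT
      }
    }
  }
}
```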
> secret encryption
this is all vault. don't be afraid of vault! vault is probably hashicorp's best product and it seems heavy but it's really not.
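to show how little ceremony it is once wired up, a minimal sketch (policy name and secret path are made up) of a task pulling a vault secret into its environment:

```hcl
task "app" {
  driver = "docker"

  config {
    image = "example/app:latest"
  }

  # nomad fetches a vault token scoped to this policy for the task
  vault {
    policies = ["app-read"]
  }

  # render the secret and export it as env vars;
  # the app itself never talks to vault
  template {
    destination = "secrets/app.env"
    env         = true
    data        = <<EOT
DB_PASSWORD={{ with secret "secret/data/app" }}{{ .Data.data.db_password }}{{ end }}
EOT
  }
}
```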
there's a lot here that doesn't really compare at all, like the exec/raw_exec drivers. we use those today to run some exotic workloads that do really poorly in containers or that have special networking needs that can map into containers but require a lot of extra operational effort, e.g.: glb-director and haproxy running GUE tunnels.
something interesting about the above is that i'm testing putting those workloads in the same network namespace, so you can have containerized and non-containerized workloads sharing local networking across different task runners.
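roughly what i mean (drivers and binaries are illustrative): a group-level bridge network puts every task in the group -- docker and exec alike -- into one shared network namespace, so they can talk over localhost:

```hcl
group "mixed" {
  network {
    mode = "bridge"  # one netns for the whole allocation
  }

  task "proxy" {
    driver = "docker"
    config {
      image = "haproxy:2.3"
    }
  }

  task "director" {
    driver = "exec"
    config {
      command = "/usr/local/bin/glb-director"  # illustrative path
    }
  }
}
```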
Consul is nice and easy to use.
Nomad has been a painful experience: the default UI is confusing (people have accidentally killed live containers), and we have some small bits and pieces that don't quite behave as we expect, with no idea how to fix them. The error rate is too low to care about and there are more pressing issues, so likely WONTFIX. We often found ourselves digging through GitHub issues for edge cases, or over-allocating resources to overcome scheduling problems.
We considered switching to their paid offering, just to not have to worry about this.
It kind of feels like that's their business model: attract engineers with OSS software and then upsell the paid version without all the warts.
Please post it here when you do. :)
I'm in for an industrywide rename :D.
Bear Metal Semiconductor. Bear Metal Fabrication. Bear Metal Labs. You could get so creative with the branding and logo.
One of the points I'd highlight in this post is just how good the combination of nomad, consul, and consul-template is. Even when nomad lacks some sort of first-class integration that k8s might have, the combination of being able to dynamically generate configuration files using consul-template plus nomad automatically populating the consul catalog means that you can do almost anything, and without much hassle. I use consul-template in the homelab to dynamically wrangle nginx, iptables, and dnsmasq, and it continues to work well years after I initially set it up.
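For a flavor of it, here's the shape of the nginx piece (the service name "web" and paths are placeholders, not my exact setup):

```
# web.ctmpl -- consul-template regenerates the upstream from the catalog
upstream web {
{{- range service "web" }}
  server {{ .Address }}:{{ .Port }};
{{- end }}
}
```

Run with something like `consul-template -template "web.ctmpl:/etc/nginx/conf.d/web.conf:nginx -s reload"` and nginx reloads itself whenever instances come and go.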
I often wish I had the luxury of relying on vault+consul+nomad in all the environments I operate in.
If you integrate properly with them (which does take quite a bit of work with the ACL policies and certs), it really starts shining. With terraform, of course.
For these core services themselves and other OS packages, I use ansible, mostly because of the huge resources in the community.
It's fun and doesn't come with all the cognitive overhead of k8s. I'm a fan and will tell everyone they should consider Nomad.
It's obviously less mature, though. One thing that has been frustrating for a while is the networking configuration - a simple thing like controlling which network interface a service should bind to (internal or external?) was supposedly introduced in 0.12 but completely broken until 1.0.2 (the current version is 1.0.3).
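For reference, the feature in question is named host networks; now that it works, it looks something like this (CIDRs and names are illustrative):

```hcl
# client config: name the interfaces by CIDR
client {
  host_network "internal" {
    cidr = "10.0.0.0/8"
  }
  host_network "public" {
    cidr = "203.0.113.0/24"
  }
}

# job spec: pin a port to one of the named networks
network {
  port "http" {
    static       = 8080
    host_network = "internal"
  }
}
```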
Consul Connect is really awesome conceptually to make a service mesh, but is also just coming together.
There are really only two things I miss dearly now:
1) Exposing consul-connect services made by Nomad (aka an ingress gateway). It seems to be theoretically doable, but requires manual configuration of Connect and Envoy. If you want to expose a service from the mesh through e.g. an HTTP load balancer, you need to either expose it raw (losing the security benefits) or manually plumb it (no load balancer seems to play nicely with Connect without a lot of work, yet).
2) Recognize that UDP is a protocol people actually use in 2021. This is a critique of the whole industry.
Two bastion hosts/lbs sharing a virtual IP (keepalived), with two Traefik instances each (private and public). I actually schedule them through Nomad (on the host network interfaces) as well - since they solved the host networking issue I mentioned above it's properly set up with service checks. Super smooth to add and change services with consul catalog, and ACME-TLS included.
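The keepalived side is tiny -- something like this (VIP, interface, and priority are illustrative; the second bastion runs the same block with state BACKUP and a lower priority):

```
vrrp_instance VI_PUBLIC {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        203.0.113.10/24
    }
}
```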
Things I don't like that make me want to try envoy instead:
* It eats A LOT of CPU once the number of peers ramps up - there's an issue on their GitHub suggesting this was introduced in 2.1.
* UDP is completely broken. For the time being I'm doing per-job nginxes for that until I have a better solution.
* It's very brittle. Ever misspell something in a label? If so, you probably also wondered why half of your services stopped working as Traefik arbitrarily let that one hijack everything.
* The thing I mentioned above with Consul Connect. Each service can integrate with either, but not both.
It was great for months, but I guess I grew out of it just by the time I started properly understanding how all the configuration actually works (:
Kubernetes is exceedingly complex, and it needs to be to support all its use cases. However, I would claim that the vast majority of small projects running on Kubernetes are just after one thing: easier deployments.
These users would be just as well served, if not better, by Nomad. It eliminates much of the complexity of running a Kubernetes cluster, but it still utilizes hardware better and you get a convenient way to deploy your code.
I know there's Dokku, https://flynn.io/ looked super promising but I think it's basically dead now, same for Deis that is dead and forked to https://web.teamhephy.com/.
Just deployed a small 3-node cluster in prod last week for this: run some binaries with some parameters, make sure they restart on failure, have them move to another node if one fails or is rebooted, and don't waste resources with a classic active/passive 2-node setup that doesn't scale either.
It took me a couple of days to read the documentation, which is good but not always up to date (I did have to dig into some GitHub issues in one case), create a test setup, and check failure scenarios. Gotta say I'm mostly impressed.
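For the curious, the whole thing boils down to a job file of roughly this shape (binary and sizes are placeholders): a service job using the exec driver, with restarts on failure and rescheduling if a node goes away:

```hcl
job "worker" {
  datacenters = ["dc1"]
  type        = "service"

  group "app" {
    count = 3

    # move allocations to another node if this one fails or reboots
    reschedule {
      unlimited      = true
      delay          = "30s"
      delay_function = "exponential"
      max_delay      = "10m"
    }

    task "worker" {
      driver = "exec"

      config {
        command = "/usr/local/bin/worker"
        args    = ["--listen", ":8080"]
      }

      resources {
        cpu    = 200  # MHz
        memory = 128  # MB
      }
    }
  }
}
```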
I understand JSON as a simple interchange format between systems, and it's here to stay, but I don't understand all this YAML stuff, with all its quirks, from the K8s/DevOps people, when we have the much nicer HCL...
For anyone not used to HCL: https://github.com/hashicorp/hcl
HCL is more information-dense.
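Compare a made-up fragment in both notations:

```hcl
# the JSON equivalent:
#   {"task": {"web": {"driver": "docker",
#     "config": {"image": "nginx:1.19", "ports": ["http"]}}}}
task "web" {
  driver = "docker"
  config {
    image = "nginx:1.19"
    ports = ["http"]
  }
}
```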
Loops and conditionals in HCL could still use some real work though. They are still clunky to work with.
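For example, what should be a simple loop ends up as a for_each inside a dynamic block (this is Terraform's flavor of HCL; resource and values are illustrative):

```hcl
variable "ports" {
  default = [80, 443]
}

resource "aws_security_group" "web" {
  name = "web"

  # one ingress block per port -- works, but hardly elegant
  dynamic "ingress" {
    for_each = var.ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}
```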
If you mean a standard configuration file format, that has almost never happened in the entire history of computing. There are standard data formats, sure, but to standardize a configuration file, all the applications need to be limited to the functionality expressed in a single config file format. Most applications hate that, because they want to add infinite features, and none want to wait to standardize some new version of a config file in order to release their features. (If they release before the config file changes happen, now you have a vendor-specific extension, which means waving goodbye to standardization)
The following are the only configuration file formats I am aware of that have been standardized:
- bind zone configuration
- inittab
Crontab isn't standard because different crons have different features. passwd, group, shadow, etc. aren't standard configurations because they are actually databases. And anything else which goes over the wire instead of a file is a protocol, not a configuration.

Now, it's still not a bad idea to have standardization. The trick is to make it so they can change their configuration, but still be compatible with each other, and the general way to do that is to abstract the configuration elements via schemas. That way you can have the same basic functionality for each app, but they can define it however they want so that it can be re-mapped or interpreted by a different program.
However, that is in no way simple for a regular user to use. So to convince app developers to adopt a "standard configuration format", you need to reframe the value proposition. What's the value add? If you say it's just so a user can use one config file on many competing products, the developers won't care, because they want you to use their product, not be able to move to another one. If instead you reframe it as "an extensible language for integrating and composing multiple independent pieces of software/functionality into one distributed system", then their ears might perk up a bit. Basically, reformulate the proposed solution as a language to compose programs, like Unix pipes, but slightly more verbose/abstracted. The end result should be able to be read by any program supporting the config format, and "declaratively" (modern programmers love this ridiculous cargo cult) compose and execute distributed jobs across any system.
The auto-complete will give you the top-level command and that's it.
I haven't looked too hard because I got fed up, but I just want to dump the entire config so I can replicate it without having to remember each thing I configured.
While the Web UI is probably the best vault explorer available, you might want to take a look at Vaku[1].
[1]: https://github.com/lingrino/vaku/blob/main/docs/cli/vaku.md#...
This makes it a lot easier to get a 1000ft view of the configuration.
My best experience writing config files has been with Aurora config files for mesos deployments in pystachio-Python.
Once you’re used to using a real programming language to write/generate config, you never want to go back to YML.
I really like Dhall as a configuration language and use it across my whole system, so it's a shame I'm forced to use HCL or write a dhall backend for it.
I'm going to post the obligatory XKCD comic because your comment is exactly what this was created for: https://xkcd.com/927/
Would be interesting to see how this applies to the author's use of Nomad. It's easy to shit on Kubernetes because of its complexity, but this article seems to be comparing the Nomad Day 0 experience with the Kubernetes Day N experience.
I'm firmly of the opinion that you don't need much more than systemd (or equivalent) + SSH for your home server.
Happy to try to answer specific questions.
Genuinely curious about the load of managing this on the infrastructure team.
Choosing k8s is just the first step - you have to deal with what 'distribution' of k8s to use, upgrades, pages of yaml, secrets stored in plaintext by default...
Once you've got Nomad running, it just works.
I help people move to Nomad, my email is in my profile if you want to chat :)
Of course, not for a home lab :)
Not really; Podman and containerd are two different technologies, although both allow you to move away from Docker for various reasons (smaller CPU and memory footprint, better security, etc.). If you are invested in the Red Hat container stack, Podman makes more sense. However, containerd is more universal.
K8s is already moving away from Docker, and directly to containerd. Most recently they deprecated dockershim, and users now need to switch to containerd (since Docker also uses containerd under the hood, and it doesn't make sense for the orchestration system to run a monolithic service like Docker when it just needs to launch the workloads).
Some reference links on k8s, or PaaS built on top of k8s, moving to containerd:
- https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-...
- AWS Fargate: https://aws.amazon.com/blogs/containers/under-the-hood-farga...
- Azure kubernetes service (AKS): https://docs.microsoft.com/en-us/azure/aks/cluster-configura...
This driver is similar to what CRI-containerd is doing in kubernetes (if you are coming from the k8s world)
We're a super small team and I have to say our experience operating Nomad in production has been extremely pleasant - we're big fans of Hashicorp's engineering and the quality of their official docs.
Our biggest gripe is the lack of a managed Nomad offering comparable to GKE on Google Cloud. However, once things are set up and running it requires minimal to no maintenance.
I also run it locally for all sorts of services with a very similar experience.
As another comment mentioned, it's more of a better, distributed scheduler akin to systemd. The ecosystem of tools around it is the cherry on top (terraform, consul, fabio ...)
[0]: https://monitoro.xyz
We've had two main outages in months:
- Server disks were filling up and we hadn't set up monitoring properly at the time (ironic for the name of our company :) ). Not Nomad's fault.
- A faulty healthcheck caused all the servers of a cluster to restart at the same time, which caused complete loss of the cluster state (so all the jobs were gone. I like to call it a collective amnesia of the servers).
We're still looking for a good/reliable logging and tracing solution though. Nomad has a great dashboard, but only with basic logging, and it only gets you so far.
Overall, would recommend again!
Recently I deployed K3s on the same node to add some new workloads, but now I want to move those workloads to Nomad to get rid of the CPU and memory usage of K8s. I'm running into what is becoming my main problem with Nomad, one I never thought I'd have.
With all its complexity, getting something to run on K8s is as simple as adding a Helm chart name to a Terraform config file and running apply. Maybe I need to set a value or volume, but that's it. Everything below is pretty much standardised.
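Roughly this little, via the Terraform Helm provider (chart and values here are illustrative, not my actual stack):

```hcl
resource "helm_release" "postgresql" {
  name       = "postgresql"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "postgresql"

  # the odd value override, and that's it
  set {
    name  = "primary.persistence.size"
    value = "10Gi"
  }
}
```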
With Nomad however, its benefit of doing only one thing very well also means that all the other things, like ingress and networking, need to be figured out by yourself. And since there is no standard for these, everyone invents their own, preventing something like k8s-at-home [0] from emerging. Also, K8s is pretty agnostic about the container backend, whereas Nomad needs configuration for every driver.
I think writing your own Helm charts for everything would suck more than writing the Nomad configs. Though a lot could be automatically generated for both of them. But I'm missing a community repository of sorts for Nomad.
Author here. Ha, nice coincidence indeed :)
> But I'm missing a community repository of sorts for Nomad.
Indeed. I think it's time to build one!
> - Group: Group is a collection of different tasks. A group is always executed on the same Nomad client node. You'll want to use Groups for use-cases like a logging sidecar, reverse proxies etc.
> - Task: Atomic unit of work. A task in Nomad can be running a container/binary/Java VM etc, defining the mount points, env variables, ports to be exposed etc.
> If you're coming from K8s you can think of Task as a Pod and Group as a Replicaset. There's no equivalent to Job in K8s.
Is that right? I haven't used Nomad (yet?) but, as described, it sounds to me more like Job ~= Deployment; Group ~= Pod; Task ~= Container?
A Task is an individual unit of work: a container, executable, JVM app, etc.
A Task Group/Allocation is a group of Tasks that will be placed together on nodes as a unit.
A Job is a collection of Task Groups and is where you tell Nomad how to schedule the work. A "system" Job is scheduled across all Nodes somewhat like a DaemonSet. A "batch" Job can be submitted as a one-off execution, periodic (run on a schedule) or parameterized (run on demand with variables specified at runtime). A "service" Job is the "normal" scheduler where all Task Groups are treated like ReplicaSets.
Placement is determined by constraints that can be specified on each Task Group.
I could very well be wrong, but this is my understanding.
Job ~= Deployment
Group ~= Replicaset/Pod combined
Task ~= container in pod
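A sketch of how that nesting reads in a job file (names illustrative): the group's tasks -- an app plus a logging sidecar -- always land on the same node, like containers in a pod, while count plays the ReplicaSet role:

```hcl
job "api" {
  datacenters = ["dc1"]

  group "api" {
    count = 2  # two allocations, i.e. two "replicas"

    task "server" {
      driver = "docker"
      config {
        image = "example/api:1.0"
      }
    }

    task "log-shipper" {
      driver = "docker"
      config {
        image = "example/log-shipper:1.0"
      }
    }
  }
}
```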
K8s is enterprise stuff and should be seen as such. It's complex because the problems it attempts to solve are complex and varied. That doesn't discount that Nomad is a great piece of software though.
https://www.hashicorp.com/blog/hashicorp-nomad-meets-the-2-m...
So what's an example of "enterprise" and "complex" problems you think it can't do?
Some are mentioned in the post, but I can think of secret management, load balancing, config management, routing, orchestration beyond workloads (e.g. storage), rollbacks/rollouts, and many more. Perhaps in a few areas there is some support, but it's not what Nomad intends to do anyway. I also like these points from another comment: https://news.ycombinator.com/item?id=26142658
In order to supply those needs in Nomad you'll need to spend time finding out how to integrate different solutions, keep them up-to-date, etc. At that point, K8s may be a better answer. If you don't need any of those, use Nomad or anything else that's simpler (e.g. ECS if you're in AWS, K3s if that's simple enough for your home server, etc).
Well, I do run K8s in prod at my org. And my comparison was based off my experience with it running in prod.
> It's complex because the problems that are being attempted to solve are complex
Most people just want to deploy their workloads in a consistent, repeatable manner. That's why they look to run an orchestrator. K8s is the most popular choice but it's time we look at other tools which can also help us reach that goal with less headache.
Who would you say are in the target audience of Kubernetes?
I doubt most medium to large companies I see implementing Kubernetes could be considered a good fit for Kubernetes. If you want to run on-prem / colo you are probably better off with something simpler like Nomad. If you want Kubernetes, it's probably a better idea to use a hosted Kubernetes solution like Google's offering. For most teams it's probably too much complexity to be able to maintain, troubleshoot, secure, update, etc.
Everything else - I literally said "K8s is enterprise stuff". Now if we get into specifics, it depends on what the company does; maybe they'd do just fine with Nomad or a managed solution like ECS.
>If you want Kubernetes it's probably a better idea to use a hosted Kubernetes solution like Google's offering.
Well, I agree? All of the big companies I've been at used EKS, and before EKS was decent there was some maintenance overhead. It'd still be less than the overhead of recreating with Nomad the full list of features that K8s provides, as Nomad doesn't provide any of those and you'd need to seek solutions outside of the product and try to fit them in.
The same way you'd not buy a car if you're going to drive yourself a quarter of a mile once a week, you'd not use such a complex solution to run a few dozen containers.
Our path has been Ansible -> Ansible+Docker -> Docker Swarm -> k8s. We absolutely don't need k8s, but the other options all had downsides.
1. Nomad was on our list and probably would've been better, but there were no managed Nomad solutions at the time and it was not as widely used as other solutions
2. Our time on Swarm was /ok/, but it was more and more obvious that being on the lesser-walked path was a problem, and its future made us run away from it
3. k8s gave us a nice declarative deployment mechanism
4. We can switch to a managed solution down the road with less friction
This may not be true in the future with distributions like k0s[1]
The comment you’re replying to already said whom.
Indeed.
On the same bus one should be reading posts like "how I switched to a minivan for my family and dropped the complexity of enterprise-grade multi-carriage trains".
I write all of my kubernetes resources in terraform because I don't want to fight with helm charts. I was going to have to write something to monitor my deployments anyway and alert my co-workers that their deploys failed so why not just use terraform that tells you:
- what will change on deploy
- fails when a deploy fails
- times out
I didn't want to tell developers to kubectl apply and then watch their pods to make sure everything deployed ok, when terraform does this out of the box.
Our initial rollout on Kubernetes had me writing about 30 Helm charts for internal services. Once we saw Helm's shortcomings, they were converted to Terraform. It was easy if you:
- helm template > main.yaml
- use k2tf (https://github.com/sl1pm4t/k2tf)
- some manual cleanup for inputs and such
So now all of our product is terraformed, each as a module deployed to a namespace as an entire stack.
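To give a feel for it, one of those converted services ends up looking something like this (names illustrative) -- and since the provider waits for the rollout, a failed deploy fails the terraform run:

```hcl
resource "kubernetes_deployment" "app" {
  metadata {
    name      = "app"
    namespace = "prod"
  }

  spec {
    replicas = 2

    selector {
      match_labels = { app = "app" }
    }

    template {
      metadata {
        labels = { app = "app" }
      }

      spec {
        container {
          name  = "app"
          image = "registry.example.com/app:1.2.3"
        }
      }
    }
  }
}
```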
> Consistent experience of deployment by testing the deployments locally.
> (Not joking) You are tired of running Helm charts or writing large YAML manifests. The config syntax for Nomad jobs is human friendly and easy to grasp.
Seems like most of the UX issues stem from configuring kube easily. I wonder what the author would say about using a metaconfig language like CUE or Jsonnet to make it super easy to define new workloads.
Since then I have been a fan. Also worked a lot with Kubernetes, which has its merits, but the simplicity of Nomad is great.
You should be able to bind to the anchor ip to make the floating ip work. https://www.digitalocean.com/docs/networking/floating-ips/ho...
Thank you for the tip. I'll try it out.
Maybe this has improved these days, I'll have to give it another shot.
> Nomad shines because it follows the UNIX philosophy of "Make each program do one thing well". To put simply, Nomad is just a workload orchestrator. It only is concerned about things like Bin Packing, scheduling decisions.
so is the stuff about saner config syntax
'k8s is bad at everything and nomad is only trying to be bad at one thing' actually makes sense of my reality
I guess this is an ecosystem problem, but in general I find HashiCorp documentation (SRE/Ops/Admin specific) lacking.
I agree that you probably don't need Kubernetes, and perhaps yeah it could be considered complex.
But I think it's the right fit for most developers/doers & over time most operators too. Kubernetes is not Kubernetes. Kubernetes is some base machinery, yes, but it's also a pattern, for writing controllers/operators that take Kubernetes Objects and turn them into things. Take a Postgres object and let postgres-operator turn it into a running, healing, backing-up replicated postgres cluster. Take a SQS object and let ACK turn it into a real SQS. Take a PersistentVolume and with Rook turn it into a Ceph store.
Kubernetes & cloud native in general proposes that you should have working models for the state of your world. In addition to the out-of-the-box machinery you get for running containers (Deployments), exposing them (Services), &c, you get this pattern. You get other folks building operators/controllers that implement this pattern[1]. You get a consistent, powerful, extensible way of building.
Nothing else comes close. There's nothing remotely as interesting in the field right now. The Cult of Easy is loud & bitterly angry about Kubernetes, hates its "complexity", but what is actually complex is having a dozen different operational environments for different tools & systems. What is actually complex is operating systems yourself, rather than having operators to maintain systems. Kubernetes has some initial costs, it can feel daunting, but it is radically simpler in the long run because _it has a paradigm,_ an all-inclusive paradigm that all systems can fit into, and the autonomic behaviors this paradigm supports radically transfer operational complexity from human to computer, across that broad/all-inclusive range of systems.
There's a lot of easier this/harder that. No one tries to pitch Nomad or anything else as better, as deeper, as being more consistent, having a stronger core. Every article you hear on an alternative to Kubernetes is 98% "this was easier". I think those people, largely, miss the long game, the long view. A system that can adapt, that operationally can serve bigger & bigger scopes, ought to pay dividends to you as years go by. Kubernetes may take you longer to get going. But it is time enormously well spent, that will increase your capability & mastery of the world, & bring you together with others building radically great systems whether at home[2][3] or afar. It will be not just a way of running infrastructure, but help you re-think how you develop, and how to expose your own infrastructure & ideas more consistently, more clearly, in the new pattern language of autonomic machines that we have only just begun to build together.
I encourage the bold explorers out there, learn Kubernetes, run Kubernetes. And to those of you pitching other things, please, I want you to talk up your big game better, tell me late-game scenarios, tell me how your system & I are going to grow together, advance each other.
[1] https://kubernetes.io/docs/concepts/architecture/controller/...
But if you do feel ambitious, k8s + flux gitops toolkit + tekton CI/CD + knative serving & eventing + skaffold is one heck of a productive and amazing stack for development (and for bonus points switch your code to a bazel monorepo and rules_k8s for another awesome experience).
Some what-color-do-we-paint-the-bikeshed comments on your particular tools:
* flux seems to be doing great. the ondr0p home-cloud repo i linked is built around it.
* tekton looked very promising when i was evaluating event-driven-architecture systems ~18 months ago, but since then, they've re-branded as a CI/CD tool. it's just branding, it's still generally useful, but i very much worry about drift, & using a product against-the-grain from how the community around it uses it. i think there is a really epically sad story here, that this is a huge mistake for Tekton, which is much more promising/useful than "CI/CD" alone allows. talked about this some two weeks ago[1].
* knative was on my todo list. its resource requirements are fairly daunting. i'm trying to pack a lot of work on to my 3GB k3s VPS and knative seems right out. it's weird to me that the requirements are so high. serving seems a bit on the complex side, but the abstractions are useful, they make sense. eventing is very high in my interests, and i would prefer having an abstraction layer over my provider, give myself freedom, but again the cost seems very high.
* need to try some skaffold. i don't know where it would fit in my world yet; i kind of forget about it sometimes.
* k8s, tekton, knative, skaffold are all somewhat from the googleverse. honestly i'm hoping we see some competing takes for some of these ideas, see different ideas & strategies & implementations. kubernetes is such great material for innovation, for better ways of composing systems. let's try some stuff out! please kindly think of those who don't have a lot of memory too.
Have you looked into nomad, consul and vault; along with everything they provide?
My Nomad familiarity is definitely a bit on the low side, & that's something I wouldn't mind changing. Consul & Vault I used to be able to operate & understood ok, but my knowledge has faded some.
Taking that process home and doing all that, instead of just running nginx from your distro repos on the SBC's OS, is cargo cult insanity for 99% of cases.
Also, while I don't wanna dispute the effectiveness of it, you should evaluate if "just running nginx from your distro repos" is fine under your threat model when you start exposing stuff to the internet or start introducing services that handle personal information.
I dunno about you but I don’t download Ubuntu’s package repo. I have to run an install command and then customize nginx. What’s the real logistical difference to the user if they run this command or that command? Or set config values in this file or that?
Why are you using Linux at home? Unix is for servers!
Nginx “replaced” Apache. Did we expect nothing would replace it?
>Nomad is also a simpler piece to keep in your tech stack. Sometimes it's best to keep things simple when you don't really achieve any benefits from the complexity.
Simple is better. But in this case he doesn't realize that he's stuck way up the complexity stack in a local minimum that's way more complex than most of the potential software landscape. None of that is needed to expose a webserver. Just run nginx on the machine from repos with no containers, no deployment, etc, etc complexity and forward the port.