$ getent services gopher
gopher 70/tcp

Generally host and port mapping gets shoved somewhere into the configuration management layer and hopefully does not become too complicated (or grow too many security holes), as this can vary from "configuration files and a few scripts" to database and service layers that few can debug, especially not a sysadmin at 3 AM running on an hour of bad sleep. Hypothetically.
also `curl -v http://127.0.0.1:gopher/` gives an error message:
* URL rejected: Port number was not a decimal number between 0 and 65535
* Closing connection
curl: (3) URL rejected: Port number was not a decimal number between 0 and 65535
So the ports are named, which is nice, but in practice it does not make life easier. "http://127.0.0.1:70/" is unambiguous, since 70 is not a valid host.
nc is for generic connections and handles it well.
You can still publish port numbers along with addresses in DNS though (SRV records).
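An SRV record bundles priority, weight, port, and target host into one answer; a hypothetical zone-file entry (all names made up) might look like:

```
_myblog._tcp.example.com. 3600 IN SRV 10 5 3000 devbox.example.com.
; fields after "SRV": priority (10), weight (5), port (3000), target host
```

A client can then discover both the host and the port with `dig SRV _myblog._tcp.example.com`.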
It's never.
After decentralisation we always see centralisation. After a period of growth, a decline will follow. After the vibe coding hype, consolidation will follow. After rain comes sunshine.
“[X] is so powerful that problems which are technical issues in other programming languages are social issues in [X].”
— <https://www.winestockwebdesign.com/Essays/Lisp_Curse.html#ma...>
The article is short; go read it then come back and delete.
http:// means port 80 unless specified otherwise
https:// means port 443 unless specified otherwise
ftp:// means port 21 unless specified otherwise
sftp:// means port 22 unless specified otherwise
...
The practical solution for TFA is actually just an nginx server running on port 80 with proxy_pass:

    location /blog/ {
        proxy_pass http://127.0.0.1:3000;
    }
    location /tensorboard/ {
        proxy_pass http://127.0.0.1:6006;
    }

...or, with name-based virtual hosts:

    server {
        listen 80;
        server_name "tensorboard.localhost";
        location / {
            proxy_pass http://127.0.0.1:6006;
        }
    }
    server {
        listen 80;
        server_name "blog.localhost";
        location / {
            proxy_pass http://127.0.0.1:3000;
        }
    }
With HTTP 1.1 and later, the browser supplies the domain name that was used to access the site, and even though *.localhost all resolve to 127.0.0.1, nginx will pluck out the correct configuration and proxy_pass to the correct backend.

https://meta.wikimedia.org/wiki/Cunningham%27s_Law
Sidenote: A good AI would interject, Clippy-like, "It looks like you're trying to recreate /etc/services. Would you like me to explain what that is?"
People shit-talk container orchestration systems like Kubernetes, but if anything they greatly simplified (if not completely eliminated) the need for this sort of network bookkeeping.
But go ahead. /etc/services, please, share with me how it's set up to do things like create the HTTPS cert and make it trusted and set up the domain. Go ahead.
Go ahead. You can ONLY use /etc/services.
Or admit you don't actually have a clue as to what /etc/services does.
And that none of us can be bothered to "Google" whether a thing that does the same thing already exists. Currently vibing, on a train, a spaced repetition thing for my kid, because I needed a specific list of countries, and it's faster to create the whole app than to figure out how to find one that would do this.
E.g. "telnet localhost ssh" takes you to port 22 (not the default 23 for telnet). This works because /etc/services maps "ssh" to "22".
If you're sick of remembering port numbers, create some entries in your /etc/services.
Of course, only programs which use getservbyname to resolve port numbers will accept your names.
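For example, with a couple of hypothetical additions (myblog and tensorboard are made-up names), plus lookups against the stock entries:

```shell
# Hypothetical additions to /etc/services (requires root to edit):
#   myblog        3000/tcp
#   tensorboard   6006/tcp
# Anything resolving through getservbyname() then accepts those names.
# Lookups against stock entries:
getent services ssh       # ssh 22/tcp
getent services 80/tcp    # reverse lookup by port: http 80/tcp
getent services 514/tcp   # typically "shell", while 514/udp is "syslog":
getent services 514/udp   # separate namespaces per protocol
```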
I don't know why people keep insisting on that file while there are perfectly fine commands to pull from your boxes what is holding what port.
That is all beside the point, though, if you look at what you should be doing: keeping all this information in some kind of asset management system from which you can deploy things (which is kinda what k8s and docker etc. try to do (miserably)).

Unless you are binding stuff to random ports on random boxes, there is no need to do any of it at runtime; you can just consult your bookkeeping (for which /etc/services lacks a lot of the details you'd need...)
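E.g., the runtime view those commands give (a sketch; `ss` ships with iproute2, and /proc is there on any Linux box):

```shell
# Listening TCP sockets, numeric, with the owning process where permitted:
ss -ltnp
# The raw kernel table behind it (present even without iproute2 installed):
head -3 /proc/net/tcp
```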
Example from the website:
- "dev": "next dev" # http://localhost:3000
+ "dev": "portless myapp next dev" # https://myapp.localhost
That would work if your goal was to route traffic to localhost.
What if it isn't?
There are reasons why the likes of example.com exists.
> So I built local.vibe — a friendly dashboard and local .vibe hostname for every local web app on your Mac. No more localhost:3000 vs localhost:5173 roulette.
> The whole thing communicates over a Unix socket acting as a reverse proxy. No external services, no accounts, no telemetry.
We’re discussing a tool that is designed for – and is only capable of – routing traffic to localhost. It’s perfectly reasonable to point out that there’s an easier solution for this use case.
example.com, and the reserved TLD ".example", exist for technical documentation and writing. If you are writing a comment on HN, or a curriculum for a networking class, then you can discuss "foo.example.com connects to bar.example.com" or "Let's hypothesize about two offices called accounts.example and human-resources.example"
The "example" domains are never supposed to reflect anything that is actually deployed onto LANs, or test labs, or the Internet, current situation notwithstanding.
https://en.wikipedia.org/wiki/.example
There are, likewise, IPv4 and IPv6 ranges that are reserved to be used in documentation. Not 192.168.0.0/16 or 10.0.0.0/8, but separate ranges that writers only write about, and that are never deployed, not even in private: 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24 for IPv4 (RFC 5737), and 2001:db8::/32 for IPv6 (RFC 3849).
localhost is only ever going to be the loopback interface, never across a network: https://en.wikipedia.org/wiki/.localhost#Conventional_use
See also: https://en.wikipedia.org/wiki/.test
The latter article lists foreign-language TLDs which serve the same purpose.
Some proposals are described here: https://en.wikipedia.org/wiki/.home
I've also come across projects using a public DNS record that points to 127.0.0.1 (something like localtest.me?). IMO that's way worse than using .localhost since you're trusting some rando not to change the DNS records and exfiltrate your meant-to-be-local traffic.
Want to run another webserver instance or whatever on your computer? Get the OS to allocate a new IP for it. Ports be damned.
Could be implemented in a backwards compatible way by requiring all IPv6 TCP/UDP traffic to use a fixed port number.
Yes, that's why I said I know it was mixing of layers.
However ports are a layer violation in a strict sense, introduced as a workaround because there was no easy way to just add thousands of new IPs to a single host back in the IPv4 days. No need to continue a workaround that causes grief on a daily basis.
Something involving socat, an any-IP TCP routing rule, a VPS or other machine with an IPv6 /64, and plenty of duct tape.

You'd get an application sitting on port 80 accessible via some unique IPv6 address (in the /64) on TCP port 80. They needn't be the same port number, but it would make it easier.
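A rough sketch of that duct tape, using the Linux AnyIP trick (addresses from the 2001:db8:: documentation range are hypothetical; needs root, untested here):

```shell
# Tell the kernel to accept connections to ANY address in the /64 (AnyIP):
ip -6 route add local 2001:db8:1234::/64 dev lo
# Forward one per-app address, port 80, to the app's localhost port:
socat TCP6-LISTEN:80,bind=[2001:db8:1234::3000],fork,reuseaddr TCP4:127.0.0.1:3000 &
```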
[0]: https://idiallo.com/blog/say-no-to-localhost3000-use-custom-...
Super simple. (although I use rewrites at my dns layer for the whole local lan, but whatever)
It also solves issues my password manager has with multiple services on the one host but with different ports, by putting each on its own second-level domain.
I find out what all my local servers are by `cat /etc/hosts`, because I put them in there. They run using an entry in the nginx config.
For short-lived stuff I don't even bother with that, I just use `whatever.localhost`.
If there were no LLM, the author would have put a little more thought into this, maybe done a Google search, and realised that all he needed was two shell scripts.

The more you use LLMs, the less you actually think.
> The real annoyance is that it wasn’t just one machine. It was layers.
> I wanted a simple launcher for all the things that aren’t traditional desktop apps. Not Finder, Alfred or Raycast.
The entire damn article is like this - why would I trust software to run on my local machine when it was written by someone who did not even take care writing a blog post? How much care would they have possibly put into reviewing their vibe coded slop if they couldn't even bother to review their blog post?
That seems to run orthogonal to this. The primary benefit I see here is not having to care at all what ports apps are actually starting on. Just run them, and access them by name. Same as a regular website on the internet where one doesn't care about the IP.
> How much care would they have possibly put into reviewing
Just enough to ensure that it works for them, which is what really matters. Others go in knowing that as well, and add/change that base to their own preference. That's the world we're now in.
If that's what really mattered they wouldn't have posted an article they didn't write trying to get traction on a product they didn't create from a userbase that doesn't need it.
Doesn't matter that they didn't actually write the code; they put effort into refining an idea for a problem into a solution that fit their needs, and in sharing it they've given those who never thought of it a base they can work from if they want, or just go make something similar from scratch.
From a brief glance over the code I like the approaches I see. Using the `/etc/resolver/` mechanism is a new trick to me!
The interesting part to me isn't the port numbers, it's the automatic service start/stop, including idle route shutdown.
Not too long ago I had a similar issue and solved it with that.
* Given that you can easily start up your own CA in a test bed, just use different domain names.
* Or use IP addresses directly; given that IPv6 is pretty abundant, it's easy to just listen on many addresses at the same time. A nice trick is to just put the port number in the last octets: fd01::9000, fd01::0003:5565. If it's HTTPS you always use port 443; if it's another protocol, use another port. With iptables/nft you can translate all port 443 traffic towards a /96 to a single IP.
* Firefox does not seem to understand unix domain sockets, https://news.ycombinator.com/item?id=27941552. I'm assuming that you have a gateway in front that handles that aspect.
* Proxies in Firefox seem to understand that, though, which means you can have a proxy that translates to unix sockets locally. That means you can basically run it to a namespaced application, using only http://<service>.localhost.
Granted, no fancy UI to start and stop things, but is it really needed?
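The port-443-to-a-/96 translation mentioned above might look roughly like this with nft (a sketch, untested; documentation-range addresses, names like "portmap" are made up):

```shell
nft add table ip6 portmap
nft 'add chain ip6 portmap prerouting { type nat hook prerouting priority dstnat; }'
# Any HTTPS connection to an address in the /96 lands on one backend address:
nft add rule ip6 portmap prerouting ip6 daddr 2001:db8::/96 tcp dport 443 dnat to 2001:db8::1
```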
Tbh this is not a single binary; you need dnsmasq, Go, and other things.
I've been wanting something like this for local dev, but I think more:
Per user DNS.
So if the process doing the lookup is my own then redirect to the named service.
> The real annoyance is that it wasn’t just one machine. It was layers.
If you type "8483" on T9, your phone may offer "THUD" or "TITE" or "VITE", or all three, as choices.
But with a normal telephone keypad, if you dial, e.g. "(800) 555-VITE" then you will always dial "8483".
https://en.wikipedia.org/wiki/Phoneword
Also, a service port is always qualified by its protocol. There are separate port namespaces for each IP protocol that uses ports. "8483" is not a service port, until you spell it out:
8483/tcp
or 8483/udp
or 8483/sctp
or 8483/dccp
etc.

A TCP stream, for example, consists of a tuple:

    src:port1 dst:port2

173 looks like ITE
5 in roman numerals is V.
I am also sick of handling port numbers. I end up allocating them on a schema to different services, so for testing I can spool up any VM/service combination and avoid crossover. But if I want the same service twice, ah...
It always fascinated me that ports don't have any kind of textual resolver, so you can bind to `:1234` and also say "please also accept `:foobar`". But that would itself require some kind of "port resolver" on a device, and that's another service to break and fix :)
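Half of that port resolver already exists: getservbyname/getservbyport consult /etc/services today; it's only bind-by-name that nobody wired up. On a stock Linux box:

```shell
# Both directions of the lookup, via the libc resolver that /etc/services feeds:
python3 -c 'import socket; print(socket.getservbyname("ssh", "tcp"))'   # 22
python3 -c 'import socket; print(socket.getservbyport(80, "tcp"))'      # http
```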
https://www.reddit.com/r/msp/comments/1pxe1zc/tplink_ban_pro...
https://www.pcmag.com/news/facing-router-ban-tp-link-tells-f...
https://www.nytimes.com/wirecutter/reviews/foreign-made-wi-f...
Just search for "TP Link ban", you will see a lot of news. I switched to SonicWall + Ubiquiti + my own monitoring software to be safe. I should've done it years ago, but I was lazy.