The idea itself sounds fun though
Great for prototyping, really bad for exposing anything of any value to the internet.
(Not Anti-AI, just pro-sensible)
Also the "If you're Anti-AI please don't use this." is pretty funny :D I guess I must be "Anti-AI" when I think this kind of code is wild to rely on.
Is it because the AI can generate code that looks like it was made by a competent programmer, and is therefore deceiving you?
But whatever the reason, I think that if we use it as a way to shame the people who do tell us then we can be assured that willingness to disclose it going forward will be pretty abysmal.
I must be Doing It Wrong(TM), because my experience has been pretty negative overall. Is there like a FAQ or a HOWTO or hell even a MAKE.MONEY.FAST floating around that might clue me in?
1. Make prototype
2. Magic happens here
3. Make lots of $$$
"Great for prototyping" only means it's easier to get to step 2, but done correctly, it certainly does that.
As proven by the nice app I have running on my laptop, but probably won't make any money from.
Regardless: what benefits would this have over Wireguard?
I'm pro security. The gall to put something out there, pretend that its being vibe coded is not a big deal, and possibly expose hundreds of people to security issues. Jesus.
Edit: should have mentioned I am a bootcamp grad, not just throwing random shade.
I gate access to my homelab using Wireguard.
Wireguard is widely deployed across the world, and has been worked on for years.
No random new repo that was vibe coded can measure up in the slightest to that.
Your network authentication should not be a fun game or series of Rube Goldberg contraptions.
Use-cases:
1. helps auto-ban hosts doing port-scans or using online vulnerability scanners
2. helps reduce further ingress for a few minutes as the hostile sees the site is "down". Generally, try to waste as much of a problem user's time as possible, as it changes the economics of breaking networked systems.
3. the firewall rule-trigger delay means hostiles have a harder time guessing which action triggered an IP ban. If every login attempt costs 3 days, folks would have to be pretty committed to breaking into a simple website.
4. keeps failed login log noise to a minimum, so spotting actual problems is easier
5. Easier to forensically analyze the remote packet stream when doing a packet dump tap, as only the key user traffic is present
6. buys time to patch vulnerable code when zero-day exploits hit other hosts' exposed services
7. most administrative ssh password-less key traffic should be tunneled over SSL web services, and thus attackers have a greater challenge figuring out if dynamic service-switching is even active
People who say it isn't a "security policy" are somewhat correct, but are also naive about the reality of dealing with nuisance web traffic.
Fail2ban is slightly different in that it is for setting up tripwires for failed email logins, known web-vulnerability scanners, etc., then whispering that IP ban period to the firewall (you must override the default config).
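For reference, overriding fail2ban's defaults is a few lines in a `jail.local` (the jail name `sshd` is one of the stock jails; the times here are illustrative, not a recommendation):

```ini
# /etc/fail2ban/jail.local -- overrides the packaged jail.conf
[sshd]
enabled  = true
maxretry = 3
findtime = 10m
bantime  = 3d
```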
Finally, if the IP address for some application login session changes more than 5 times an hour, one should also whisper a ban to the firewalls. These IP ban rules are often automatically shared between groups to reduce forum spam, VoIP attacks, and problem users. Popular cloud-based VPN/proxies/Tor-exit-nodes run out of unique IPs faster than most assume.
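The "more than 5 IP changes an hour" rule above can be sketched as a small sliding-window check (a toy sketch; the threshold and window come from the comment, everything else is made up):

```python
import time
from collections import defaultdict, deque

WINDOW = 3600      # one hour, per the rule above
MAX_CHANGES = 5    # more than 5 IP changes/hour -> ban

# session id -> recent (timestamp, ip) observations
_seen = defaultdict(deque)

def should_ban(session_id, ip, now=None):
    """Record the session's current IP; return True once the session
    has made more than MAX_CHANGES IP changes within WINDOW seconds."""
    now = time.time() if now is None else now
    q = _seen[session_id]
    # drop observations older than the window
    while q and now - q[0][0] > WINDOW:
        q.popleft()
    # count an IP *change*, not every request
    if not q or q[-1][1] != ip:
        q.append((now, ip))
    # the first sighting is not a change, hence len(q) - 1
    return len(q) - 1 > MAX_CHANGES
```

The real version would then feed the offending address to the firewall (and, per the comment, to whatever shared blocklist the group maintains).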
Have a nice day, =3
Also by collecting data on the IP addresses that are triggering fail2ban I can identify networks and/or ASes that disproportionally host malicious traffic and block them at a global level.
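Bucketing banned IPs by prefix is a quick proxy for that analysis (a rough sketch; a real AS lookup needs an IP-to-ASN database, which I'm sidestepping here by just grouping into /24s):

```python
import ipaddress
from collections import Counter

def hot_networks(banned_ips, min_hits=3, prefix=24):
    """Bucket banned IPv4 addresses into /prefix networks and
    return those with at least min_hits offenders, worst first."""
    buckets = Counter(
        ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        for ip in banned_ips
    )
    return [(str(net), n) for net, n in buckets.most_common()
            if n >= min_hits]
```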
Personally I use fwknop for port knocking, since the knock is an encrypted packet and so doesn't suffer from replay attacks. But it still serves the same niche.
Is knocking incredibly weak security through obscurity? Sure, but part of what it does is cut down on log volume.
It's not extra security but it is a little extra efficiency.
Wireguard has something like this built in though, the PresharedKey (which is in addition to the public key crypto, and doesn't reduce your security to the level of a shared-key system). It's still more work to verify that than a port knock however.
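For reference, the preshared key is one extra line per peer in the WireGuard config (illustrative sketch; generate the key with `wg genpsk` and configure the same value on both peers):

```ini
[Peer]
PublicKey = <peer's public key>
# symmetric key layered on top of the Curve25519 handshake
PresharedKey = <output of `wg genpsk`>
AllowedIPs = 10.0.0.2/32
```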
Every complex service running is a door someone can potentially break. Even with the most secure and battle-tested service, you never know where someone fucked up and introduced an exploit or backdoor. It has happened too often not to be a concern. The XZ Utils backdoor, for example, was just last year.
> Your network authentication should not be a fun game or series of Rube Goldberg contraptions.
If there is no harm, who cares...
As a side note I just happen to be reading a book at the moment that contains a fairly detailed walkthrough of the procedure required to access the Russian SVRs headquarters in New York in 1995.
Think of this as an analogue version and in no way a perfect analogy but it does include a step that has more or less the same security properties as this… anyways here’s a relevant quote:
“After an SVR officer passed through various checkpoints in the mission’s lower floors, he would take an elevator or stairs to an eighth-floor lobby that had two steel doors. Neither had any identifying signs.
One was used by the SVR, the other by the GRU. The SVR’s door had a brass plate and knob, but there was no keyhole. To open the door, the head of the screw in the lower right corner of the brass plate had to be touched with a metal object, such as a wedding ring or a coin.
The metal would connect the screw to the brass plate, completing an electrical circuit that would snap open the door's bolt lock and sometimes shock the person holding the coin. The door opened into a small cloakroom. No jackets or suit coats were allowed inside the rezidentura because they could be used to conceal documents and hide miniature cameras.
SVR officers left their coats, cell phones, portable computers, and all other electronic devices in lockers. A camera videotaped everyone who entered the cloakroom. It was added after several officers discovered someone had stolen money from wallets left in jackets. Another solid steel door with a numeric lock that required a four-digit code to open led from the cloakroom into the rezidentura.
A male secretary sat near the door and kept track of who entered, exited, and at what times. A hallway to the left led to the main corridor, which was ninety feet long and had offices along either side. ”
Excerpt from Comrade J by Pete Earley
As another funny side note… I once discovered years ago that the North Koreans had a facility like this that they used to run a bunch of financing intelligence operations using drugs in Singapore where I was at the time and thought it would be funny to go and visit. It was in a business complex rather than a dedicated diplomatic facility from memory. But as I recall it was a similar scenario of unmarked door with no keyhole.
Just skip the plaintext password (the sequence of ports transmitted) and use certificate based auth, as you note below.
Using WireGuard to gate access to a server. It looks like it's a VPN, not an access control mechanism. So I am curious how this works.
This is vibe coded security through obscurity, i.e. quite useless. Use Tailscale or a self hosted VPN.
My opinion is that being able to filter out noise and false positives from authentication logs allows you to improve your actual security measures.
Another advantage is that it may hide information about your system, making it harder for an attacker to target you based on a broad scan without doing some (usually detectable) targeted reconnaissance first. For example, imagine someone found a 0-day in one of the services behind the port-knock and is scanning for the vulnerable version.
It does however add another cog in the machine that may break.
The likelihood of someone on the same network as you noticing your service and trying to hack it before the TTL expires again is IMO quite low.
This is without taking into account that the services themselves have their own security and login processes, getting a port open doesn't mean the service is hacked.
IPv6 of course.
> or is it just not important
Port knocking is not a security feature anyway.
This is what it feels like when people use AI for everything.
AI is not good at telling you the best solution, but it will tell you that you can build it yourself, since that approach is what AI is good at.
Using self hosted vpn, cloudflare zero trust or Tailscale is the easiest way to go.
I self host extensively and have multiple self hosted VPN(OpenVPN and WireGuard) along with Tailscale and cloudflare protecting my infra.
And it will not work on mobile if you already use another VPN.
1- In the 90s, when security was whatever
2- In modern days as a way to keep your logs squeaky clean (although you get 99% of the way there with custom ports)
3- As a cute warm up exercise that you code yourself with what's available in your system. (iptables? a couple of python scripts communicating with each other?)
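In that spirit, the core of a DIY knocker is just a per-IP state machine over a port sequence (a toy sketch; the packet capture and the iptables call are left out, and the sequence is made up):

```python
KNOCK_SEQUENCE = [7000, 8000, 9000]  # made-up example sequence

class KnockTracker:
    """Track how far each source IP has progressed through the
    knock sequence; report when a full sequence is completed."""
    def __init__(self, sequence=KNOCK_SEQUENCE):
        self.sequence = sequence
        self.progress = {}  # ip -> index of next expected port

    def hit(self, ip, port):
        """Feed one observed (ip, port) hit; return True when
        the IP has completed the whole sequence."""
        i = self.progress.get(ip, 0)
        if port == self.sequence[i]:
            i += 1
        else:
            # a wrong port resets, unless it restarts the sequence
            i = 1 if port == self.sequence[0] else 0
        if i == len(self.sequence):
            self.progress[ip] = 0
            return True  # caller would now open the firewall
        self.progress[ip] = i
        return False
```

The real version would watch dropped-packet logs (or sniff with a raw socket) and shell out to iptables/nftables on success.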
It's not a security mechanism, and downloading external dependencies or code (especially if vibecoded) is a net loss (by a huge margin).
It's also a waste of time to overengineer for the reasons noted above, I've seen supposedly encrypted port knocking implementations. It feels as if someone had a security checklist and then a checklist for that checklist.
But it works very well as an additional layer of security. Sec nerds often scoff at "security through obscurity", but it is a very valid strategy. Running sshd on a random high port is not inherently more secure, but it avoids the vast majority of dumb scanners that spam port 22, which is why all my systems do that. Camouflage is underrated, yet wildly effective. You can see how well it works in nature.
In any case, this is not a port knocking solution anyway, as I mentioned in another comment.
To an untrained eye, the wording here could be construed to imply that this is more secure than a VPN. Might be worth a reword to clarify why one might prefer it over a VPN.
I created this because I always have a VPN on my devices, and I can't have tailscale running with that, in addition to tailscale killing my battery life on android.
Though this is not technically a "knocker", but a typical token-based auth gateway. I experimented with something similar recently as well, and think it has its use cases.
But I would agree with some of the comments here. If you need to expose many services to the internet, especially if their protocols are not encrypted, then a tunneling/mesh/overlay network would be a better solution. I was a happy tinc user for several years, and WireGuard now fills that purpose well. As much as people use solutions like Tailscale, ZeroTier, etc., I personally don't trust them, and would prefer to roll my own with WG. It's not that difficult anyway.
There's also Teleport, which is more of an identity-aware proxy, and it worked well last time I tried it, but I wouldn't use it for personal use.
Will go into more detail on why I created it in the blog post coming very soon! Just doing the final touches right now.
Briefly looking at the diagram at the top of the repo, it looks like you "knock" with an API key. Why not just run a reverse proxy in front of (whatever service you're trying to protect) and use the API keys there? To harden further, do some sort of real authentication (PKI, client certs). If you want your logs to look cleaner, install and actually configure fail2ban.
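A minimal version of that reverse-proxy idea in nginx might look like this (a sketch; the header name, key, and upstream are placeholders, and a static key in a config file is still weak compared to client certs):

```nginx
location / {
    # reject anything without the expected (placeholder) API key
    if ($http_x_api_key != "change-me") {
        return 403;
    }
    proxy_pass http://127.0.0.1:8080;  # the protected service
}
```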
Because it breaks the clients of most homelab services.
That's what authelia does.
Apologies in advance if I'm missing something obvious here, but are you saying an IP allow list is not a standard security practice? If so I'd appreciate further explanation.
With xz backdoor owning ssh, I wouldn’t completely trust ssh public key authentication either.
...now I'll have to make this myself.
TIL that that has a name.[1] All I ever knew it as was "the knock from Roger Rabbit".
Tailscale is just an added unnecessary external dependency layer (& security attack surface) on top of vanilla Wireguard. And in 2025 it's easier to run vanilla Wireguard than it's ever been.
Not only do you need to manually manage the keys for each device and make sure they're present in every other device's configuration, but plain Wireguard also cannot punch through NATs and firewalls without any open ports like Tailscale can, as far as I know.
Combine that with the fact that networking issues can be some of the hardest to diagnose and fix, and something like Tailscale becomes a no-brainer. If you prefer using plain Wireguard instead, that's fine, and I still use it too for some more specific use cases, but trying to argue that Tailscale is entirely unnecessary is just wrong.
I could be wrong, but I think Tailscale just does what you can do on Wireguard, which is `PersistentKeepAlive`. It lets a wireguard client periodically ping another to keep the NAT mapping open.
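For reference, that's one line in the peer section of the client's config (note the option is spelled `PersistentKeepalive` in wg-quick configs; the values here are illustrative):

```ini
[Peer]
PublicKey = <server's public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
# send a keepalive every 25s so the NAT mapping stays open
PersistentKeepalive = 25
```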
Tailscale is great if it meets your requirements, & it probably does for most - I wasn't arguing that at all. Only that it won't be an option for everyone: in particular a non-tiny subset of home server hosters.
Everybody's got their own set of beliefs and understandings, and they get to decide how they want their homelab to work.
For me, tailscale fits in just right. Others can come to their own conclusion based on how they feel about networking and points of failure and dependency and all that.
The selling point of Tailscale is that they simplify Wireguard UX by adding a proprietary control server - this adds complexity to the stack (extra component) but simplifies user experience (Tailscale run the control server for you).
Headscale seems like it's complicating the stack (adding an extra component) as well as complicating the user experience (you have to maintain two components yourself now instead of just the one Wireguard instance).
Granted I presume the Headscale control server might simplify management of your Wireguard instance but... you're still maintaining the control server yourself.
I know there's plenty of HA integrations that require some cloud service but the core application is very offline-friendly...