Essentially you should always use a domain you control both outside and inside, like a regular gTLD or ccTLD.
Pretty much every single company I've worked for with AD has broken this rule.
Yep. We use two domains - everything on "A" is public and everything on "B" is internal. The root of "B" is a static page on AWS that is gated behind our VPN, so it serves as a quick "can you hit our infra at all" check when troubleshooting, and catches the people who are convinced they've enabled their VPN.
One thing to note here that we haven't solved (We've only got two "infra" people, we're a small company) is how you handle an internal portal to an external service. If our public domain is maccard.com and private is maccard.dev, where does internal-admin.maccard.com _actually_ live? Our solve for this is we have an internal.maccard.com for this very specific use case, but I'd much rather it was admin.live.maccard.dev
Is it really going to assign "lancelot.roundtable.local" to my washing machine on a whim, which leaves the microwave unresolvable?
Can't I instruct the mDNS server running on my machine to respond to a particular name ending in .local?
Can't e.g. dnsmasq insert itself into a conversation on 224.0.0.251 saying "Let me answer this question" for certain queries?
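(Not dnsmasq, at least not on the multicast side: it only speaks ordinary unicast DNS, so it never sees traffic on 224.0.0.251. But for clients that resolve the name through their configured unicast resolver anyway, you can make dnsmasq answer for arbitrary names. A sketch; the names and addresses here are invented:)

```
# /etc/dnsmasq.conf sketch -- names and addresses are made up.
# Answer a specific name directly:
address=/washingmachine.local/192.168.1.50
# Or delegate an entire suffix to another resolver:
server=/roundtable.local/192.168.1.1
```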
.test
.example
.invalid
.localhost
RFC 2606 (https://datatracker.ietf.org/doc/html/rfc2606)

Example is best used in documentation only.
Invalid is weird and confusing.
Localhost as a TLD should still be on the machine itself. Keep in mind that 127.0.0.1 is not the only loopback address at your disposal – you have the whole 127.0.0.0/8. You could bind different services to different loopback IP addresses and then assign host names in the .localhost TLD to those.
So for example you could run different Wordpress instances locally on your machine on IPs 127.87.80.2, 127.87.80.3, and 127.87.80.4, so that each can run on port 80 without colliding with the others, and without resorting to non-80/non-443 ports.
Then have corresponding entries in /etc/hosts:
127.87.80.2 dogblog.localhost
127.87.80.3 cooking.localhost
127.87.80.4 travel.localhost
And use those domains to access each of them from your browser. Then you don't even need to keep all services behind the same Nginx instance, for example, as you otherwise would if you had different domain names but were using 127.0.0.1 and port 80 for all of them.
Whereas having the localhost tld refer to hosts elsewhere on a physical network.. that’s about equally as weird and confusing as “invalid”.
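The same-port, different-address trick above is easy to verify. A minimal sketch in Python (using port 8080 instead of 80 so it runs unprivileged; on Linux the whole 127.0.0.0/8 answers out of the box, while macOS needs each extra loopback address aliased first):

```python
import socket

# Two listeners on the SAME port but different loopback addresses.
# A TCP listener is identified by (address, port), so these do not collide.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.87.80.2", 8080))  # would serve dogblog.localhost
b.bind(("127.87.80.3", 8080))  # would serve cooking.localhost
a.listen()
b.listen()
print(a.getsockname(), b.getsockname())
a.close()
b.close()
```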
What I've seen lately is '.int' for internal usage. While this is a valid TLD, it is only for international organizations, and it is not possible for "normal" people to reserve a domain with that TLD, so unless your company is called "WHO" or similar, you shouldn't have any problems...
You should not use them in production.
Naturally, this went terribly as soon as any of their sites had their own connection to the internet.
I had another, much smaller client whose leadership insisted that because they were using it, the owners would just have to not use it. Some sort of imagined IP squatters' rights, I guess. I doubt the DoD accommodated them.
.local actually exists. It is used for mDNS.
I use this for all my internal domains, haha. I guess this is why we need a proper tld like this.
It was hard to fix because they couldn’t get the spoofed domain, and there were so many copies of bad links everywhere.
You NEED to use https when visiting any .dev domain. Google has put it on the HSTS preload list.
It took me a while to find out why my browser kept redirecting me to https when I wanted to use http (local development). Curl worked fine…
The number of cases where this is actually a legitimate concern, IMO, is extremely small, and I'm personally of the opinion that using Internet-public domains for internal purposes is generally fine. But it's still important to point out that the number of cases is not zero.
But you can still have completely separate DNS for the inside. Using a shared DB for both would probably be recommended to avoid conflicts.
Zone content enumeration in DNSSEC was fixed by NSEC3 records (RFC 5155, March 2008).
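(For context: NSEC3 replaces NSEC's plain next-name pointers with salted, iterated hashes of the zone's names, so walking the chain only yields hashes. A sketch of the RFC 5155 hash computation; the name, salt, and iteration count below are the RFC's own Appendix A example values:)

```python
import base64
import hashlib

def nsec3_hash(name: str, salt: bytes, iterations: int) -> str:
    """RFC 5155 NSEC3 hash: SHA-1 over the wire-format owner name with the
    salt appended, re-hashed `iterations` more times, base32hex-encoded."""
    # DNS wire format: length-prefixed lowercase labels, terminated by 0x00.
    wire = b"".join(
        bytes([len(label)]) + label.lower().encode()
        for label in name.rstrip(".").split(".") if label
    ) + b"\x00"
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    # Translate RFC 4648 base32 into the "extended hex" alphabet DNS uses.
    b32 = base64.b32encode(digest).decode()
    return b32.translate(str.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
                                       "0123456789ABCDEFGHIJKLMNOPQRSTUV")).lower()

print(nsec3_hash("example", bytes.fromhex("AABBCCDD"), 12))
```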
It's a lot better now. Ever since companies started moving from on-prem Exchange to O365 in droves I've noticed that most orgs I work with (painfully) updated their domain so their user principals align w/ their O365 mailbox.
There's only one customer I have that still uses a ".local" domain for AD, and they got bought out last year. (By an org that uses a real FQDN.)
>Register a TLD which is informal standard for development websites and environments (.dev)
>Charge an excessive amount of money for it
>To make sure you ruin everyone's day, put it on a HSTS preload list
>Refuse to elaborate
But I suppose that I still ran the risk that the subdomain could "become real", or draw attention from security admins, or I would change ISPs.
How? I’m not saying this is great practice — there are certainly better options — but no one outside your network will ever know about it. It also won’t matter if you switch ISPs.
[edit] Oh, Active Directory
"Whereas, on 30 July 2014, the ICANN Board New gTLD Program Committee adopted the Name Collision Management Framework. In the Framework, .CORP, .HOME, and .MAIL were noted as high-risk strings whose delegation should be deferred indefinitely".[1]
[1] https://www.icann.org/en/board-activities-and-meetings/mater...

It seems that ICANN did consider this choice among others, but rejected it for lack of meaningfulness:
> The qualitative assessment on the latter properties was performed in the six United Nations languages (Arabic, Chinese, English, French, Russian and Spanish). [...] Many candidate strings were deemed unsuitable due to their lack of meaningfulness. [...] In this evaluation, only two candidates emerged as broadly meeting the assessment criteria across the assessed languages. These were “INTERNAL” and “PRIVATE”. Some weaknesses were identified for both of these candidates. [...]
I wonder if this means that they merely scored highest among the others and that all candidate strings were indeed unsuitable, but that ICANN had to pick one anyway. I'm not even sure that laypersons can relate `.internal` to the stuff meant for "internal" use.
Isn't it the default domain name for OpenWRT? Just like the default ip addresses are 192.168.*
https://www.w3.org/community/httpslocal/ https://httpslocal.github.io/proposals/ https://httpslocal.github.io/usecases/
I wish some of this work would continue as well.
Yup, same here. Great in combination with ACME DNS-01 so your DNS server can request all those certificates and then push them out to your devices. (Otherwise the hostnames need to be externally accessible, which means either exposing the internal devices, or mucking around with split-view DNS. The former is a terrible idea, the latter is also DNS server complexity and worse than doing DNS-01 IMHO.)
I do appreciate the threat model where one device getting owned leaks all your certs, but security is always a trade-off with convenience. It also lowers the load on the LE servers, for what that's worth.
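(For anyone wondering what the DNS server actually publishes in that flow: DNS-01 is just a TXT record at _acme-challenge.<name>, whose value is derived from the ACME challenge token and your account key's thumbprint, per RFC 8555 section 8.4. A sketch; the token and thumbprint below are made-up placeholders, not real ACME values:)

```python
import base64
import hashlib

def dns01_txt_value(token: str, key_thumbprint: str) -> str:
    """TXT value for _acme-challenge.<name>: unpadded base64url of
    SHA-256("<token>.<account key thumbprint>"), per RFC 8555 s8.4."""
    key_authorization = f"{token}.{key_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Placeholder inputs for illustration only:
print(dns01_txt_value("some-challenge-token", "some-account-key-thumbprint"))
```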
Will this be possible with .internal ?
Just 5 letters is less annoying to type repeatedly than .internal, while still conveying the overall purpose relatively well.
It might just be my laziness talking though.
.lan is so "back to office"
Also, since intra is typically used as a prefix, it seems strange to use it bare. People familiar with intranets will probably make (or assume) the connection, but others might find it unclear while "internal" would likely not be.
Even so, I do prefer .intra for selfish reasons.
... possibly a better link.
If you need DNS, register and use a real domain name. Everything else is going to be a hack. Anyone tech-savvy enough to know what an internal, unroutable TLD is, and have a use for one, is going to be just as comfortable and capable of managing a real domain.
I support the idea of something like .internal, but I'm certain it will be made useless for its intended purpose in short order.
Domain names are not slaved to DNS, they predate DNS by over a decade, and there is room for more than one naming system.
If DNS has to surrender one of their precious, precious, TLDs (lord knows there are soooo few to choose from) which, again, existed BEFORE DNS was even a twinkle in Paul Mockapetris' eye, so be it.
Messing up your .internal configuration won’t result in leaking queries to the public.
And maybe secondarily this may encourage tool development for supporting internal names and make it easier for setting up informal or per department configurations.
A quick google did not deliver a decent reserved domain, but multiple people suggested .home
If .localnet ever becomes a real TLD, well, I'm pretty sure the entire global infra is going to collapse and not necessarily be my problem.
Edit: And to be clear, I'm doing this for my house, not some enterprise setup; using real actual FQDN for internal services at a company, especially one that is multi-site/cloud, is still the best advice.
Here are some more details: https://support.microsoft.com/en-us/help/300684/deployment-a... and https://admx.help/?Category=Windows_10_2016&Policy=Microsoft... which does the DNS resolution even for these less than ideal domains.
I think the right solution is that we should require domain registration (google.internal, microsoft.internal, etc.) to avoid these conflicts. A public CA may be able to verify ownership, avoiding the need for private CAs.
I built a service [1] that does this and is compatible with Let's Encrypt. The trick is that I only allow users to set ACME-DNS01 TXT records, not A/AAAA/CNAME records. So you'll still need to run internal DNS for those.
What's different is that the public suffixes I operate cannot publicly host content, which should protect the service from the abuse concerns that plagued Freenom and other free public suffixes. That reduces cost and should keep the site running.
I recommend buying your own domain if you don't mind the cost. A free solution for domains with TLS on internal networks is valuable to many.
https://en.wikipedia.org/wiki/Top-level_domain#Reserved_doma...
https://www.iana.org/assignments/special-use-domain-names/sp...
Would the only difference then be the name ".internal" or is there another difference/advantage versus ".home.arpa"?
Maybe they can reserve both .interNAL and its convenient abbreviation, which just happens to be written backwards because it's DNS.
Seriously, .lan is just a convention. For all I care they could come up with any three-letter thingy, as long as there's a mutual understanding that no global DNS will ever resolve it.
Best to avoid it for that purpose:
1. "Using '.local' as a private top-level domain conflicts with Multicast DNS and may cause problems for users." https://www.rfc-editor.org/rfc/rfc6762#appendix-G
2. ".local has since been designated for use in link-local networking, in applications of multicast DNS (mDNS) and zero-configuration networking (zeroconf) so that DNS service may be established without local installations of conventional DNS infrastructure on local area networks." https://en.wikipedia.org/wiki/.local
3. "PSA: Don't use domain.local" https://old.reddit.com/r/sysadmin/comments/a9sfks/psa_dont_u...
4. "Why using .local as your domain name extension is a BAD idea!" https://community.veeam.com/blogs-and-podcasts-57/why-using-...
For a simple home network setup, as long as naming conflicts can be managed, it looks like mDNS is quite handy.
On a side note, I find .local to be best suited for the purpose, since from the language perspective it's easier on international users than .localhost
The newly proposed .internal comes close, but .local still looks more semantically flexible or maybe this is a cognitive bias of mine.
Microsoft Buys Corp.com So Bad Guys Can’t https://krebsonsecurity.com/2020/04/microsoft-buys-corp-com-...
The only option to somewhat-securely run TLS would be to have the company run their own internal CA, and trust its root certificate on all internal clients.
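A minimal sketch of that setup with openssl (the filenames and the "wiki.internal" CN are illustrative; a real deployment would also add subjectAltName entries, since browsers ignore the CN these days, and would keep ca.key offline):

```shell
# 1. Create a self-signed private root CA (valid ~10 years).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 3650 -subj "/CN=Example Internal CA"

# 2. Generate a key and CSR for an internal host.
openssl req -newkey rsa:2048 -nodes \
  -keyout host.key -out host.csr -subj "/CN=wiki.internal"

# 3. Sign the host certificate with the internal CA.
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out host.crt -days 825

# 4. Any client that trusts ca.crt will now accept host.crt.
openssl verify -CAfile ca.crt host.crt
```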
(to be fair, you generally can't get an .int domain registered. "int is considered to have the strictest application policies of all TLDs, as it implies that the holder is a subject of international law.")
… now that I think about it, "foo.in/ternal" makes so much more sense …
There's an existing TLD that is a string prefix of the new TLD.
Apart from lookalike attacks I'm also wondering if this will do weird things while you type in an address, e.g. if you try to type "foo.internal" and pause for a second after "foo.int"… your application may run off and do lookups or even prefetches.
What you are losing is the modicum of confidence that a website is the real deal because another party (that browser and OS manufacturers trust) took the time to, at the very least, verify ownership of the domain.
“ICANN has picked the TLD string that it will recommend for safe use behind corporate firewalls on the basis that it will never, ever be delegated.
The string is .internal, and the choice is now open for public comment”
Saved you a click :)
Alternatively I use a map file loaded into the memory of a loopback-bound forward proxy. No DNS.
I also use loopback-bound authoritative DNS to a limited extent as it provides wildcards.
There are ways to avoid using DNS.
Most web developers do not understand DNS, or at least dislike it, and some get annoyed by the HOSTS file. Quite funny. But I'm not a developer. DNS is something I understand well enough, I like it, and, in addition, the HOSTS file is useful for me. But sometimes it's most useful for me to avoid DNS.
pdns_recursor can serve /etc/hosts
https://docs.powerdns.com/recursor/settings.html
unbound can serve "local data"
https://unbound.docs.nlnetlabs.nl/en/latest/manpages/unbound...
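For unbound that looks roughly like this in unbound.conf (the zone and addresses are examples):

```
server:
  # Serve a private zone authoritatively; "static" means answer from the
  # local data below, NXDOMAIN for anything else in the zone, and never
  # forward these queries upstream.
  local-zone: "internal." static
  local-data: "nas.internal. IN A 10.0.0.5"
  local-data: "printer.internal. IN A 10.0.0.6"
```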
dnsd from busybox can serve a HOSTS file:

  (exec 2>/dev/null; while read a b c; do echo "$b $a"; done < /etc/hosts | busybox dnsd -c /dev/stdin -p1153 -i 10.21.66.4)

tinydns would work too:

  echo . > data
  while read a b c; do echo =$b:$a:1; done < /etc/hosts >> data
  tinydns-data
  ROOT=. IP=10.21.66.4 GID=0 UID=0 tinydns
Or just transfer the file instead; rsync/mrsync could be used to keep computer A and computer B's HOSTS files the same.

Or use ssh to transfer the HOSTS file from A to B:

  ssh -T computerB "cat > /etc/hosts" < /etc/hosts

Or use ssh to query computer A's HOSTS file from computer B:

  echo getent hosts name | ssh -T computerA

Or some small httpd, e.g., darkhttpd:

  darkhttpd /etc --port 1153 --addr 10.21.66.4
  tnftp -4o'|grep name' http://10.21.66.4:1153/hosts
It goes on and on.

But I'm not interested in using the HOSTS file in this way on the local network. I'm more interested in IP addresses than "domain names". I am not a fan of web browsers; I make HTTP requests from the shell command prompt and from shell scripts. For example, I like to create shortcuts for certain IP addresses so I do not have to type them, e.g., when using netcat. For me, the HOSTS file works perfectly for that purpose. I use this functionality every day.
Not every computer on my local network has the same ability to lookup names and IP addresses. Most have zero access to DNS data. No lookups. Some may only be able to lookup a few remote addresses. I might put those in the computer's HOSTS file.
Domain names are overrated. Web marketing hype. For example, no one uses a domain name to log into their router. But no one at home is getting internet access without typing an IP address at least once to set up a router. If I want to type a short, memorable name instead of an IP number to reach a computer on the local network I can make an entry in /etc/hosts. Using computers that have no /etc/hosts and no control over DNS sucks. Let web developers use those computers.
How many times have I seen developers copy entire portions of RFC 1035 into their code as a "comment". Too many to count. They will always struggle to understand it.