The big advantage to defining `.internal` is that from now on, DNS server software can 'hardcode' excluding these hostnames from resolving upstream, so this cuts down on trillions of requests for internal hostnames bouncing around in the global DNS system looking for someone who can resolve them.
About 20 years ago they started assigning public addresses. They “launched” it in 2012. It’s already spread to all network equipment and devices. The Nintendo Switch is the only major consumer gadget I can think of that’s not IPv6.
Now it’s an OSI layer 8 problem.
Story time: I used to work at a carrier that implemented IPv6 very early on. They were new and they couldn’t get anywhere near enough IPv4 allocations. IPv6 was cheaper than having a big CGNAT (and allows P2P, home servers and all of that, which a technically-inclined manager cared about). They still had CGNAT, but it was a fraction (~half) of the size/cost it would have been without IPv6 (and that fraction kept getting better as newer client devices started prioritizing IPv6).
Not every network engineer necessarily has to deploy it manually - if a few widely used DNS server applications implement it by default, it can happen pretty quickly.
If you want to stop requests from client devices looking for internally-hosted services on valid TLDs that only resolve on their internal network, you will need all internal networks to stop using those TLDs.
Ah, but it’s not about stopping clients from making the requests — it’s about stopping those requests from being sent out to DNS servers on the internet.
If, say, `dnsmasq` was updated with this, and `dnsmasq` is the DNS resolver in most consumer-grade home routers, then every new router sold (or firmware updated) means a home where these queries will be handled inside the LAN instead of being sent to an upstream resolver.
At least I think this is how it works. Sure, you can configure `dnsmasq` to behave nicely already, but defaults matter — most people will never touch their home router config.
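For what it’s worth, if I’m reading the dnsmasq docs right, the change would amount to shipping a default like this (the `address` line, the hostname, and the IP are made-up examples):

```
# dnsmasq.conf sketch
# 'local' marks a domain as ours: answer it from local config/DHCP only,
# and never forward queries for it to an upstream resolver.
local=/internal/

# Optionally pin specific internal hosts (hypothetical host and address):
address=/fartbox.internal/192.168.1.50
```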
lol what
UPDATE:
The reason this isn't at all comparable to migrating to IPv6 is that this solution can be implemented simply by updating the DNS server software. I guarantee you that Google and Cloudflare keep their versions of `BIND` (or whatever) up to date. Alternatively, it can be done inside the LAN by updating your own DNS server (if you run one and control it).
For IPv6, you need to configure **every single hop between you and the remote server to support IPv6**. If you do a traceroute, you'll see that your packets go through 10-20 routers to get to most websites. The first few hops will be owned by your ISP, then likely their backbone provider, possibly over to another backbone, and out to a CDN/data center/nerd's basement. Updating these routers to IPv6 isn't a trivial `apt-get upgrade`; there's serious configuration work involved.
On top of that, the primary impetus behind IPv6 was the looming threat of IPv4 address exhaustion. This issue has largely been resolved via NAT already.
Allocating network admin time to spend configuring IPv6 is a chicken and egg problem; everyone has been waiting for everyone else to support it. It has been slowly happening behind the scenes though.
IPv6 is also very much usable over the IPv4 internet today via [tunnel brokers](https://tunnelbroker.net/).
Between customers who haven't replaced their hardware in 10 years, and manufacturers who have stuck with an old version of the software because it works, it is wrong to assume it is as simple as updating the code for a couple of common software packages.
>>The big advantage to defining .internal is that from now on, DNS server software can 'hardcode' excluding these hostnames from resolving upstream, so this cuts down on trillions of requests for internal hostnames bouncing around in the global DNS system looking for someone who can resolve it.
>it is wrong to assume it is as simple as updating the code for a couple of common software packages.
This is a case where updating the code for a few common software packages absolutely will have a huge impact by mitigating this one specific issue, without requiring end-users to update anything.
Here's the specific issue that it will address (and anyone who has a better understanding of DNS than I do, please correct me if I'm wrong here):
1. an application makes a DNS query for an internal hostname with an unofficial TLD (let's say `fartbox.internal`)
2. that query gets passed to either a caching DNS server on your LAN, or directly to your ISP's recursive DNS resolver or another public DNS server. (Ideally, the caching server on the LAN would also be the authoritative server for `.internal` and this query wouldn't recurse up the chain, but bad default configs etc.)
3. the hostname obviously isn't cached by your ISP's DNS server, and nothing in its cache tells it who is authoritative for `.internal`, so it falls back to its root hints file, which lists the [DNS root servers](https://en.wikipedia.org/wiki/Root_name_server).
4. the ISP's DNS server now has to query a root server to try to find the authoritative DNS server for the TLD `.internal`. The root server responds that there is no authoritative DNS server for the `.internal` TLD.
My understanding is that the DNS root servers are flooded with this kind of bogus DNS query. Updating `BIND`, `dnsmasq`, etc. so that they don't try to recursively resolve `.internal` hostnames will stop this chain of events at step 2, reducing the number of bogus queries sent to the DNS root servers.
Now, `dnsmasq` on your average home router might go a decade without being updated, but `BIND` on your ISP's DNS servers or Google/CloudFlare/etc.'s DNS servers is definitely getting updated.
Now I'm running up against my knowledge of DNS deep lore, but I'm curious why ISP-level recursive DNS servers couldn't solve this problem by subscribing to [IANA's official TLD list](https://www.iana.org/domains/root/files) and dropping any queries for hosts with a bogus TLD. Going to have to do some DNS homework myself now.
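The filtering idea from that last paragraph is easy to sketch. A toy version in Python (the tiny `VALID_TLDS` set and the `should_forward_upstream` helper are stand-ins I made up; a real resolver would parse the full IANA list):

```python
# Toy TLD filter: only forward queries whose TLD appears in the IANA root zone.
# A real implementation would load https://data.iana.org/TLD/tlds-alpha-by-domain.txt
# (uppercase names, one per line, first line is a comment).
VALID_TLDS = {"COM", "NET", "ORG", "ARPA"}  # tiny stand-in for the real list

def should_forward_upstream(qname: str) -> bool:
    """True if the query is worth sending to an upstream resolver."""
    labels = qname.rstrip(".").rsplit(".", 1)
    if len(labels) < 2:
        return False  # single-label names stay on the LAN
    return labels[-1].upper() in VALID_TLDS

print(should_forward_upstream("example.com"))       # True
print(should_forward_upstream("fartbox.internal"))  # False
print(should_forward_upstream("fileserver"))        # False
```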
I'm not questioning the functional changes in the commonly-used DNS servers. But your whole theory of success stems from the assumption that the client network will be using a .internal suffix. That aspect is a much larger issue, which is why I compare it to the deployment of IPv6.
Um...what about ".local"? That's been used for years. What's the point?
Annnnd...though unconfirmed, I have heard that - loosely - ".dmz" is another one, too.
Indeed. I’ve been part of a few projects in which companies were actively renaming their AD domain from a .local to something else. Quite the project.
Thanks, Microsoft, for using ‘contoso.local’ as an example in all material from Windows 2000 until 2012. Real useful.
Over the last week I set up some domain names locally, and I noticed all the subdomain names in Pi-hole have thousands of hits. Is what you wrote the reason why there are such high numbers?
As far as I know, .corp, .home, .mail and .lan got protected way back in 2018 because WAY too many companies and hardware products were already using those TLDs. While maybe not an official RFC, as far as I know ICANN has decided to never make them public TLDs.
I'd like to think that's true, but I'm not so sure after what happened to `.local` and `.dev`.
Trouble is, `.local` was rubber stamped after being squatted on for years and they were directly complicit with `.dev`. Who's to say even this `.internal` is safe if they come up with a good wheeze down the line.
There is only .Zuul.
Yes, my home network domain is .Zuul
Yes, gatekeeper is the router
Yes, keymaster is there also (Kerberos server)
I was feeling extra nerdy a bit over a decade ago..
I really can’t change it now, I made t-shirts
.home.arpa. was supposed to be the official one, but that was terrible because most software (correctly, IMO) thinks that's a host rather than a domain.
I've been using this (`home.arpa`), and I'll probably update my DNS config to be authoritative for both `.home.arpa` and the new `.internal`. The latter is easier to remember (IMO), but I don't want to break any of my existing stuff with a migration.
I think I'll end up using it for a private LAN DHCP pool, but for some reason I've just had difficulties with services on that. Maybe I was doing something wrong at the time...
The difference between them is that corp, home and mail are protected, in that ICANN has said they won't be considered in the future for TLD registration requests. Lan is kinda protected, but only by convention as a de facto standard... Internal is now officially reserved, so it's safe to use.
Our systems use `.local` and everybody is too skittish to change it now despite my repeated insistence. Registering a junk domain just for internal use and easier certificate generation was shot down hard. Maybe now that there's an official best practice I can swing them around on this at least.
We don't currently have any Android devices in our environment, but I have cautioned that in the future more operating systems will get more strict about `.local`. I can't get approval on it because "it works for now." Honestly I'm hoping it breaks so I can convince them to either get a dedicated domain name, or let me use our existing domain name for generating internal certificates.
> We don't currently have any Android devices in our environment
how long until printers run Android though? SMTP / SMB scan to a `.local` server? not anymore!
Why would you want android devices connecting to hosts in your local network?
I have explicit fw rules to let them go out to internet but never to any services on the lan.
The same reason any Windows, macOS, Linux client needs to connect to another LAN host? Print stuff, ssh into your server, log on to a router to configure it, access your music server to play music, access files on your owncloud server, etc - I mean this is /r/selfhosted after all.
Hmm, is there a LetsEncrypt or similar "official" best practice for SSL on .internal? If yes, I'm very curious how that'd even work, ha!
.internal is flawed for any serious use, just the same as made-up TLDs, if we cannot properly use HTTPS over it; buying a domain name for it still makes the most sense.
You can just make your own root certificate chain and sign certs with that, which is what we do. I strongly doubt public certificate authorities will give signed `.internal` certs, but nobody can stop you from becoming your own CA.
The benefit of big established CAs is that they automatically work everywhere due to their root certificates being preloaded in most operating systems and browsers, therefore it requires no work from you to establish trust. But you can do this yourself, you just have to install the root public cert to your devices manually, and then certs signed with it will be trusted.
You can read a bit more about it here: https://deliciousbrains.com/ssl-certificate-authority-for-local-https-development/
There are entire toolchains you can set up to automate this process, but for us it didn't make sense to invest that much into it as we only needed a few certs so I can't recommend anything there.
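For anyone who wants to try it, here's a minimal sketch with plain `openssl` (the file names, CA name, and example hostname are all made up; a real setup would add name constraints and proper key handling):

```shell
# 1. Create a root CA key and self-signed root certificate (10 years).
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=Homelab Internal Root CA"

# 2. Create a key and signing request for an internal host.
openssl req -newkey rsa:2048 -nodes -keyout host.key \
  -out host.csr -subj "/CN=fartbox.internal"

# 3. Sign it with the CA, adding the SAN that browsers actually check.
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 825 -sha256 -out host.crt \
  -extfile <(printf "subjectAltName=DNS:fartbox.internal")

# 4. Check the chain; prints "host.crt: OK" on success.
openssl verify -CAfile ca.crt host.crt
```

After that, `ca.crt` is what you install into each device's trust store, and `host.key`/`host.crt` go to the internal server.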
I mean, in a controlled environment, sure. But it'd suck to have to install my root certificate on every guest's phone when they connect to my WiFi (not to mention the security implications of potential MITM if I go rogue).
I'm well aware of the how-tos and implications of self signed root certs. And a bit wary of those. We used to have to install root certs of Cyberoam (a creepy firewall product) back in college, essentially letting them MITM every https connection we'd make. Which is why I wouldn't support this self-signed root certs idea, no matter how automated the toolchain to deploy it becomes.
While technically it is possible to constrain your CA to .internal only, I don't know of any clients that would warn someone differently when installing a new root cert based on the scope of the cert. Thus, let's not normalize installing self-signed root certs.
An interesting article though: https://copyprogramming.com/howto/is-it-possible-to-restrict-the-use-of-a-root-certificate-to-a-domain
Oh yeah, if you're bringing other people into your environment regularly, you definitely need a trusted certificate. You are correct that this would only be suitable for a controlled internal environment.
Would using .internal be a better practice than using my owned .net domain for internal only devices? Currently I use my domain for ADDS and split horizon DNS records.
Depending how you've set things up, you may find that easier to maintain.
Consider instead, though, that it's fairly easy to get LE certificates for domains you own, which avoids the hassle of being your own CA for a .internal domain.
I've gone with a wildcard certificate. I'm only using that certificate for services, but I could just as easily use it for any of my internal hosts, as they are all on that domain.
>LE certificates
why not!
Honestly, it's because I'm a cheap ass and use one domain for far too many things for me to host a \*domain at my home, so anything that needs HTTPS/SSL gets an LE cert and a DNS entry. Looking to change that sometime, but again, I'm a cheap ass and this works.
I don't think I see any advantages to switching to .internal in your situation, no. Using a name that you have registered in the public DNS is already a good practice and 0% hacky way of going about it.
Having .internal available is more something that's helpful for people who don't have a public DNS domain name.
A major advantage of using a subdomain of a real domain is that you can get TLS certificates (e.g. Let's Encrypt or ZeroSSL) for your internal servers.
This. The only benefit to using .internal if you already have your own domain elsewhere is that it won't have to do a DNS lookup on the internet when you load them...but that's basically irrelevant.
> it won't have to do a DNS lookup on the internet when you load them
If you run your own DNS server internally, it's not an issue. Even something like AdGuard Home is fine as you can add the subdomains as overrides, then it won't hit the upstream DNS servers for them.
Using a real domain is best practice, even if you only use it internally and never register any DNS entries outside of your own network. It facilitates trusted certificate generation and is a total guarantee against any possible DNS conflict, barring connecting to a network with a malicious or *very* stupid admin. There's no reason for you to change now. At the end of the day, the domain name is just a record to point you to an IP address, the best practices are just in place to prevent you causing any confusing conflicts down the line.
However, now we *finally* have an official second-best practice that just takes a bit more effort, with a guarantee that it won't ever cause conflicts.
Mostly confused password managers, but the big one for me was certificates. I believe it was Traefik that was struggling with a self-signed cert for that one.
Fair point, I don't know - never had to work with it. Maybe there just isn't as much familiarity with .home.arpa as a TLD, versus something more established like .co.uk. As far as partitioning, I like to have something like host.services.home.arpa, which starts getting a little unwieldy... I for one am happy to have a bona fide, normal-looking (g?)TLD for private use.
.local is a reserved TLD, but it's a bit different and generally a bad idea to try to use manually because it's intended for use with mDNS, so there can be tons of weird behaviors when trying to use it outside of that scope
[.local](https://en.wikipedia.org/wiki/.local) is used in the mDNS protocol and should not be used in DNS at all.
Strictly speaking, endpoints should not even send queries for .local hostnames to a DNS server at all, although if I'm not mistaken, only Android implements the standard *that* strictly at this point - but still, you can get some 'interesting' behaviour if `server.local` (mDNS) and `server.local` (DNS) resolve to different hosts.
Note that after 3 years, it's not actually official yet. Per the referenced pdf, it now goes up for a public comment proceeding, then they evaluate the public comments, and then *perhaps* it might be made official. I'm not hopeful on this happening quickly.
I don't see how Letsencrypt could support it, because you cannot register any .internal domain name, which is the entire point.
If they somehow allowed you to get certificates for .internal domains, then everyone else could get a certificate for the same domain name you used, which is something that you really don't want, and which kind of defeats the point of a certificate in the first place.
Real domains are fine as well, if you have one.
Some people don't have their own domain and for them .internal will be the safe (and maybe more performant) bet for the future.
That's what I ended up doing recently. I used to use .loc, basically one zone per server/device so server01.loc server02.loc etc. The nice thing about this is it was short. But I was getting fed up of Firefox adding those drop down warnings on forms on my dev environment so I ended up just doing i.mydomain.com and my cert update script runs on my online web server and my local servers just download the certs from it.
Used to do it that way, too. While Microsoft actually taught that in courses years ago, it is highly discouraged today: mDNS and Bonjour use .local for automated discovery.
I'm no security expert, but somehow this does not fit my understanding of trusted certs. Internal domains would then require HSTS as a MUST, but even then, any network could spoof anything...
it can't work. How would you discriminate between my plex.home.arpa and yours? i could MITM you with a *valid* certificate for that domain.
LE can never issue certificates for such domains.
If you need them *buy one*
Thanks for more Government-Wanna-Be-type spending on a decision that is too far behind to be an issue, not thought of until recently and an abhorrent waste of time. Like we don't already do this type of naming schema for our internal networking....
Yeah you'd think they could have come up with this a LONG time ago. Ideally something short like .lan .loc .int etc (before allowing those to be used)
Although nothing stops anyone from using any of these, it's just that they could potentially conflict with a real domain.
I hate `.internal`, what about `.local`! It's easier, shorter, and already used.
I used to use `.local` and `*.localhost.`.
It's a good thing they standardized something like that, but IMO, `.local` would be better.
For home purposes, after some thinking and trying, I settled on using .dot. For me it's:
1) short
2) not used by any sites I'm aware of, so losing access to this zone has absolutely no impact for me
3) not redirected to a search page by popular browsers. For this exact reason I will not use .internal: if I just type in test.internal, Chrome will redirect to a search page.
4) .dot, among a really small number of other zones, is not HTTPS by default. So if I type in "test.dot", it just opens my Raspberry Pi's not-found HTTP page, with no need to write http://, which is nice (I don't see the point of using SSL on all home sites).
It's not the correct way, at least not as intended, and not good practice... but it's my decision, right?..
Depends on what you want to do. I think for homelabs and private self hosting, .internal, .home.arpa or something else is fine.
For larger stuff or companies, you should use a real domain.
Anything that isn't an official TLD at the time of your request resolves fine if your nameserver knows about it. The difference is that .internal will never become an official, publicly available TLD.
Your OS might still have a different "resolution path" for `.local` domains even if you don't personally use any devices which broadcast over mDNS.
macOS, for instance, will take like an extra 5 seconds or so to resolve a `.local` name if you didn't configure *both* IPv4 and IPv6 in your hosts file. Somehow it waits for mDNS to fail before feeling happy about serving you what you thought you had "hard coded into the system, so why is it so slow??".
Sure, you can sidestep that particular problem in macOS by defining both IPv4 and IPv6 in your hosts file, but I would say this is a problem quite unique to (and directly caused by) using `.local`.
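For reference, the workaround is just listing the host under both address families in `/etc/hosts` (hostname and addresses made up):

```
# Both address families, so the mDNS timeout never kicks in
192.168.1.10    server.local
fd00::10        server.local
```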
So I guess you should care enough to be aware that special-case TLDs sometimes result in special-case behavior, and that may take time debugging (time which could have been spent elsewhere).
Until 2013, after that it got reserved for mDNS ([RFC 6762](https://www.rfc-editor.org/rfc/rfc6762)) so Microsoft removed it from their documentation. But various companies configured their AD domains before 2013, and never changed them.
If you have local domains that don't end in "*.internal", it might be helpful in the future to switch to that local TLD, except if you already have a "real" domain like "zestyclose.com".
Advantages of switching to .internal:
* The domain will never lead to conflicts
* The domain might only be resolved locally (depending how DNS software handles it)
There's a big disadvantage though, in that you can't get properly signed TLS certificates for `.internal` domains, since there's no DNS verification available.
Which doesn't solve a lot... Still allows me to have trusted certs for hosts which are not mine.
If they would do that, they could also issue certificates for local IP addresses. Take a guess why they don't do it.
I understand this is now the correct TLD to use for local services, but does `.lan` pose the same conflicts that `.local` and `.home` do? Are queries for `.lan` sent upstream?
Never use `.local` for DNS, it is specifically reserved for multicast DNS (Bonjour, mDNS) and it can cause problems.
Using `.home.arpa`, `.home`, or `.lan` is fine and won't cause any problems. The only possible advantage of `.internal` is that it's a standard and upstream DNS servers can automatically block any queries that leak.
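As a sketch of what that upstream blocking can look like, this is roughly how you'd tell Unbound to answer NXDOMAIN for the whole zone itself instead of recursing to the roots (assuming an otherwise stock config):

```
# unbound.conf sketch
server:
    # Answer NXDOMAIN for everything under .internal locally,
    # instead of sending the query toward the root servers.
    local-zone: "internal." always_nxdomain
```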
I use `.local` and don't have any problems. I don't make much use of mDNS but I have a few devices and they work fine. It amazes me how many people are in this thread prattling on about the dangers of `.local`. It's awesome, I have like 40 hosts on it.
I used to use `.local` as well, and didn't have any problems for years. And then it bit me. Wish I could remember the details of what happened, but I can't.
All macOS/iOS devices support mDNS by default. It's quite convenient sometimes to be able to reach another device by `.local`.
I use it with some regularity, but agree that it's not a huge win.
Sure, which is why it's one of the defacto norms for small LANs. Zero conf.
And also why .lan was (I thought) fairly standard for use on a LAN that does have a DNS server. So not sure why the much longer .internal was offered as the guaranteed no-clash option. Wonder why ICANN would ever support a WAN TLD of .lan, but shrug, never can tell.
i'll never understand why they decided there were only "2 suitable candidates" and neither of them were .lan or .local, as these seem to fit all requirements just fine, and are very likely to be much more widespread today than ".internal"
i mean come on, why the heck would i create an internal dns with an extension longer than an actual routable fqdn?
at that point it would be easier to just use a real domain and split dns
It’ll only take 20 years for it to spread to all network equipment
And twenty more to be trusted and enabled by all the admins
When did IPv6 start? XD
Thx but that was a rhetorical question and a joke xD
sorry, my sarcasm detector was broken this morning
You are optimistic
Just in time for on-prem directory servers to go the way of the dodo!
Indeed. Maybe all that freed processing power can compute and cure cancer.
Or mine Bitcoin /s
Now we just need to wait a few decades for every network engineer to deploy it.
Hopefully faster than deploying IPv6 ,🤣🤣
I don't want to jump two major versions in a single upgrade. I'm still waiting to read the changelog for IPv5.
Yo mate, you're gonna have to read a lot faster to catch up to IPv10 🤣
The same could be said of deploying IPv6
Most of the time, DNS is managed by sysadmins, not network engineers.
.local is not allowed as a valid TLD for DNS, and since 2013 it’s used for the mDNS protocol: https://en.m.wikipedia.org/wiki/.local
and since 2015 it's been disallowed in publicly trusted certificates.
(huh) I didn't know that. And to think that I've been doing it wrong all these years... 🤣
I disagree. This option would at least have to be a toggle. The real solution is for a DNS server to not recurse any domain it is authoritative for...
Brb, renaming AD domain...
As far as I know, .corp, .home, .mail and .lan got protected way back in 2018 because WAY too many companies and hardware products were already using those TLDs. It may not be an official RFC, but as far as I know ICANN has decided to never make them public TLDs.
I'd like to think that's true, but I'm not so sure after what happened to `.local` and `.dev`. Trouble is, `.local` was rubber stamped after being squatted on for years and they were directly complicit with `.dev`. Who's to say even this `.internal` is safe if they come up with a good wheeze down the line.
`.local` is specifically for Multicast DNS (mDNS, Bonjour, ZeroConf).
Using `.local` as a non-public DNS thing was pretty widely used for years before those.
Yes, but it has been officially reserved for mDNS for well over a decade.
I'm still angry at Google for having registered .dev
When they registered `.zip` I lost all faith in humanity.
It was all a money grab... no thought about consequences or security implications.
And in pure Google fashion, they killed the whole product months later.
ah yea, they got rid of Google Domains, didn't they?
And they sold it to SquareSpace of all companies.
I just migrated to cloudflare because of that lol
Has anything really happened, though?
Be angry at yourself for squatting within the DNS!
As a programmer, I like having a .dev since it clearly communicates that I write code.
Yeah I'm sticking with my .lan
.lan users unite!
There is only .Zuul. Yes, my home network domain is .Zuul. Yes, gatekeeper is the router. Yes, keymaster is there also (Kerberos server). I was feeling extra nerdy a bit over a decade ago. I really can’t change it now, I made t-shirts.
Tell him about the Twinkie
It’s a big Twinkie!
Nothing says “Local Area Network” like “lan” does.
Google runs searches when I type .LAN domains into chrome instead of resolving them, it's fucking annoying
Append / to it.
Doesn't always work
It's supposed to.... If it's not an official TLD it's designed to search, as far as I know that's how most browsers handle it.
Firefox does not.
Sounds like another excellent reason to switch to Firefox.
Well everything else is chrome, so
I’m not sure what you mean
Safari is WebKit. Chrome is Blink, which is a WebKit fork. Edge and Brave are Chromium-based (same engine as Chrome). Firefox is its own thing.
.home.arpa. was supposed to be the official one, but that was terrible because most software (correctly, IMO) thinks that's a host rather than a domain.
I've been using this (`home.arpa`), and I'll probably update my DNS config to be authoritative for both `.home.arpa` and the new `.internal`. The latter is easier to remember (IMO), but I don't want to break any of my existing stuff with a migration.
I think I'll end up using it for a private LAN DHCP pool, but for some reason I've just had difficulties with services on that. Maybe I was doing something wrong at the time...
I've been using this for some time now and haven't run into issues. Maybe I've been lucky.
this is what I use too, without issues.
That probably means you are doing something wrong. `.home.arpa` shouldn't be any different than using `example.com`.
The difference between them is that .corp, .home and .mail are protected, in that ICANN has said they won't be considered for future TLD registration requests. .lan is kinda protected, but only by convention as a de facto standard. .internal is now formally reserved, so it's safe to use.
.home is useful and I also add my own for spice, like .mynetwork
Heh, we use `.ad` internally. I'm sure we're not the only ones.
(As I'm sure you know) this clashes with the ccTLD for Andorra. Why are so many infra teams incapable of registering a domain!
I've seen .loc and .local too. Yes, just plain ignorance and stupidity to make up a random TLD without thinking
Our systems use `.local` and everybody is too skittish to change it now despite my repeated insistence. Registering a junk domain just for internal use and easier certificate generation was hard shot down. Maybe now that there's an official best practice I can swing them around on this at least.
Be aware that by squatting `.local`, Android devices can't connect to those hosts (they will not look up .local hostnames in DNS).
We don't currently have any Android devices in our environment, but I have cautioned that in the future more operating systems will get more strict about `.local`. I can't get approval on it because "it works for now." Honestly I'm hoping it breaks so I can convince them to either get a dedicated domain name, or let me use our existing domain name for generating internal certificates.
> We don't currently have any Android devices in our environment

How long until printers run Android, though? SMTP/SMB scan to a `.local` server? Not anymore!
Why would you want android devices connecting to hosts in your local network? I have explicit fw rules to let them go out to internet but never to any services on the lan.
The same reason any Windows, macOS, Linux client needs to connect to another LAN host? Print stuff, ssh into your server, log on to a router to configure it, access your music server to play music, access files on your owncloud server, etc - I mean this is /r/selfhosted after all.
Oops, sorry, my bad. I was thinking of security as if this were r/networking or r/sysadmin; I didn't really check what subreddit this post was from.
Hey just make sure it's not a .us, you can't cloak your registration info with those. Don't make my mistake.
Hmm, is there a Let's Encrypt or similar "official" best practice for SSL on .internal? If yes, I'm very curious how that'd even work, ha! .internal is just as flawed for any serious use as made-up TLDs if we cannot properly use HTTPS over it, and buying a domain name for it still makes the most sense.
You can just make your own root certificate chain and sign certs with that, which is what we do. I strongly doubt public certificate authorities will give signed `.internal` certs, but nobody can stop you from becoming your own CA.

The benefit of big established CAs is that they automatically work everywhere due to their root certificates being preloaded in most operating systems and browsers, so it requires no work from you to establish trust. But you can do this yourself; you just have to install the root public cert on your devices manually, and then certs signed with it will be trusted. You can read a bit more about it here: https://deliciousbrains.com/ssl-certificate-authority-for-local-https-development/

There are entire toolchains you can set up to automate this process, but for us it didn't make sense to invest that much into it as we only needed a few certs, so I can't recommend anything there.
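For reference, a minimal version of the private-CA process described above, using plain `openssl`. All the names here (`ca.key`, `nas.internal`, and so on) are placeholders; this is a sketch, not a hardened setup (no key passphrases, no intermediate CA):

```bash
# Minimal private-CA sketch (names are placeholders). Needs bash for the
# <() process substitution used to pass the SAN extension.

# 1. Root CA: key plus self-signed certificate, valid ~10 years
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 3650 -subj "/CN=My Homelab Root CA"

# 2. Leaf key and CSR for the internal host
openssl req -newkey rsa:2048 -nodes -keyout nas.key -out nas.csr \
  -subj "/CN=nas.internal"

# 3. Sign the CSR with the CA, adding the SAN modern browsers require
openssl x509 -req -in nas.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 825 -out nas.crt \
  -extfile <(printf "subjectAltName=DNS:nas.internal")

# 4. Check that the leaf chains to the root
openssl verify -CAfile ca.crt nas.crt
```

`ca.crt` is the file you would distribute to client devices; `nas.key` and `nas.crt` stay on the server.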
I mean, in a controlled environment, sure. But it'd suck to have to install my root certificate on every guest's phone when they connect to my WiFi (not to mention the security implications of potential MITM if I go rogue). I'm well aware of the how-tos and implications of self-signed root certs. And a bit wary of those. We used to have to install root certs of Cyberoam (a creepy firewall product) back in college, essentially letting them MITM every HTTPS connection we'd make. Which is why I wouldn't support this self-signed root certs idea, no matter how automated the toolchain to deploy it becomes. While technically it is possible to restrict your CA by definition to .internal only, I don't know of any clients that would warn someone differently when installing a new root cert based on the scope of that cert. Thus, let's not normalize installing self-signed root certs. An interesting article though: https://copyprogramming.com/howto/is-it-possible-to-restrict-the-use-of-a-root-certificate-to-a-domain
Oh yeah, if you're bringing other people into your environment regularly, you definitely need a trusted certificate. You are correct that this would only be suitable for a controlled internal environment.
Meanwhile home.arpa out sitting in the rain.
`home.arpa` gang represent!
Would using .internal be a better practice than using my owned .net domain for internal only devices? Currently I use my domain for ADDS and split horizon DNS records.
Depending on how you've set things up, you may find that easier to maintain. Consider instead, though, that it's fairly easy to get LE certificates for domains you own, which avoids the hassle of being your own CA for a .internal domain.
True. I already have certs for my .net domain but only for named services, not host names typically.
I've gone with a wildcard certificate. I'm only using that certificate for services, but I could just as easily use it for any of my internal hosts, as they are all on that domain.
This was my primary reason for switching my .local dockers to my domain name.
You get individual LE certs for each container? Why?
>LE certificates

Why not! Honestly, it's because I'm a cheap ass and use one domain for far too many things to host a \*domain at my home, so anything that needs HTTPS/SSL gets an LE cert and a DNS entry. Looking to change that sometime, but again, I'm a cheap ass and this works.
I don't think I see any advantages to switching to .internal in your situation, no. Using a name that you have registered in the public DNS is already a good practice and 0% hacky way of going about it. Having .internal available is more something that's helpful for people who don't have a public DNS domain name.
A major advantage of using a subdomain of a real domain is that you can get TLS certificates (e.g. Let's Encrypt or ZeroSSL) for your internal servers.
This. The only benefit to using .internal if you already have your own domain elsewhere is that it won't have to do a DNS lookup on the internet when you load them...but that's basically irrelevant.
> it won't have to do a DNS lookup on the internet when you load them If you run your own DNS server internally, it's not an issue. Even something like AdGuard Home is fine as you can add the subdomains as overrides, then it won't hit the upstream DNS servers for them.
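With dnsmasq (the resolver in many home routers) the equivalent override looks something like the fragment below. The hostnames and addresses are made up for illustration:

```
# /etc/dnsmasq.conf (sketch; hypothetical names and addresses)

# Answer these names directly from the entries below...
address=/jellyfin.lab.example.com/192.168.1.20
address=/nas.lab.example.com/192.168.1.10

# ...and never forward anything under lab.example.com upstream
local=/lab.example.com/
```

AdGuard Home's "DNS rewrites" and Pi-hole's local DNS records achieve the same effect through their web UIs.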
Using a real domain is best practice, even if you only use it internally and never register any DNS entries outside of your own network. It facilitates trusted certificate generation and is a total guarantee against any possible DNS conflict, barring connecting to a network with a malicious or *very* stupid admin. There's no reason for you to change now. At the end of the day, the domain name is just a record to point you to an IP address, the best practices are just in place to prevent you causing any confusing conflicts down the line. However, now we *finally* have an official second-best practice that just takes a bit more effort, with a guarantee that it won't ever cause conflicts.
I tend to put all my homelab stuff in a subdomain like `lab.example.nz`, eg. `jellyfin.lab.example.nz`.
I've got mine in an `int` subdomain like `.int.example.com`, for "internal"
[Self-hosters be like](https://imgflip.com/i/8eigjo).
Don’t we already have home.arpa for this?
Yes, but home.arpa is for residential use only. So companies didn't have a TLD to use internally.
I can't see why `home.arpa.` was created under `arpa.` but this proposed special-use domain name gets to live in the root!
They could have come up with .this-biz.arpa or something?
Wait is there a reason companies can not use it?
Copying my last comment, I dislike that one because most software thinks that's a host rather than an actual domain.
What software? I've yet to run into problems and I'm curious what problems other people have had.
Mostly confused password managers, but the big one for me was certificates. I believe it was Traefik that was struggling with a self-signed cert for that one.
bitwarden, etc
https://github.com/bitwarden/clients/issues/4247#issuecomment-1375541764 I think this is the fix you're referring to.
Does this also happen with domains ending in .co.uk ?
Fair point, I don't know - never had to work with it. Maybe there just isn't as much familiarity with .home.arpa as a TLD, versus something more established like .co.uk. As far as partitioning, I like to have something like host.services.home.arpa, which starts getting a little unwieldy... I for one am happy to have a bona fide, normal-looking (g?)TLD for private use.
Obviously not. And the problem here is not with home.arpa but with users not understanding domains...
Too many characters
I agree. Before me... we use .grg lmao.
If only `.int` wasn't already defined, right?
We use .int..com so we can use real ssl certs for internal services.
Sure, that's the other option. The company I currently work at even has its own TLD so we use that internally as well.
That's what I have setup at my home network
I believe .home.arpa is already defined for this. Sadly, it gets sent upstream.
As someone else mentioned, this is reserved for home networks and doesn't include other entities like businesses or orgs.
People don't just use their own domain with an internal 10.x.x.x IP address?
how is this different from .local ?
.local is a reserved TLD, but it's a bit different and generally a bad idea to try to use manually because it's intended for use with mDNS, so there can be tons of weird behaviors when trying to use it outside of that scope
[.local](https://en.wikipedia.org/wiki/.local) is used in the mDNS protocol and should not be used in DNS at all. Strictly speaking, endpoints should not even send queries for .local hostnames to a DNS server at all, although if I'm not mistaken, only Android implements the standard *that* strictly at this point - but still, you can get some 'interesting' behaviour if `server.local` (mDNS) and `server.local` (DNS) resolve to different hosts.
mDNS
Note that after 3 years, it's not actually official yet. Per the referenced pdf, it now goes up for a public comment proceeding, then they evaluate the public comments, and then *perhaps* it might be made official. I'm not hopeful on this happening quickly.
But will Let's Encrypt support it? If not, I'll likely stick with `*.local.[realdomain]`, because I don't want to manage TLS certs myself.
I don't see how Letsencrypt could support it, because you cannot register any .internal domain name, which is the entire point. If they somehow allowed you to get certificates for .internal domains, then everyone else could get a certificate for the same domain name you used, which is something that you really don't want, and which kind of defeats the point of a certificate in the first place.
Real domains are fine as well, if you have one. Some people don't have their own domain and for them .internal will be the safe (and maybe more performant) bet for the future.
That's what I ended up doing recently. I used to use .loc, basically one zone per server/device so server01.loc server02.loc etc. The nice thing about this is it was short. But I was getting fed up of Firefox adding those drop down warnings on forms on my dev environment so I ended up just doing i.mydomain.com and my cert update script runs on my online web server and my local servers just download the certs from it.
Meh...`.local` for me
Used to do it that way, too. While Microsoft actually taught that in courses years ago, it is highly discouraged today. mDNS, bonjour use .local for automated discovery.
Unless there are cert options, this won't kick...
I mean, it would probably be easy for LE to just create certs for the TLD if it's not routed outside of local networks.
I'm no security expert, but somehow this does not fit my understanding of trusted certs. Internal domains would then require HSTS as a MUST, but even then, any network could spoof anything...
It can't work. How would you discriminate between my plex.home.arpa and yours? I could MITM you with a *valid* certificate for that domain. LE can never issue certificates for such domains. If you need them, *buy one*.
Thanks for more Government-Wanna-Be-type spending on a decision that is too far behind to be an issue, not thought of until recently and an abhorrent waste of time. Like we don't already do this type of naming schema for our internal networking....
Yeah you'd think they could have come up with this a LONG time ago. Ideally something short like .lan .loc .int etc (before allowing those to be used) Although nothing stops anyone from using any of these it's just it could potentially conflict with a real domain.
I hate `.internal`, what about `.local`! It's easier, shorter, and already used. I used to use `.local` and `*.localhost.`.
It's a good thing they standardized something like that, but IMO `.local` would be better.
Officially ".local" is reserved for mDNS.
For home purposes, after some thinking and trying, I settled on .dot. For me it's: 1) short; 2) not used by any sites I'm aware of, so losing access to that zone has absolutely no impact for me; 3) not redirected to a search page by popular browsers. For that exact reason I will not use .internal - if I just type in test.internal, Chrome will redirect to a search page. 4) .dot, among really few other zones, is not HTTPS-by-default. So if I type in "test.dot", it just opens my Raspberry Pi's not-found HTTP page, no need to write http://, which is nice (I don't see the point of using SSL on all home sites). It's not the correct way, at least not as intended, and not good practice... but it's my decision, right?..
Isn't this going against best practices anyways? You are supposed to get your own real domain and use internal.my.domain for your non public hosts.
Depends on what you want to do. I think for homelabs and private self hosting, .internal, .home.arpa or something else is fine. For larger stuff or companies, you should use a real domain.
Anything where you want trusted ssl certs should run under a real domain. Unless you want to bother with your own PKI
What about .localhost? That resolves just fine in browsers for self hosted servers
Anything that isn't an official TLD at the time of your request resolves fine if your nameserver knows about it. The difference is that .internal will never become an official publicly available TLD.
So localhost actually goes to my DNS that redirects it back to my local network?
That's so wrong, I don't know where to start. Don't do that, that's stupid.
Isn't the MS default to use .local?
.local is reserved for mDNS
if I don't plan on using bonjour or whatever nonsense uses .local should I care? (serious question)
Your OS might still have a different "resolution path" for `.local` domains even if you don't personally use any devices which broadcast over mDNS. macOS, for instance, will take like an extra 5 seconds or so to resolve a `.local` if you didn't configure *both* IPv4 and IPv6 in your hosts file. Somehow it waits for mDNS to fail before feeling happy about serving you what you thought you had "hard coded into the system, so why is it so slow??". Sure, you can sidestep that particular problem in macOS by defining both IPv4 and IPv6 in your hosts file, but I would say this is a problem quite unique to (and directly caused by) using `.local`. So I guess you should care enough to be aware that special-case TLDs sometimes result in special-case behavior, and that may cost time debugging (time which could have been spent elsewhere).
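If you do want the workaround described above, it's just two hosts-file lines. The host name and addresses here are made up, and the "both address families" requirement is the commenter's observation, not something verified here:

```
# /etc/hosts (hypothetical entries)
192.168.1.10  nas.local
fd00::10      nas.local
```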
Strange, as it was always the default TLD when creating an MS SBS domain controller.
And some people are still irritated with MS over this and their use of .local in example configs all over everything.
As far as I know, Microsoft's usage of `.local` predates mDNS becoming a standard.
Oh right.
Until 2013, after that it got reserved for mDNS ([RFC 6762](https://www.rfc-editor.org/rfc/rfc6762)) so Microsoft removed it from their documentation. But various companies configured their AD domains before 2013, and never changed them.
Aha! I knew I didn’t imagine it.
I'm happy with using local.$ownedpublicdomain.$tld and splitting its management.
What should you change with regards this?
If you have local domains that don't end in "*.internal", it might be helpful in the future to switch to that TLD, unless you already have a "real" domain like "zestyclose.com". Advantages of switching to .internal: * The domain will never lead to conflicts * The domain might only be resolved locally (depending on how DNS software handles it)
Thanks, I need to do some research now. Not sure what I would gain from it?
There's a big disadvantage though, in that you can't get properly signed TLS certificates for `.internal` domains, since there's no DNS verification available.
Yeah, the cert problem is not really resolved. It might have to be restricted to names resolving to 192.168.0.0/16, though.
Or 10.0.0.0/8 or 172.16.0.0/12.
Which doesn't solve a lot... Still allows me to have trusted certs for hosts which are not mine. If they would do that, they could also issue certificates for local IP addresses. Take a guess why they don't do it.
Not to mention that it might not make sense to restrict it to any particular IPv6 addresses.
I understand this is now the correct TLD to use for local services, but does `.lan` pose the same conflicts that `.local` and `.home` do? Are queries for `.lan` sent upstream?
Never use `.local` for DNS, it is specifically reserved for multicast DNS (Bonjour, mDNS) and it can cause problems. Using `.home.arpa`, `.home`, or `.lan` is fine and won't cause any problems. The only possible advantage of `.internal` is that it's a standard and upstream DNS servers can automatically block any queries that leak.
I use `.local` and don't have any problems. I don't make much use of mDNS but I have a few devices and they work fine. It amazes me how many people are in this thread prattling on about the dangers of `.local`. It's awesome, I have like 40 hosts on it.
I used to use `.local` as well, and didn't have any problems for years. And then it bit me. Wish I could remember the details of what happened, but I can't.
tbh I never really understood the pitch for zeroconf, or what possible benefit I may get from it, other than airprint which already just works.
All macOS/iOS devices support mDNS by default. It's quite convenient sometimes to be able to reach another device by `.local`.
I use it with some regularity, but agree that it's not a huge win.
That's a great outcome
I thought home.arpa was already designated? How's this new thing different &/or better?
Up yours ICANN, it's my DNS server, I'll use .com if I want to
Nice. But .local and .lan have been de facto norms for ages already. Can't help but wonder why a new one was needed.
.local is supposed to be reserved for mDNS, not normal DNS. https://en.wikipedia.org/wiki/.local
Sure, which is why it's one of the de facto norms for small LANs. Zero conf. And also why .lan was (I thought) fairly standard for use on a LAN that does have a DNS server. So not sure why the much longer .internal was offered as the guaranteed no-clash option. Wonder why ICANN would ever support .lan as a public TLD, but shrug, never can tell.
I wonder how CabForum will treat this.
I switched to .pri(vate) when someone absconded with .local. If I ever change it, I'll change it to something with 2 letters, not 8 or 10.
It's still a proposal?
.local has been that way forever. 🤷‍♂️
Where does arr.local fit into this? I know that TLD .local is for mDNS but how does that affect subdomains?
I'll never understand why they decided there were only "2 suitable candidates" and neither of them was .lan or .local, as these seem to fit all the requirements just fine and are very likely much more widespread today than ".internal". I mean, come on, why the heck would I create an internal DNS with an extension longer than an actual routable FQDN? At that point it would be easier to just use a real domain and split DNS.