Hi folks,
TL;DR: my remaining issue seems to be Firefox-specific; I’ve otherwise made it work in other browsers and on other devices, so I’ll consider this issue resolved. Thank you very much for all your replies and help! (Edit: this was also solved now, see EDIT-4.)
I’m trying to set up HTTPS for my local services on my home network. I’ve gotten a domain name, mydomain.tld, and my home server is running at home on, let’s say, 192.168.10.20. I’ve set up Nginx Proxy Manager, and I can access it using its local IP address, as I’ve forwarded ports 80 and 443 to it.
Hence, when I navigate on my computer to http://192.168.10.20/
I am greeted with the NPM default Congratulations screen confirming that it’s reachable. Great!
Next, I’ve set up an A record at my registrar pointing to 192.168.10.20.
I think I’ve been able to confirm this works, because when I check with an online DNS lookup tool like https://centralops.net/CO/Traceroute it says 192.168.10.20 is a special address that is not allowed for this tool. Great!
Now, what I’m having trouble with is the following: make it such that when I navigate to http://mydomain.tld/ I get to the NPM welcome screen at http://192.168.10.20/. When I try this, I’m getting the Firefox message:
Hmm. We’re having trouble finding that site.
We can’t connect to the server at mydomain.tld.
Strangely, whenever I try to navigate to http://mydomain.tld/ it redirects me to https://mydomain.tld/, so I’ve tried solving this with a certificate: I got a wildcard certificate via the DNS-01 challenge in NPM and set up a reverse proxy from https://mydomain.tld/ to http://192.168.10.20/ using that certificate, but it hasn’t changed anything.
I’m unsure how to keep debugging from here. Any advice or help? I’m clearly missing something in my understanding of how this works. Thanks!
EDIT: It seems several of you are confused by my use of internal IP addresses in this way; yes, it is entirely possible. Multiple people report using exactly this kind of setup; here are some examples.
EDIT-2: I’ve made progress. It seems I’m having two issues simultaneously. The first was that I was trying to test my NPM instance by attempting to reach the Congratulations page, served on port 80. That in itself was not working, as it ended in an infinite resolution loop, so exposing the admin page instead, on its default port 81, seems to work in some cases. And that’s due to the next issue, which is that on some browsers / with some DNS settings the endpoint can be reached, but not on others. For some reason I’m unable to make it work in Firefox, but in Chromium (or even in Vanadium on my phone) it works just fine. I’m still trying to understand what’s preventing it from working in Firefox; I’ve attempted multiple DNS settings, but it seems there’s something else at play as well.
EDIT-3: While I have not made it work in all the situations I wanted, I will consider this “solved”, because I believe the remaining issue is a Firefox-specific one. The errors I’ve addressed so far: I should not have tried to expose the NPM Congratulations page served on port 80, because that led to a resolution loop. Exposing the actual admin page on port 81 was a more realistic test of whether it worked. Then, it was necessary to set up forwarding for that page using something like https://npm.mydomain.tld/, linking it to the internal IP address of my NPM instance and port 81, while using the wildcard certificate for my public domain. Finally, I was testing exclusively in Firefox. While I also made no progress when using dig, curl or host, as suggested in the comments (which are still useful tools in general!), I managed to access my NPM admin page using other browsers and other devices, all from my home network (the only use case I was interested in). I’ll keep digging to figure out what specific issue remains with my Firefox. I’ve verified multiple things, like changing the DNS in Firefox (which seems not to work, showing Status: Not active (TRR_BAD_URL) in the Firefox DNS page, e.g. with base.dns.mullvad.net). Yet LibreWolf works just fine when changing DNS. Go figure…
EDIT-4: I have now solved it in Firefox too, thanks to @non_burglar@lemmy.world! So it turns out Firefox has a system around DNS resolution called TRR (Trusted Recursive Resolver). You can read more about it here: https://wiki.mozilla.org/Trusted_Recursive_Resolver Firefox has a number of TRR settings that restrict how far DNS can be customized, with defaults that prevent my use case. Open the Firefox config page at about:config, search for network.trr.allow-rfc1918 and set it to true. This solved it for me: it allows DNS answers from the trusted resolver that point to local (RFC 1918) addresses, which Firefox rejects by default. You can read more about RFC 1918 here: https://datatracker.ietf.org/doc/html/rfc1918
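If you want that pref to persist or to apply it to a fresh profile, a minimal sketch is to put it in the profile’s user.js (standard Firefox prefs syntax; the profile directory is whatever yours happens to be):

    // user.js in your Firefox profile directory
    // accept TRR/DoH answers that resolve to RFC 1918 (private) addresses
    user_pref("network.trr.allow-rfc1918", true);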
I’ll probably still look into making other DNS servers usable, such as base.dns.mullvad.net, which is impossible to use in Firefox by default…
The obvious question: Do you want to access your server only from within your network or also from anywhere else?
Good question. I’m only interested in accessing it from my home network and through my tailscale network.
Then you don’t need to inform the rest of the world about your domain. Just use the hostname of the server on your tailnet and it should work all the time
Wouldn’t that require me to use Tailscale even at home on my home network? It also does not provide HTTPS unless maybe you use MagicDNS, but then we’re back to using a public domain, I guess.
Yeah, it would, but it would not route your traffic through the internet if both devices can communicate with each other over the LAN.
DNS has nothing to do with SSL. Tailscale provides routing. It does not change the applications running on the server.
Tailscale has some convenient features to set up SSL, but you cannot choose the domain name freely. I bet you can purchase a domain which redirects to the Tailscale domain if you want.
You set the A record to your internal IP address from within your router?
Nginx configs have a lot of options; you can route differently depending on the source context.
So a couple questions:
- Do you only want to access this from your local network? If so, setting up a domain name on the broader internet makes no sense: you’re telling the whole world which local IP behind your switch/router is your server. Make your own DNS or something if you just want an easier way to hit your local resources.
- Do you want to access this from the internet, like when you’re away from home? Then the IP address you add to your A record should be the public IP address your ISP assigned you (it will not start with 192.168), and you have your modem forward the ports to your local system (nginx).
If you don’t know what you are doing and don’t have a good firewall setup, do not make this service public; you will receive tons and tons of attacks just for creating a public A record.
The A record was set on my registrar, so on a public DNS, so to speak.
- It allows me to use HTTPS on a private service without setting up any custom DNS locally, without using any self-signed certificates, and with all my IP addresses staying private. It’s a good solution for me: I get real certificates via the standard public infrastructure while keeping everything private. What’s the danger of sharing with the outside world that my private server is reachable at 192.168.10.20? What could anyone do with that information?
- I use my Tailscale network, to which I expose my local network, to allow remote access. Works great for me.
Next I would examine the redirect and check your stack: is it a 302, 304, etc.? Is there a service-identifying header sent with the redirect?
After that I would try to completely change your setup for testing purposes: greatly simplify things, removing as many variables as possible; maybe set up an API server with a single route on Express or something and see if that can be served faithfully.
If you can’t serve even a simple setup, then you need to go back to the drawing board and try a different option.
Opening up the network developer tools in Firefox, I’m seeing the following error:
NS_ERROR_UNKNOWN_HOST
, though I haven’t been able to determine how to solve this yet. It does make sense, because it would also explain why curl is unable to resolve it if the nameserver is unreachable. I’m still confused, though, because Cloudflare, Google, and most other DNS servers I’ve tried work without issue. Even setting Google’s DNS in Firefox does not resolve it.
You can’t point to 192.168.X.X; that’s your local network IP address. You need to point to your public IP address, which you can find by just searching ‘what is my IP’. Note that you can’t be behind CGNAT for this, and you either need a static IP or a dynamic DNS configuration. Be aware of the risks involved in exposing your home server to the internet in this manner.
You can’t point to 192.168.X.X that’s your local network IP address. You need to point to your public IP address
That’s not true at all. That is exactly how I have my setup: a wildcard record at Porkbun pointing to the private IP of my home server, so when I am home I have zero issues accessing things.
A wildcard record at Porkbun pointing to the private IP of my home server
Which cannot be 192.168.X.X.
Read: https://en.wikipedia.org/wiki/IP_address#Private_addresses
And yet, that is exactly what I am doing and it is working.
RFC 1918 addresses are absolutely usable with DNS in this fashion.
If I were to try to access it while I wasn’t home, it absolutely wouldn’t work, but that is not what I do.
You are technically correct. I assumed that it was for external access, because why would you pay Porkbun for something internal?
You can just self-host a DNS server with that entry, like https://technitium.com/dns/ (near the bottom of the feature list); it has a web UI that lets you manage DNS records.
That’s true, but then I would have to deal with PKI, cert chains, and DNS. Whereas now, all I need to do is have Traefik grab a wildcard Let’s Encrypt cert and everything is peachy.
No, you’d just need to deal with running DNS locally, you can still use LE for internal certs.
But you still need to pass one of their challenges. Public DNS works for that. You don’t need to have any records in public DNS, though.
That doesn’t make any sense
I think I can see where they’re going with it, but it is a bit hard to write out.
Say I set up my favorite service in-house, and said service has a client app. If I create my own DNS at home and point the client to the entry, and the service is running an encrypted connection with a self-signed cert, it can give the client app fits for being untrusted.
Compare that to putting NPM in front of the app and using it to get a Let’s Encrypt cert via the DNS record option (no need for LE to reach the service publicly), and now you have a trusted cert signed by a public CA for the client app to connect to.
I actually do the same for a couple of internal things where I want the local traffic secured, because I don’t want creds to be sniffable on the wire, but they’re not public-facing. I already have a domain for other public things, so it doesn’t cost anything extra to do it this way.
You sure can. You can see someone doing just that here successfully:
Okay, sure, for a specific use case you can point a record to a private IP; however, this explicitly doesn’t expose your homelab to the web. I misunderstood OP’s intention.
This is a really good idea that I see dismissed a lot here. People should not access things over their LAN via HTTP (especially if you connect to and use these services via WG/Tailscale). If you’re self-hosting a vital service that requires authentication, your details are transmitted in plaintext. Imagine the scenario where you lose your Tailscale connection on someone else’s WiFi and your clients try to make a connection over HTTP. This is terrible opsec.
Setting up Let’s Encrypt via DNS is super simple.
Setting an A record to your internal IP address is really easy; it can be done via /etc/hosts, on your router (if it supports it, most do), in your tailnet DNS records, or on a self-hosted DNS resolver like Pi-hole.
After this, you’d simply access everything via HTTPS after reverse-proxying your services. Works well locally, and via Tailscale.
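As a minimal sketch of the /etc/hosts option (using the hostnames and address from the OP’s example, so purely illustrative):

    # /etc/hosts on a client machine: map the public names to the LAN address
    192.168.10.20   mydomain.tld npm.mydomain.tld

Note that /etc/hosts has to list each hostname explicitly; for wildcard-style behaviour you’d want the router or Pi-hole option instead.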
People sleep on the DNS-01 challenge option for TLS. You don’t need an internet-accessible site to generate a Let’s Encrypt/ZeroSSL certificate if you can use DNS-01 challenges instead. And a lot of common DNS providers (often also your domain registrar by default) are supported by the common tools for doing this.
Whether you’re doing purely LAN connections or a mix of both LAN and internet, it’s better to have TLS setup consistently.
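For example, with certbot and a DNS provider plugin (Cloudflare here purely as an assumed example; other providers have equivalent plugins, and NPM exposes the same DNS-01 option in its SSL dialog):

    # request a wildcard cert via DNS-01; no inbound connectivity required
    sudo apt install certbot python3-certbot-dns-cloudflare
    sudo certbot certonly --dns-cloudflare \
      --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
      -d 'mydomain.tld' -d '*.mydomain.tld'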
💯 Generally I see the dismissal from people who use their services purely through LAN. But I think it’s good practice to just set up HTTPS/SSL/TLS for everything. You never know when your needs might change to where you need to access things via VPN/WG/Tailnet, and the moment you do, without killswitches everywhere, your OPSEC has diminished dramatically.
I usually combine this with client certificate authentication as well, for anything that isn’t supposed to be world-accessible, just internet-accessible for me. Even if the site has its own login.
Also good to do. I think using HTTPS, even over LAN, is just table stakes at this point. And people dismissing that are doing more harm than good.
If you lose connection, I would imagine that the connection to these servers would not be established and therefore no authentication information would be sent, no?
Generally the tokens and credentials are sent along with the request, which is plaintext if you don’t use HTTPS. If you lose connection, you’re sending the details along regardless of whether it connects (and if you’re on someone else’s network, they can track and log).
(It’s also plaintext if the auth method isn’t secure, e.g. using a GET request or sending auth through unencrypted HTTP headers.)
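To illustrate (hypothetical hostname and credentials): Basic auth over plain HTTP is just base64, not encryption, and it shows up verbatim on the wire:

    # the verbose output shows the request header sent in cleartext
    curl -v -u admin:secret http://service.home.lan/
    #   > Authorization: Basic YWRtaW46c2VjcmV0
    # anyone capturing the traffic can decode it:
    echo YWRtaW46c2VjcmV0 | base64 -d    # -> admin:secret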
Can you point me in the right direction?
So far I’ve been installing my Caddy certs manually (because, as you mention, the idea that anyone on my network or tailnet can see all traffic unencrypted is bonkers), which works in the browser, but then when I go to use curl
or 90% of command-line tools, they don’t verify the certificate correctly. I’ve had this problem on macOS and Linux. I don’t even know the right words to search for to learn more about this right now.
Edit: found this: https://tailscale.com/kb/1190/caddy-certificates
I’m not sure how Caddy works, but if curl says it’s insecure, to me it sounds like the certs are not installed correctly.
I just set up Caddy to work correctly (had to add the Tailscale socket to the container).
The certs were fine before, just a janky installation on the clients.
Try a different browser, or the curl command in another comment (but while on the LAN). Your understanding so far is correct, though unusual; typically it’s not recommended to put LAN records in WAN DNS.
But if you’ve ever run HTTPS there before, Firefox might remember that and try to use it automatically. I think there’s a setting in Firefox for that. You might also try the function to forget site information, both for the name and the IP. I assume you haven’t turned on any HTTP-to-HTTPS redirect in nginx.
Also verify that nginx is set up with a site for that name, or has a default site. If it doesn’t, then it doesn’t know what to do with the request and will fail.
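As a rough sketch of that idea (hand-written nginx, purely illustrative, since NPM generates its own config when you add a proxy host; names and ports are taken from the OP’s example):

    # a server block that answers for the expected name...
    server {
        listen 80;
        server_name npm.mydomain.tld;
        location / {
            proxy_pass http://192.168.10.20:81;   # NPM admin UI
            proxy_set_header Host $host;
        }
    }
    # ...and a default catch-all so unknown names fail predictably
    server {
        listen 80 default_server;
        return 444;
    }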
This was a good suggestion; indeed, other browsers seem to work just fine, and I updated my post with a new edit. I’m making progress, and it seems I’m having some specific issue with Firefox, my default browser. Your last point was also spot-on, though I only understood what you meant once I figured out the port-80 resolution-loop trap.
Yeah, either check for that setting I mentioned or clear the site data.
Your issue is using a non-routable IP on a public DNS provider; some home routers will assume it’s a misconfiguration and drop it.
If you’re only going to use the domain over a VPN and the local network, I would use something like Pi-hole to do the DNS.
If you want access from the internet at large, you will need your public IP at your DNS provider.
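A sketch of that local-DNS approach, assuming Pi-hole or any dnsmasq-based resolver and the names from the OP’s example:

    # e.g. /etc/dnsmasq.d/02-local.conf (Pi-hole also has a Local DNS Records page in its UI)
    # answer mydomain.tld and every subdomain with the LAN address
    address=/mydomain.tld/192.168.10.20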
Why do you need a domain on an internet-facing DNS server if you can just define it on your local DNS? Unless you want to access your services via the internet, in which case you would need a public IP.
To have HTTPS without additional setup on all the devices I use to access my services, and without having to set up my own DNS server.
The IP address you’ve used as an example would not work. That is a ‘local’ address, i.e. a home address. If you want DNS to resolve your public domain name to your home server, you need to set the A record to your ‘public’ IP address, i.e. the external address of your modem/router. Find this by going to whatismyip.com or something similar.
That will connect your domain name with your router. You then set up port forwarding on the router to pass requests to the server.
You don’t need a domain name for HTTPS.
192.168.x.x is always an IP address that is not exposed to the internet.
If you’re trying to make your server accessible on the internet, you need to open a port (it doesn’t have to be 80 or 443) and have a reverse proxy direct connections to the services running on it.
Here’s a post that explains the basics of how to set this up: https://lemmy.cif.su/post/3360504
Combine that with this and you should be good to go: https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-on-debian-10
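For reference, the self-signed route in that second link boils down to something like this (file paths are just examples):

    # generate a self-signed key/cert pair valid for one year
    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
      -keyout /etc/ssl/private/selfsigned.key \
      -out /etc/ssl/certs/selfsigned.crt
    # then point nginx's ssl_certificate / ssl_certificate_key at those two files

The trade-off, as discussed elsewhere in the thread, is that browsers and CLI tools won’t trust a self-signed cert without extra client-side setup.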
I’m not trying to expose it to the internet, and there are indeed multiple solutions to get HTTPS. This one, which works with a real domain name, is what works best for me :)
It’s very likely that DNS servers aren’t going to propagate that A record because it points to an internal IP. What DNS settings are you using for Tailscale? You could also check that the address resolves locally with the command
host mydomain.tld
which should return
mydomain.tld has address 192.168.10.20
if things are set up correctly.
Edit: you can also do a reverse lookup with
host 192.168.10.20
which should spit out
20.10.168.192.in-addr.arpa domain name pointer mydomain.tld.
Do a
curl http://mydomain.tld/ -i
with your server off / while off-network.
Your registrar probably has a service that rewrites HTTP accesses to HTTPS automatically. curl -i shows the headers, which will probably confirm that you’re being redirected without even connecting to anything on your network.
I tried, it just gave me the following:
curl: (6) Could not resolve host: mydomain.tld
Which is surprising. I got something similar when I tried traceroute earlier.
Yet when I look at my registrar’s records, all seems fine, and it also seems to be confirmed by the nslookup I mentioned in the OP. So I’m a bit confused.
dig mydomain.tld
to see why your machine can’t find the DNS record.
FWIW, most home networks use a DNS server on the router by default. Your devices should be able to resolve an address with a DNS record set statically there instead of on the WAN.
I’m getting the following:
; <<>> DiG 9.18.39 <<>> mydomain.tld
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16004
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494

;; QUESTION SECTION:
;mydomain.tld.    IN    A

;; Query time: 3 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Sun Oct 05 14:23:20 CEST 2025
;; MSG SIZE  rcvd: 44
I guess your proposal would be the last resort, but I have not seen any mention of this approach being necessary for others achieving what I’m trying to do.
It’s not resolving; play around with dig a bit to troubleshoot: https://phoenixnap.com/kb/linux-dig-command-examples
I’d start with “dig @your.providers.dns.server your.domain.name” to query the provider’s servers directly and see if the provider actually responds for your entry.
If so, then it may be that you haven’t properly configured the provider to be authoritative for your domain. Query @8.8.8.8 or one of the root servers. If they don’t resolve it, then they don’t know where to send your query.
If they do, the problem is probably closer to home: either your local network or your internet provider.
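Concretely, those two checks would look something like this (the provider nameserver below is a placeholder; use whatever NS hosts your registrar lists):

    # ask your DNS provider's own nameserver directly
    dig @ns1.your-provider.example mydomain.tld A
    # then ask a public resolver to confirm resolution works end to end
    dig @8.8.8.8 mydomain.tld A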
If I use my registrar’s DNS server, or Cloudflare or Google, it works just fine in dig; here with Google:
; <<>> DiG 9.18.39 <<>> @8.8.8.8 mydomain.tld
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1301
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512

;; QUESTION SECTION:
;mydomain.tld.    IN    A

;; ANSWER SECTION:
mydomain.tld.    3600    IN    A    192.168.10.20

;; Query time: 34 msec
;; SERVER: 8.8.8.8#53(8.8.8.8) (UDP)
;; WHEN: Sun Oct 05 15:51:47 CEST 2025
;; MSG SIZE  rcvd: 60
Something that can make troubleshooting DNS issues a real pain is that there can be a lot of caching at multiple levels. Each DNS server can cache, the OS will cache (nscd), the browsers cache, etc. Flushing all those caches can be a real nightmare. I had issues recently with nscd causing problems kind of like what you’re seeing. You may or may not have it installed, but purging it if you do may help.
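A few cache-flushing commands that may apply, depending on what the client is actually running (the 127.0.0.53 server in the earlier dig output suggests systemd-resolved; the rest are assumptions about the setup):

    # systemd-resolved (the 127.0.0.53 stub resolver)
    sudo resolvectl flush-caches
    # nscd, if it is installed at all
    sudo systemctl restart nscd
    # Firefox keeps its own DNS cache: about:networking#dns has a "Clear DNS Cache" button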
The easy answer is to enable NAT loopback (also sometimes called NAT hairpinning) on your edge router.
This was not required in my case, but maybe it solves other issues?
It solves all your issues. No weird, non-standard DNS records. Just turn it on, and everything, both on your local network and externally (if you want it to), works via the domain name.
Have you considered using a mesh VPN instead of opening a port to the public? Nebula and Tailscale are both great options that have a free tier which is more than enough for most home use cases. With Nebula you can even self-host your discovery node so nothing is cloud-based, but then you’re back to opening firewall ports again.
Anyway, it’s going to be more secure than even a properly configured reverse proxy setup and way less hassle.
One thing you probably forgot to check is whether your TLD registrar supports DynDNS and whether you have it set up on both sides of the route.
Would you mind explaining further what you mean by “setting it up on both sides of the route”? Much appreciated!
I do this exact thing on my network, so I know it works, but why are you trying to downgrade HTTPS to HTTP? If you’ve set up DNS-01 properly, it should just work with HTTPS.
How did you configure DNS-01?
Yes, it was an attempt at doing one step at a time, but I realize I’ve been able to make it work in some browsers and with some DNS settings using HTTPS, as hoped. I’m now mostly trying to solve specific DNS issues, trying to understand why there are some cases where it’s not working (i.e. in Firefox regardless of DNS setting, or when calling dig, curl or host).