I have a few self-hosted services, but I’m slowly adding more. Currently they’re all on subdomains like linkding.sekoia.example etc. However, that adds DNS records to look up and means more setup. Is there some reason I shouldn’t put all my services under a single subdomain with paths (using a reverse proxy), like selfhosted.sekoia.example/linkding?
The only problem with using paths is that the service might not support it (i.e. it might generate absolute URLs without the path in them, rather than using relative URLs).
Subdomains are probably the cleanest way to go.
Agreed, I’ve run into lots of problems trying to get reverse proxies set up on paths, which disappear if you use a subdomain. For that reason I stick with subdomains and a wildcard DNS entry.
Try not to use paths: you’ll get weird cross-interactions when two pieces of software set the same cookie (session cookies, for example), which will make you reauthenticate on every path.
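To illustrate the collision: when two apps share a host and both set a cookie scoped to `Path=/`, the second silently clobbers the first. A made-up exchange (app names and values are placeholders):

```
HTTP/1.1 200 OK
Set-Cookie: session=abc123; Path=/; HttpOnly    <- set while using /linkding

HTTP/1.1 200 OK
Set-Cookie: session=xyz789; Path=/; HttpOnly    <- set while using /miniflux,
                                                   replaces the first, logging
                                                   you out of /linkding
```

With subdomains each app gets its own cookie jar, so this can’t happen.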
Subdomains are the way to go, especially with wildcard DNS entries and DNS-01 letsencrypt challenges.
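As a sketch of what that combination looks like in a Caddyfile (this assumes a Caddy build that includes your DNS provider’s plugin, e.g. caddy-dns/cloudflare; the domain, matcher names, and ports are placeholders):

```
*.sekoia.example {
	tls {
		# DNS-01 challenge, so the wildcard cert works
		# even for hosts that aren't reachable publicly
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}

	@linkding host linkding.sekoia.example
	handle @linkding {
		reverse_proxy 127.0.0.1:9090
	}

	# Drop requests for subdomains you haven't defined
	handle {
		abort
	}
}
```

Adding a new service is then just another matcher/handle pair; no new certs or DNS records.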
Subdomains; overall it’s cheaper after a certain point to get a wildcard cert, and if you split your services up without a reverse proxy it’s easier to direct different names to different servers.
With paths you can use httpS://192etc/example, but if you use subdomains, how do you connect internally with https? Https://example.192etc won’t work as you can’t mix an ip address with domain resolution.
You can do this. The reality is it depends on the app.
But ultimately I used both and pass them through a nginx proxy. The proxy listens for the SNI and passes traffic based on that.
For example homeassistant doesn’t do well with paths. So it goes to ha.contoso.com.
Miniflux does handle paths. So it uses contoso.com/rss.
Plex needs a shitload of headers and paths so I use the default of contoso.com to pass to it along with /web.
My photo albums use both. And some things are even on a separate gTLD.
But they all run through the same nginx box at the border.
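A rough sketch of that SNI-based routing with nginx’s stream module (requires ngx_stream_ssl_preread_module; hostnames and backend addresses are placeholders):

```nginx
stream {
    # Inspect the ClientHello's SNI without terminating TLS here,
    # and pick a backend based on the requested hostname
    map $ssl_preread_server_name $backend {
        ha.contoso.com   10.0.0.2:443;   # Home Assistant box
        default          10.0.0.1:443;   # everything else (contoso.com paths)
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```

Because TLS isn’t terminated at the border box, each backend keeps its own certificate; alternatively you can terminate at the proxy and use ordinary `http {}` virtual hosts.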
Some apps have hardcoded assumptions about their paths, making that kind of setup harder to achieve (you’ll have to patch the apps or do on-the-fly rewrites).
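For an app that assumes it lives at the root, the usual (and fragile) workaround looks something like this in nginx — the path, port, and header are placeholders, and `sub_filter` rewriting is very much a last resort:

```nginx
location /rss/ {
    # Trailing slash on proxy_pass strips the /rss/ prefix before forwarding
    proxy_pass http://127.0.0.1:8080/;
    proxy_set_header Host $host;
    # Some apps can reconstruct their external URLs from this header
    proxy_set_header X-Forwarded-Prefix /rss;
    # On-the-fly rewrite of absolute URLs the app generates in HTML
    sub_filter 'href="/' 'href="/rss/';
    sub_filter_once off;
}
```

If the app offers a base-URL setting, use that instead of rewriting responses.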
Then there’s also a potential cookie sharing/collision issue. If apps don’t scope their cookies to specific paths, they may both use a same-named cookie, which can cause weird behavior.
And if one of the apps is compromised (e.g. has an XSS issue) it’s a bit less secure with paths than with subdomains.
But don’t let me completely dissuade you - paths are a totally valid approach, especially if you group multiple closely related things (e.g. Grafana and Prometheus) under the same domain name.
However, if you feel that setting up a new domain name is a lot of effort, I would recommend investing some time in automating it.
If you don’t have any restrictions (limited subdomains, service only works on the server root etc.) then it’s really just a personal preference. I usually try paths first, and switch to subdomains if that doesn’t work.
I’ve kinda been trimming the number of services I expose through subdomains; it grew so wild because it was pretty easy. I’d just point a wildcard subdomain at my IP and the Caddy reverse proxy created the subdomains.
Just have a wildcard A record that points *. to your ip address.
Even works with nested domains like “home.” and then “*.home”
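In zone-file terms that’s just a couple of records (the IP and TTL are placeholders):

```
*.sekoia.example.       300  IN  A  203.0.113.10
home.sekoia.example.    300  IN  A  203.0.113.10
*.home.sekoia.example.  300  IN  A  203.0.113.10
```

Note the nested case needs its own wildcard: `*.sekoia.example` does not match `linkding.home.sekoia.example`, since a DNS wildcard only covers one label.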
I started with paths because I didn’t want to pay for a expensive SSL certificate for each service I’m running (now with letsencrypt no problem anymore). But that turned out to be a terrible idea. Once I wanted to host a service on a different server the problems started. With subdomain you just point your DNS to the correct IP address and that’s it. With paths you have to proxy everything through your one vhost and it get’s really messy. And to be honest most services expect you to run them on the root directory and not a path.
Everyone is saying subdomains, so I’ll try to give a reason for paths. Using subdomains makes local access a bit harder. With paths you can use httpS://192etc/example, but if you use subdomains, how do you connect internally with https? Https://example.192etc won’t work as you can’t mix an ip address with domain resolution. You’ll have to use http://192etc:port. So no httpS for internal access. I got around this by hosting AdGuard as a local DNS and adding an override so that my domain resolved to the local IP. But this won’t work if you’re connected to a VPN, as it’ll capture your DNS requests; if you use paths you could exclude the IP from the VPN.
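The override itself is tiny — in dnsmasq syntax it’s one line (AdGuard Home has an equivalent “DNS rewrites” setting; the domain and LAN IP here are placeholders):

```
# Answer every *.sekoia.example query with the LAN address
# instead of the public one
address=/sekoia.example/192.168.1.10
```

With that in place you keep the same https:// URLs inside and outside the network.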
Edit: not sure what you mean by “more setup”, you should be using a reverse proxy either way.
If your router has NAT reflection, then the problem you describe is non existent. I use the same domain/protocol both inside and outside my network.
I’m using Cloudflare Tunnels (because I don’t have a static IP and I’m behind a NAT, so I would need to port forward and stuff, which is annoying). For me specifically, that means I have to do a bit of admin on the Cloudflare dashboard for every subdomain, whereas with paths I can just configure the reverse proxy.
because I don’t have a static IP and I’m behind a NAT, so I would need to port forward and stuff, which is annoying
This week I discovered that Porkbun DNS has a nice little API that makes it easy to update your DNS programmatically. I set up Quentin’s DDNS Updater https://github.com/qdm12/ddns-updater
Setup is a little fiddly, as you have to write some JSON by hand, but once you’ve done that, it’s done and done. (Potential upside: You could use another tool to manage or integrate by just emitting a JSON file.) This effectively gets me dynamic DNS updates.
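For reference, ddns-updater reads a config.json along these lines. The Porkbun field names here are from memory, so double-check the provider section of the project README; the domain and keys are obviously fake:

```json
{
  "settings": [
    {
      "provider": "porkbun",
      "domain": "sekoia.example",
      "host": "@",
      "api_key": "pk1_xxxxxxxx",
      "secret_api_key": "sk1_xxxxxxxx",
      "ip_version": "ipv4"
    }
  ]
}
```

Run the container with that file mounted and it polls your public IP and pushes changes through the Porkbun API.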