
VPS Server Setup (2023 edition)


Start with the 2020 edition:


2020-12-12 Server Setup 2020 edition


The main thing to add in the 2023 edition is VPN-only services!


I now have some VPN-only services. One of these is gitea, so I'll use it as an example. VPN-only services are not routable from the public internet, only from machines that are part of my VPN. This is the first line of defence for these services. The second line of defence is the service setup itself, which of course has the normal web security systems in place (e.g., gitea has a login system, I've got WebAuthn configured for my account, SSH access for git push/pull uses a strong key, etc.).


But ok... so, internal services are still _web_ services, so they run over HTTP. Actually they run over HTTPS, because many features of modern web browsers are only available to "secure" origins (which mostly means origins served over HTTPS), and in general modern browsers make plain-HTTP websites a pain now.


So I need HTTPS, and that means I need a valid X.509 certificate for these internal services.


To get a valid X.509 cert I could run my own certificate authority and load the CA cert onto all my devices manually... but this is a pain and I don't want to do it (and it scares me: what if my CA root key gets exposed somehow?). Alternatively, I can get "real" valid X.509 certificates using Let's Encrypt.


However, Let's Encrypt requires automated validation that you own the domain. This is a problem for internal domains because... I don't want to expose those domains _at all_. With LE they would be exposed in two ways: firstly, I'd have to expose the domains and a server behind them in order to serve the LE challenge and get the cert; secondly, when LE signs a certificate, the issuance is recorded in the immutable, public, web-wide certificate transparency logs. That's all good for security in general, but bad for names that are meant to stay internal.


But there's another way, because LE supports another type of challenge: DNS-01, which involves putting a challenge response into a DNS TXT record. The advantage of the DNS-01 challenge is that Let's Encrypt will let you get a certificate for a _wildcard domain_ if you prove (using DNS-01) that you control the parent domain. So, if you prove that you control foo.example.com, then LE will give you a signature for a certificate for *.foo.example.com.
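As a concrete sketch of what goes into that TXT record: per RFC 8555, the value served at _acme-challenge.<domain> is the base64url-encoded SHA-256 digest of the "key authorization" (the challenge token joined to the ACME account key thumbprint with a dot). The token and thumbprint below are placeholders; in practice the ACME client computes and publishes this automatically.

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    # RFC 8555 section 8.4: the TXT record value for _acme-challenge.<domain>
    # is base64url(SHA-256("<token>.<thumbprint>")) with the padding stripped.
    key_authorization = f"{token}.{account_key_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Placeholder inputs; a real client receives the token from the ACME server.
txt = dns01_txt_value("example-token", "example-thumbprint")
```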


So: I use LE to get a real valid signed wildcard certificate, and I put all my internal domains underneath this single certificate. This way the actual internal domains are not exposed at all (the names I use don't show up in the certificate transparency logs because there's no specific cert for them, there is only a wildcard cert).


There are several pieces to make this all work:


I run unbound on my little VPS. It only serves the VPN; it's not an open DNS resolver. In unbound I configure DNS for any internal services, so that I can serve VPN-routable IP addresses for those names.
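For example, the unbound side looks roughly like this (a sketch with made-up names and addresses; "home.example.com" stands in for the internal domain):

```
server:
    # Listen on the VPN address only; refuse everyone else.
    interface: 10.0.0.1
    access-control: 10.0.0.0/24 allow
    access-control: 0.0.0.0/0 refuse

    # Serve VPN-routable addresses for internal names.
    local-zone: "home.example.com." static
    local-data: "git.home.example.com. A 10.0.0.1"
```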

On my devices I configure the VPN so that, when it's active, names within the internal domain are resolved via this unbound server. I don't put _all_ DNS resolution through unbound, only queries within the internal domain. (On Android I do put all DNS through unbound when connected to the VPN, because there's no option to apply the resolver only for a particular search domain.) Anyway, with this I don't need to mess with /etc/hosts on each device to route to the names.
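On a Linux client with systemd-resolved, the split-DNS part can be expressed roughly like this (hypothetical interface, address, and domain; the "~" prefix makes it a routing-only domain, so only queries under it go to unbound):

```
resolvectl dns wg0 10.0.0.1
resolvectl domain wg0 '~home.example.com'
```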

I have a DigitalOcean project which serves DNS. It is _only_ used for serving the DNS-01 challenge responses. An API key allows machine access.

My main DNS is separate. NS and CNAME records are used to glue together the two DNS authoritative server segments.
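Schematically, the glue looks something like this (hypothetical names; "acme.example.com" is the subzone delegated to DigitalOcean, and Let's Encrypt follows the CNAME when validating):

```
; In the main zone: delegate a subzone to DigitalOcean's nameservers...
acme.example.com.                  NS     ns1.digitalocean.com.
acme.example.com.                  NS     ns2.digitalocean.com.

; ...and point the challenge name for the internal parent domain at it.
_acme-challenge.home.example.com.  CNAME  _acme-challenge.home.acme.example.com.
```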

I set up caddy on the little VPS with a custom build that includes a DNS-01 challenge implementation using the DigitalOcean DNS API.
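Such a build can be produced with xcaddy and the caddy-dns DigitalOcean module:

```
xcaddy build --with github.com/caddy-dns/digitalocean
```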

The caddy server is used to (a) deal with Let's Encrypt (which Caddy does very well and which I'm already using for the public web frontend), and (b) (reverse-)proxy sites through to the individual backends.


Note that I can also trivially reverse-proxy back across the VPN to a dev machine if I want to try out/demo something with "real HTTPS". This is useful if I want to try something on a phone.
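In the Caddyfile that's just another host matcher pointing at a VPN address (hypothetical name and address):

```
@demo host demo.home.example.com
handle @demo {
    reverse_proxy 10.0.0.50:8080
}
```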


A few details:


The caddy setup has a single 'service' for the internal domain; this is because caddy does cert management per service. So I have a service that corresponds to the wildcard domain. Then I use matching rules on hostname to route to individual named services under their individual domains.
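A rough sketch of that Caddyfile shape (hypothetical names; the env var name for the DigitalOcean token is also just an example), with one site block for the wildcard so caddy manages a single wildcard certificate via DNS-01:

```
*.home.example.com {
    tls {
        dns digitalocean {env.DO_API_TOKEN}
    }

    @gitea host git.home.example.com
    handle @gitea {
        reverse_proxy localhost:3000
    }

    # Unknown names under the wildcard get nothing.
    handle {
        abort
    }
}
```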

Unbound can present different views to clients depending on client IP, so unbound is also configured to only give out the internal IPs to clients that are querying _from_ the VPN.
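In unbound that's done with views, roughly like this (hypothetical names and subnet):

```
server:
    # Clients querying from the VPN subnet get the "vpn" view.
    access-control-view: 10.0.0.0/24 vpn

view:
    name: "vpn"
    local-zone: "home.example.com." static
    local-data: "git.home.example.com. A 10.0.0.1"
```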

The VPN setup itself is described in the 2020 server setup blog entry which talks about Wireguard.
