
Yet Another Homelab Setup


I have a new homelab setup. I've been homelab-hopping for the past year or so, and with each new setup, something or other just never sat right with me. It always felt like there was something that could be made better or more efficient with the resources I had. I recently discovered the wonderful world of LXD, and I was so excited by how robust the technology is that I endeavored to rebuild my homelab setup with it.


My homelab consists of two physical machines: a System76 Thelio Major, and an ASUS mini PC.


Specs


Thelio Major:

OS: Ubuntu Server 22.04 LTS

CPU: AMD Ryzen 7 7700X, 8 cores, 16 threads @ 4.5 GHz

GPU: AMD ATI Radeon RX 6600 XT/6600M

RAM: 32 GB

Internal HD: 1 TB NVMe

External HD: two 5 TB SSDs, formatted as a ZFS mirror pool


ASUS Mini PC:

OS: TrueNAS CORE

CPU: AMD Ryzen 7 5700U, 8 cores, 16 threads @ 1.8 GHz

GPU: AMD Radeon

RAM: 16 GB

Internal HD: 500 GB NVMe

External HD: 5 TB SSD, used as the main storage pool


Main "control center"


> I may rename this machine and set its hostname to "nexus.local", because it seems fitting given its purpose, and I happen to like the word "nexus". :-)


The bulk of my homelab activity resides on the Thelio Major. The web services my homelab runs are separated into LXD containers; I'm using LXD as a more resource-friendly alternative to virtual machines. Two 5 TB external SSDs make up a ZFS mirror pool, and a dataset on that mirror serves as an LXD storage pool via LXD's ZFS storage driver. My LXD setup consists of the following containers:


debian-archive

debian-serv

fedora-transmission

ubuntu-mastodon


I have a Linode VPS running HAproxy on Rocky Linux 9. My domain, hyperreal.coffee, and subdomains mastodon.hyperreal.coffee, irc.hyperreal.coffee, and rss.hyperreal.coffee all point to this VPS, and HAproxy takes care of routing traffic to their respective backends. Tailscale is installed on the VPS as well as in my debian-serv and ubuntu-mastodon LXD containers, and the containers' Tailnet IP addresses are used as the backends that HAproxy routes requests to. It's roughly this:


hyperreal.coffee -> debian-serv

irc.hyperreal.coffee -> debian-serv

rss.hyperreal.coffee -> debian-serv

mastodon.hyperreal.coffee -> ubuntu-mastodon
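
To give a feel for what this routing looks like, here is a minimal sketch of an haproxy.cfg for it. This is an assumption, not my actual config: it routes by SNI in TCP mode (so the containers keep terminating their own TLS), and the 100.x.y.z Tailnet addresses are made-up placeholders.

```
# /etc/haproxy/haproxy.cfg (sketch; Tailnet IPs are placeholders)
frontend tls-in
    mode tcp
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend mastodon if { req_ssl_sni -i mastodon.hyperreal.coffee }
    default_backend debian-serv

backend debian-serv
    mode tcp
    server debian-serv 100.100.0.10:443

backend mastodon
    mode tcp
    server ubuntu-mastodon 100.100.0.11:443
```

With SNI passthrough like this, certificates live on the containers rather than the VPS; terminating TLS on the VPS instead would work too, but would need the certs on the HAproxy side.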



Creating the containers


To create the LXD containers, I run the lxc init command and supply it with an image, the name of the container, and the storage pool I want the container to use:


lxc init images:debian/12/cloud debian-archive --storage lxd-pool

I need to use images suffixed with /cloud in order to use cloud-init to initialize the containers. With the container created, I then supply it with a cloud-init configuration as shown below:


lxc config set debian-archive cloud-init.user-data - <<- EOF
#cloud-config
users:
  - name: debian
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIOmibToJQ8JZpSFLH3482oxvpD56QAfu4ndoofbew5t jas@si.local
    sudo: 'ALL=(ALL) NOPASSWD: ALL'
    shell: /bin/bash
    lock_passwd: true
apt:
  sources_list: |
    deb http://deb.debian.org/debian $RELEASE main
    deb http://deb.debian.org/debian $RELEASE-updates main
    deb http://deb.debian.org/debian-security/ $RELEASE-security main
    deb http://deb.debian.org/debian $RELEASE-backports main
package_update: true
package_upgrade: true
packages:
  - curl
  - debian-keyring
  - debsig-verify
  - git
  - nodejs
  - npm
  - notmuch
  - offlineimap3
  - pass
  - python3-dev
  - python3-pip
  - ripgrep
  - ssh
  - wget
  - xauth
  - youtube-dl
rsyslog:
  configs:
    - content: "*.* @10.0.0.41:514"
      filename: 99-forward.conf
  remotes:
    moonshadow: 10.0.0.41
timezone: America/Chicago
EOF

After setting the cloud-init configuration, I then start the container and monitor the progress of cloud-init:


lxc start debian-archive
lxc exec debian-archive -- cloud-init status --wait

When this finishes, the container is ready to go. I have Ansible roles for setting up my homelab services. These roles can be viewed at the Codeberg repository below.


https://codeberg.org/hyperreal/ansible-homelab


Snapshots


Each of my LXD containers except for fedora-transmission is on a daily snapshot schedule. This is configured with the lxc command as shown below:


lxc config set debian-archive snapshots.schedule "0 23 * * *"

I can also set the snapshot naming pattern:


lxc config set debian-archive snapshots.pattern "{{ creation_date|date:'2006-01-02_15-04-05' }}"
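
The layout string here follows Go's reference-time format: 2006-01-02 15:04:05 is a template, not a literal date. The resulting snapshot names are plain timestamps, equivalent to the output of this date invocation:

```shell
# Go's reference time (2006-01-02 15:04:05) maps onto strftime fields:
# 2006 -> %Y, 01 -> %m, 02 -> %d, 15 -> %H, 04 -> %M, 05 -> %S
date +%Y-%m-%d_%H-%M-%S
```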

A cool thing about these snapshots is how easy they are to work with. If the container instance fails for whatever reason, I can roll back to a previous working state using the lxc command:


lxc restore debian-archive 2023-06-08_22-59-17

The snapshots can also include running state information like process memory state and TCP connections by passing the --stateful flag when creating the snapshot:


lxc snapshot debian-archive snapshot0 --stateful

Because I'm not lacking for storage space, I set the snapshot expiry to 1 week, which keeps a week's worth of snapshots on disk:


lxc config set debian-archive snapshots.expiry "1w"

Because LXD uses the ZFS storage driver, these snapshots are ZFS snapshots under the hood. I have a task on my TrueNAS server that replicates them daily into an offsite dataset.


debian-archive


A Debian container that runs my ArchiveBox instance and stores my mail offline. I have the Proton Mail bridge running in a fake tty to keep the connection open locally, and offlineimap runs daily to download my mail from my Proton Mail account. The mail is then indexed by notmuch. This container is not accessible from the public Internet; I access it only from my workstation machine.
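
The daily pull could be driven by something as simple as a crontab. This is a sketch, not my actual setup: it assumes offlineimap and notmuch are already configured for the Proton account, and the times are made up.

```
# crontab fragment (sketch): sync mail nightly, then index it
# 02:30 -- pull mail through the local Proton Mail bridge, one-shot, quiet UI
30 2 * * * /usr/bin/offlineimap -o -u quiet
# 03:00 -- index whatever offlineimap downloaded
0 3 * * * /usr/bin/notmuch new
```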


debian-serv


A Debian container that runs the Caddy web server and the Molly Brown Gemini server; these serve my HTTP website, Gemini capsule, The Lounge IRC instance, and FreshRSS instance. These are accessible from the public Internet, but FreshRSS and The Lounge are only used by me.


hyperreal.coffee -> web and Gemini

irc.hyperreal.coffee -> The Lounge IRC

rss.hyperreal.coffee -> FreshRSS


Because HAproxy doesn't deal with the Gemini protocol, I have a firewalld rule on the VPS that forwards port 1965 to port 1965 in the debian-serv LXD container via the Tailnet. Currently, I have ~/public on my workstation as a sort of mirror for ~/public on debian-serv. When I edit my website or Gemini capsule, I edit the files in ~/public on my workstation and just rsync the directory to debian-serv. I have port 4444 on the LXD host mapped to port 22 (SSH) in debian-serv, so when I rsync the files I have to pass -e 'ssh -p 4444' as an rsync argument. I'm looking for a way to keep those directories constantly in sync, and lsyncd seems to be the way to go. NixOS doesn't install a systemd service for lsyncd, so I'd have to write my own.
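
For the lsyncd route, a minimal config might look like the sketch below. This is an assumption, not a working setup: the source path and host alias are placeholders, and it just encodes the same rsync-over-port-4444 transfer as a continuous watch.

```
-- /etc/lsyncd/lsyncd.conf.lua (sketch; paths and host are placeholders)
settings {
    logfile    = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd-status.log",
}

-- The continuous version of:
--   rsync -avz -e 'ssh -p 4444' ~/public/ debian-serv:public/
sync {
    default.rsyncssh,
    source    = "/home/user/public",
    host      = "debian-serv",
    targetdir = "public",
    ssh       = { port = 4444 },
}
```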


fedora-transmission


A Fedora container that runs transmission-daemon. I chose Fedora because, unlike Debian and Ubuntu, Fedora has Transmission version 4. Not that my use-case specifically relies on version 4, but I just prefer to use the latest stable versions of things wherever feasible. I learned that Alpine Linux has Transmission version 4 in its repositories, so I'll eventually move transmission-daemon to an Alpine LXD container. This LXD container is not accessible from the public Internet; I have port 9091 forwarded from the LXD host to port 9091 in the container, so I access the Transmission web interface from my local subnet. I also use the Transmission RPC API client for Go to manage torrents programmatically.


https://github.com/hekmon/transmissionrpc


> Side note: I've updated hekmon/transmissionrpc to support Transmission version 4, which uses RPC v17. My pull request can be seen by following the link below. It works on my machine for the tasks I use it for, but it still needs testers so that hekmon can merge it into main.

> => https://github.com/hekmon/transmissionrpc/pull/21


ubuntu-mastodon


An Ubuntu container that runs my Mastodon instance. I chose Ubuntu because it's easier to set up Mastodon on Ubuntu than on Fedora, but it was mostly done as a proof-of-concept when I was initially learning about LXD, so I may migrate it to Fedora eventually. I prefer Fedora's package and tooling ecosystem and the security benefits of SELinux. My Mastodon instance has to be available from the public Internet, so this LXD container forms part of my Tailnet and receives HTTP/S requests for mastodon.hyperreal.coffee from HAproxy upstream on the VPS.



TrueNAS server


My TrueNAS server is used solely as a NAS. It currently has only one 5 TB external HD that it uses as the main storage pool, but I may eventually get another one to create a ZFS mirror. It has a replication task that runs once a day and pulls LXD snapshots from the main nexus server. I also have a dataset on here that receives daily snapshots from my NixOS workstation machine via znapzend. I recently ordered a new laptop, a Lenovo ThinkPad X1 Carbon Gen 10 Intel (14") with Linux pre-installed -- though, of course, I'll install my own OS when I receive it. I intend to install NixOS on a ZFS root, which I will configure to send daily snapshots to my TrueNAS server. The ZFS-on-root setup for NixOS is based on the repository below, which is geared toward setting up multiple hosts:


https://github.com/ne9z/dotfiles-nixos


Closing


As I mentioned above, and as anyone who's been following me on here or other places on the Internet can tell, I've been super indecisive when it comes to my homelab setup. I've hopped between several different setups over the past year, never feeling quite satisfied with any. I can't say whether I will change my setup again in the future (it's possible, and given my track record, pretty likely)... but, with my current hardware, LXD containers, and TrueNAS CORE, I can honestly say that I've never been more satisfied with a homelab setup.


END

Last updated: 2023-06-11


Gemlog archive

hyperreal.coffee
