Server moved

2022-12-17

Around the time the Raspberry Pi 4 was released I took what was, for me, a pretty major plunge and got rid of all of my x86 desktop computers in favor of using the little ARM SBCs as home servers, while keeping a couple of x86 laptops for development and other day to day use. Currently there are three of them in a stack on my TV stand next to the router. The one running this capsule also runs Apache and Gitea. Up until this week it had been running quite happily on openSUSE Tumbleweed. I'm a fan of rolling release distros, even though the upkeep can be a little more work. I've used mostly Arch for the past ten years, but openSUSE had proven stable for me over quite a long run. That run came to an end when the board failed to boot after an update.


Now, generally speaking, I can recover most installations when things get borked, but there has been a trend in Linux over the past decade not only to make the boot process "prettier" (by hiding most boot messages) but also to get the most out of a single kernel configuration by providing a minimal kernel combined with an initrd that loads the modules required to bring up a given system. I'm not a fan of the initrd approach, and in this case it was making recovery more complicated. Without boot messages it was impossible to know what was going on, and there was no USB keyboard available to change kernel parameters.


At any rate, after an initial unsuccessful attempt at recovery I decided to go ahead and switch to another distro I had been curious about for quite a while: Void. I've had a VM with Void installed on my main laptop for some time, and it's been rock solid. It also gives me a chance to test against another C library, since I'm using the musl flavor of the distribution. It's been encouraging enough that I installed it onto real hardware last week as a dual boot. So I decided to take the plunge and move all of the services over.


Installing Void onto an SD card was an entirely manual process. I chose to use the minimal tarball rather than "burning" an image. I don't know why more distros don't provide this sort of option. Well, I do actually: everyone thinks you need a nice point and click installer (hint: you don't). Basically, to install from a tarball you create your partitions, mount them, and extract the tar file onto the mounted partitions. You can then set things up, if desired, by chrooting into the new system and installing packages, editing configuration files, and adding users. The rough shape of it looks like the sketch below, where the device name and tarball name are placeholders rather than the exact ones I used.
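
#!/bin/sh
# Sketch of a tarball install; /dev/sdX and the rootfs filename are placeholders
fdisk /dev/sdX                  # create a boot and a root partition interactively
mkfs.vfat /dev/sdX1             # FAT boot partition for the Pi firmware
mkfs.ext4 /dev/sdX2             # root filesystem
mount /dev/sdX2 /mnt
mkdir -p /mnt/boot
mount /dev/sdX1 /mnt/boot
# Extract with -p to preserve ownership and permissions
tar xfp void-rpi-aarch64-musl.tar.xz -C /mnt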


How do you chroot into an aarch64 system from an x86_64 laptop, you might ask? QEMU user mode emulation to the rescue. Basically, you need `qemu-user-<arch>` compiled statically, where `<arch>` is the architecture of the machine everything is eventually going to run on. In this case, the QEMU binaries are compiled to run on x86_64 and emulate aarch64. You register those binaries using binfmt_misc, copy the static binary into /usr/bin of the target filesystem, and then you can chroot into your ARM filesystem and do whatever is needed.


https://wiki.archlinux.org/title/QEMU#Chrooting_into_arm/arm64_environment_from_x86_64
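
A minimal sketch of the dance, assuming the emulator binary is named qemu-aarch64-static and the target filesystem is mounted at /mnt (the binfmt_misc registration string is long and best taken from the wiki page above):

# Register aarch64 binaries with binfmt_misc first; distros typically ship
# a package or service for this step
# Copy the static emulator into the target filesystem:
cp /usr/bin/qemu-aarch64-static /mnt/usr/bin/
# From here on, aarch64 binaries inside the chroot run through the emulator
chroot /mnt /bin/sh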


In my case, Void's repositories had packages for most of the software I need. I installed Apache, Gitea, Rust, Cargo, GCC, and Neovim from the repositories. I needed the development tools because the servers I couldn't get from the repositories had to be compiled from source. This is actually a nice trick, as you don't need a cross compiler. So before I ever put the SD card into the slot on the board, it already had all of the software that was needed and what I hoped were working config files.
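
Inside the chroot, installing software works the same as on a native Void system; the package names below are approximately what I pulled in:

xbps-install -Su                # sync repositories and apply updates
xbps-install apache gitea rust cargo gcc neovim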


First boot

Void uses Runit for init and process supervision. When the supervisor is started it takes a directory as an argument and supervises all of the services defined within that directory. Each service is a subdirectory containing, at minimum, an executable file named "run". This file is usually a shell script which starts the service without sending it to the background. Should the service go down, the supervisor will automatically bring it back up. To mark a service as not to be started, you just create a file in that subdirectory named "down". Nice and simple. All of the services that were to run on this board were marked "down" on first boot, to be tested and brought up one at a time so that I could avoid chaos, with the exception of the OpenSSH daemon, which was needed so I could log in from my laptop.
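
For a hypothetical service named myserver, the whole layout is just:

/etc/sv/myserver/
    run     # executable script; starts the daemon in the foreground
    down    # optional; while present, the supervisor leaves the service down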


Speaking of ssh, I made a mistake setting up pubkey authentication for my user. It's one that I've made before, unfortunately, and hopefully I'll remember in the future. Anyway, in order for pubkey authentication to work, the permissions should be 0700 for the .ssh directory and 0644 for the authorized_keys file. Anything more permissive than that and the daemon considers your keys insecure and will refuse pubkey authentication.
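
For reference, tightening things up is just two commands, assuming the standard layout under your home directory:

chmod 0700 ~/.ssh
chmod 0644 ~/.ssh/authorized_keys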


Anyway, other than ssh only allowing password auth for my user (which has since been switched off in favor of pubkey only), everything came up just fine on first boot. I was also able to bring up my services for Gemini, Spartan and Finger one at a time without incident, although I later discovered a bug in Agis (my Spartan server). Attempting to bring up Gitea revealed some incorrect permissions, which was an easy fix in the end. The fun started with Apache.


It seems that the `mpm_event` threading model, which Apache has switched to as its default, is not supported with musl libc. This is not documented anywhere, and the vanilla configuration shipped with Void has it as the default. So just getting the server to attempt to run required switching to `mpm_prefork`. After that, ssl was a blocker until I tracked down the correct modules to enable and simplified my vhost configs. During this stage I learned that Certbot is now taking an aggressive stance towards requiring ssl everywhere, to the point that it edits your vhost configs to redirect http requests using mod_rewrite. I absolutely fucking hate this. It should be my choice as the server admin whether I want to offer pages via regular http as well as https, period. I simply do not want any software changing my hand written configuration files. Probably time for a new Acme client at this stage.
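
The MPM switch itself is a one line change in the main config, something along these lines (the module paths follow Apache's conventional layout and may differ on Void):

#LoadModule mpm_event_module modules/mod_mpm_event.so
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so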


Working with Runit

Runit is crazy simple. As such, it does not have built-in dependency handling for your services. So if you have a service that depends on another service, you have to hack that into your `run` file yourself. If you have some shell scripting experience this usually isn't hard. Let's take a simple network server as an example. The files being served are on a separate disk mounted on /srv, so our service depends on /srv being mounted and the network being up. Since the supervisor will automatically try to restart any service that exits, after a short delay, we can just check whether the disk is mounted and ping the network, and exit if either condition isn't met. I also redirect that output to /dev/null so as not to clutter up the console, but that's personal taste and you might want to see all of your error messages.

#!/bin/sh
# Start up our hypothetical server

# Check if /srv is mounted
mountpoint /srv >/dev/null 2>&1 || exit 1

# Ping the network interface
ping -c 1 192.168.1.42 >/dev/null 2>&1 || exit 1

# Use `exec` here so that the service will have the same pid as this script
exec myserver --nodaemon

All of the services installed via the package manager come with a service directory that's ready to go, but I had to roll my own for Agate, Agis and Toe (Gemini, Spartan and Finger respectively). I also modified a few of the service directories Void provides to hack in this sort of primitive dependency ordering. If you do this, it's important to do so in a copy of the provided service directory so that it doesn't get nuked by the package manager at the next update. So I copied /etc/sv/apache to /etc/sv/apache_local and made my changes in apache_local. Those service directories then just get symlinked into /var/service.
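
In practice that looks something like:

cp -a /etc/sv/apache /etc/sv/apache_local
# edit /etc/sv/apache_local/run to add the dependency checks
ln -sv /etc/sv/apache_local /var/service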


To verify that things are working before the supervisor attempts to bring up your custom service repeatedly, you create a `down` file in the service directory before symlinking it, as mentioned above.

touch /etc/sv/myserver/down
ln -sv /etc/sv/myserver /var/service
# Start it once and verify it's working
sv once myserver
# If everything checks out, remove the `down` file
rm -v /var/service/myserver/down

Tags for this page

Void

Linux

sysadmin




All content for this site is licensed as CC BY-SA.

© 2022 by JeanG3nie

