        .     *  ⠈       +                ⠈
                   +  ──╕   ○    .   ⠈           ⠐
   ●     .           ╒═╕╞═╕ ╕ ╪═        *               .
                     ╘═╕│ │ │ │  .cx            +
           .     ....╘═╛╘ ╘ ╘ ╘═ ....:      ⠐        .
                 .               *                ⠐        .

The Decision to Self-Host


2022-07-06T05:53


I've been running a Ryzen 5600X with 16GB of RAM as a console-only development machine.¹ It's way, way overkill and I should be doing more with its hardware.² I'm paying for a bunch of cloud services that could easily be hosted from this machine.


Running servers at home is something I've long dismissed as not worth the hassle. For starters, an ADSL2 internet connection was about all I could expect to have, and it would be shared with housemates. You can't serve the internet much with that. Servers also need a cool, dry place with Ethernet, which isn't something I always had while sharehousing. And hardware failures are such a nuisance; they always happen at the most inopportune time. But today, those problems aren't so bad. I've had 100/40Mbps internet for years. I have a family now and, despite renting, I'm far less nomadic. I work from home, so I have a place to put a server (so long as I can keep it at a reasonable volume). And I'm running the hardware anyway, so I'd need to deal with its failures regardless.


I checked how much I'm actually spending on services that I could self-host. My Netflix clone, where I stream media from Backblaze B2, is getting to about US$20 a month.³ I have a VPS costing US$10 per month. I have US$5.00 a month in hosted Route 53 zones. These are all things I can easily host myself, though I wouldn't want to host my primary DNS zone from home, nor status.shit.cx.⁴ A $3.50 VPS from Vultr should be more than sufficient for those. So I can reduce a $35 per month bill to $3.50 by rearranging my hardware and buying some extra bits. A 4TB HDD costs about AU$120 (US$82). Not that I would use a single disk, but it highlights how much I'm overpaying for storage. Most of the bells and whistles that come with B2 are wasted on me.


These realisations and what followed have been keeping me very busy over the past few months.


I started considering how it might look if I were to self-host more of this stuff. I would need to isolate my dev machine from the server workloads somehow. Virtualisation is a tidy solution to that (and no, Docker is not, for all you fanboys). If I adopt VMs, a hypervisor will need to sit at the bottom of the stack. I can imagine running a handful of VMs with different workloads. I could containerise my apps and run them in one VM, but that isn't boring enough. If I'm running more than a couple of VMs, I'd like centralised monitoring and logging, and configuration management of some sort. I could also imagine needing to share data between multiple VMs. For that I'd probably need NFS on the hypervisor. At a high level, it all seems fairly straightforward. It doesn't need much tech I haven't used before (though I haven't used some of it in over a decade).
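

To make the NFS part concrete, this is roughly what the shares on the hypervisor might look like. It's only a sketch; the paths and subnet are hypothetical stand-ins, not my actual config.

# /etc/exports on the hypervisor (paths and subnet are hypothetical)
/srv/storage  192.168.1.0/24(rw,sync,no_subtree_check)
/srv/speed    192.168.1.0/24(rw,sync,no_subtree_check)

# Reload the export table after editing
exportfs -ra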


I would need at least 4TB to migrate my videos, but preferably I should have some room to grow, plus redundancy. RAID 5 will do. I also need protection against accidental deletion. That could be done with a backup disk in the same machine, but using read-only snapshots would be cheaper and better. This wouldn't protect me from fire or theft, but I'm happy to accept that risk. After pricing different disks, I decided on getting 4 x 3TB WD Red NAS disks, providing 9TB of usable storage in RAID 5.
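

Building that array is a couple of mdadm commands. A sketch, with hypothetical device names (check lsblk before running anything like this):

# Create a 4-disk RAID 5 array from the WD Reds (device names hypothetical)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u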


I don't want everything running off spinning disks, though. I would need some SSDs for the VMs. I already have two 250GB Samsung 870 SSDs in my dev machine. I decided to get one more and put them in RAID 5 for 500GB of usable storage; 250GB in RAID 1 seemed a little small. I would use LVM to allocate disk space to each VM. In hindsight, this was the right call: I currently have nearly 300GB allocated.
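

The LVM layering I have in mind looks roughly like this, with hypothetical names: the SSD array becomes a physical volume, and each VM gets its own logical volume carved out of it.

# The three SSDs already assembled as /dev/md1 (name hypothetical)
pvcreate /dev/md1
vgcreate vg_ssd /dev/md1
# One LV per guest, sized to taste
lvcreate -L 40G -n dev_machine vg_ssd
lvcreate -L 20G -n gemini vg_ssd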


I also have a 500GB NVMe SSD. I decided I would use that without redundancy where raw speed was all that mattered.


So at this point I'm imagining something like this:


┌──────────────────────────────────────────┐
│Hypervisor               Virtual Machines │
│                    ┌───┬────────────────┐│
│- NFS               │   │Dev Machine     ││
│- SSH               │ S ├────────────────┤│
│                    │ S │Gemini Server   ││
│                    │ D ├────────────────┤│
│┌─────────────────┐ │   │DNS Server      ││
││ storage volume  │ │ L ├────────────────┤│
│└─────────────────┘ │ V │HTTP Server     ││
│┌─────────────────┐ │ M ├────────────────┤│
││  speed volume   │ │   │Metrics & Logs  ││
│└─────────────────┘ └───┴────────────────┘│
└──────────────────────────────────────────┘

I have a bunch of VMs with their disks on an LVM volume, and the hypervisor shares the storage and speed volumes over NFS. What I didn't draw is the hypervisor's disk. That is a separate RAID 1 mirror across all three SSDs.
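

From inside a guest, consuming those shares is just an NFS mount. Something like this in a VM's /etc/fstab, again with hypothetical hostnames and paths:

# /etc/fstab entries in a guest VM (hostname and paths hypothetical)
hypervisor:/srv/storage  /mnt/storage  nfs  defaults,_netdev  0  0
hypervisor:/srv/speed    /mnt/speed    nfs  defaults,_netdev  0  0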


I didn't want to break up my dev machine before I had a running replacement. I would need some extra hardware to allow me to take my time in building the server. I ended up getting a 10th gen i3 combo on the cheap. I also needed another case to hold all the disks (4 x 3.5" and 3 x 2.5"). I found a second-hand Silverstone RM41-506 for that. For the next few weeks I scrounged the parts I needed to put the machine together.


Eventually, I had enough parts to begin building the hypervisor. I wanted a lightweight distro that supported Xen and ZFS. I could easily have gone with KVM instead of Xen; the nudge I needed was that Alpine Linux appeared to have pretty good Xen support. A small, light hypervisor is my preference and Alpine looked ideal. Unfortunately, it didn't work for me. I got it installed, but I couldn't get the guest VMs running for some reason; I can't remember why now. I decided to move on from Alpine and installed Debian instead.
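

Getting a Xen dom0 on Debian is pleasantly boring. Roughly, as a sketch rather than a full walkthrough:

# Install the Xen hypervisor and toolstack on Debian
apt install xen-system-amd64
# Reboot into the Xen GRUB entry, then confirm dom0 is running
xl list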


I got Xen running easily on Debian. Guest VMs were no trouble either, but ZFS was. Xen wanted to run with a realtime kernel (which makes sense for a hypervisor), but ZFS would not. I had a choice to make: build a separate NAS, stop using a realtime kernel, or use another filesystem. I chose BTRFS. I've had good experiences with it in the past and it has the features that I need (CoW and snapshots). It also has native RAID support, but its RAID 5/6 mode is still considered unstable and its use is discouraged. I put the BTRFS volume on top of an md RAID 5 array.
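

So the stack ends up as mdadm underneath and BTRFS on top, with read-only snapshots providing the accidental-deletion protection I wanted earlier. Sketched with hypothetical device and path names:

# Filesystem on top of the md array, mounted as the storage volume
mkfs.btrfs /dev/md0
mount /dev/md0 /srv/storage
# Keep data in a subvolume so it can be snapshotted cleanly
btrfs subvolume create /srv/storage/data
mkdir -p /srv/storage/.snapshots
# A read-only snapshot (the date is illustrative)
btrfs subvolume snapshot -r /srv/storage/data /srv/storage/.snapshots/data-2022-07-06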


I built a replacement dev machine, copied my files over, and configured WireGuard. I used it this way for a week before I was confident that I could turn off the old machine and scavenge the hardware for the server.
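

The WireGuard side is a single interface on each end. A minimal sketch of the dev machine's config; the keys, addresses, and endpoint are placeholders, not my real values.

# /etc/wireguard/wg0.conf on the dev machine (all values are placeholders)
[Interface]
PrivateKey = <dev-machine-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.net:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25

# Bring it up with:  wg-quick up wg0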


At this point, my server wasn't serving much. It only ran my dev machine as a VM but it was the beginning of a platform capable of much more.



1. devenv

2. New Hardware

3. A Self-Hosted Netflix Clone

4. Announcing a Gemini Monitoring Service



---


More Posts Like This

Return to Homepage


The content for this site is CC-BY-SA-4.0.
