Gemini proxy


Most people with a minimalist approach to tech lean towards using static site generators. The content is then served by a "generic" gemini server, sometimes coupled with CGI executables.


My approach is instead to write self-contained network services that talk gemini. Much of it probably comes down to the fact that I always prefer a fully programmable system. If I want to fix or extend something, I don't want to learn some complex, Turing-complete configuration language with poor tooling.
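

As a rough illustration of what "talking gemini" looks like for such a self-contained service, here is a minimal sketch in Go. The language, certificate file names and handler body are my assumptions, not details from this post; only the internal port 1981 and the "20 text/gemini" response header come from the text itself.

```
package main

import (
	"bufio"
	"crypto/tls"
	"io"
	"log"
	"net"
)

func main() {
	// Assumed certificate files; any self-signed pair works for gemini.
	cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	if err != nil {
		log.Fatal(err)
	}
	// Listen on an internal port; the proxy described further down is
	// what exposes services like this one on the standard port 1965.
	ln, err := tls.Listen("tcp", "localhost:1981",
		&tls.Config{Certificates: []tls.Certificate{cert}})
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go serve(conn)
	}
}

func serve(conn net.Conn) {
	defer conn.Close()
	// A gemini request is a single URL line terminated by CRLF.
	req, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		return
	}
	// Routing on the URL happens in ordinary code instead of a
	// configuration language; here every request gets the same page.
	_ = req
	io.WriteString(conn, "20 text/gemini\r\n")
	io.WriteString(conn, "# Hello from a self-contained service\n")
}
```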


• • •


Going back to CGI, with one OS process per request, or to some FastCGI equivalent with worker pools and an intermediate protocol over a network/socket hop? Big nope :)


In practice CGI isn't a problem for gemini servers; it works fine for the same reasons it worked in the early days of the internet: there isn't enough traffic for it to become an issue. But I don't like it, so that's that. To each their own.


The networked services approach has one disadvantage: the services are less composable. If you're generating static files, you're free to interleave them however you want.


• • •


Gemini proxy is my first stab at being able to run multiple services on one machine and serve them all on the standard port. The routing rules are simple: a plain text file mapping an external domain to an internal address:


cfdocs.wetterberg.nu localhost:1970
capsule.wetterberg.nu localhost:1981
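
The lookup and forwarding can be sketched roughly like this, again in Go and again only as a guess at the shape of it; the real proxy's code, and its behaviour on unknown hosts or backend failures, are not described here. The idea is that the proxy terminates TLS on port 1965, reads the single request line, picks a backend from the table above, replays the request over a new TLS connection and streams the response back. Parsing of the routing file is omitted and the status codes for the error paths are my choices.

```
package main

import (
	"bufio"
	"crypto/tls"
	"io"
	"log"
	"net"
	"net/url"
	"strings"
)

// routes corresponds to the plain text file above; loading and
// parsing that file is left out of this sketch.
var routes = map[string]string{
	"cfdocs.wetterberg.nu":  "localhost:1970",
	"capsule.wetterberg.nu": "localhost:1981",
}

func main() {
	cert, err := tls.LoadX509KeyPair("proxy.crt", "proxy.key") // assumed file names
	if err != nil {
		log.Fatal(err)
	}
	ln, err := tls.Listen("tcp", ":1965",
		&tls.Config{Certificates: []tls.Certificate{cert}})
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go handle(conn)
	}
}

func handle(conn net.Conn) {
	defer conn.Close()
	// A gemini request is a single URL line terminated by CRLF.
	req, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		return
	}
	u, err := url.Parse(strings.TrimSpace(req))
	if err != nil {
		io.WriteString(conn, "59 bad request\r\n")
		return
	}
	backend, ok := routes[u.Hostname()]
	if !ok {
		io.WriteString(conn, "53 no such capsule\r\n")
		return
	}
	// Open a new TLS connection to the internal service and replay the
	// request verbatim (external hostname included); the backend's
	// self-signed certificate is accepted without verification.
	up, err := tls.Dial("tcp", backend, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		io.WriteString(conn, "43 proxy error\r\n")
		return
	}
	defer up.Close()
	io.WriteString(up, req)
	io.Copy(conn, up) // stream the header and body straight back to the client
}
```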

Besides the composability issues, proxying in itself sacrifices the use of client certificates for any of the underlying services. That could possibly be solved by prepending some ~path on proxying... but I don't have any use cases for identifying anyone or anything that connects to my server, so ¯\_(ツ)_/¯


Gemini proxy at sourcehut
