


2021-02-12 | 7 minute read | tags: VPS, Linux, Self-host


Howto proxy your self-hosted services using web server


Many services available for self-hosting provide the promised functionality, but leave security and/or authentication up to you. These are the cases when a web server comes to the rescue: it can create a layer between the internet and your service that provides additional features like authentication, an upgrade to https with a valid certificate, DoS prevention using fail2ban, or the ability to reach the service via a custom (sub)domain. These features were explained in the previous article.


Articles of this series


Howto setup your personal XMPP server


Howto setup your personal CalDAV/CardDAV server


Howto proxy your self-hosted services using web server


Howto setup and secure web server


Services you can self-host on your personal Linux VPS


Howto secure your personal Linux VPS


Howto setup your personal Linux VPS


Why setup your personal Linux VPS





The basics


The theory is simple. Your self-hosted service listens on some (most likely high) port. This port is open to the public internet so you can connect and communicate with the service. We will hide this port from public visibility using the firewall and then create a new virtual host on the web server. In case of apache or nginx it will be a new file in /etc/[nginx|apache2]/sites-available/. Don't forget to enable the file by creating a symlink in /etc/[nginx|apache2]/sites-enabled and to reload the web server.

We are going to proxy, so the important part of the file for us is the 'location' section. We are not going to define the usual 'root' clause pointing to a web application on the file system; instead we define the 'proxy_pass' attribute, which basically says that all traffic coming to this location should be forwarded to the value of the 'proxy_pass' attribute. And of course, everything that comes back will be forwarded back to the client. There are other attributes that configure and alter this default logic.
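
Blocking the port and enabling the new site take only a couple of commands. A minimal sketch, assuming ufw as the firewall, nginx as the web server and a service listening on port 13000 (all three are assumptions, adjust to your setup):


sudo ufw deny 13000/tcp                   # hide the service port from the public internet
sudo ln -s /etc/nginx/sites-available/servicex /etc/nginx/sites-enabled/servicex
sudo nginx -t && sudo systemctl reload nginx   # validate the configuration, then reload

With the port hidden and the virtual host enabled, let's have a look at such a configuration example (nginx):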


Example proxy configuration


location / {
  proxy_pass_header       Server;
  proxy_set_header        X-Forwarded-For    $remote_addr;
  proxy_set_header        X-Forwarded-Proto  "https";
  proxy_set_header        X-Forwarded-Host   $host;
  proxy_set_header        X-Forwarded-By     $server_addr:$server_port;
  proxy_connect_timeout   300;
  proxy_send_timeout      300;
  send_timeout            300;
  keepalive_timeout       300;
  proxy_http_version      1.1;
  proxy_pass              http://127.0.0.1:13000;
}

Let's go over the configuration details to check what is going on:



proxy_pass_header - tells which headers should be kept unchanged as they come from the self-hosted service. In our example configuration, we are saying that the 'Server' header should not be altered by the web server, as we want to expose information about the service that runs behind the proxy.


proxy_set_header - tells which headers should be set by the web server and how. In our case, we are setting the standard proxy headers so the destination service will know some of the original request information and will also be aware that it is running behind a proxy. If you don't plan to use web server authentication and your service is going to take care of it, be sure you are forwarding those headers too. For example, if your service uses http basic, then you need to add another proxy_set_header like so:


proxy_set_header        HTTP_AUTHORIZATION $http_authorization;


the rest of the attributes are self-explanatory: a bunch of timeouts and the http version.


If your service uses server-sent events (SSE), you need to add several other configuration options:


proxy_set_header Connection    '';
chunked_transfer_encoding      off;
proxy_buffering                off;
proxy_cache                    off;
proxy_read_timeout             24h;

Most of them apply to websocket communication too. The reason behind them is that both technologies use long-lasting open connections and serve as a persistent communication channel between client and server. The web server proxy should keep these connections open and mustn't apply any alterations or caching.
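
For websockets there is one more twist: the client asks for a protocol upgrade, and the proxy has to pass that handshake through. The directives below are the standard nginx recipe for it (the 'Upgrade' and 'Connection' headers are hop-by-hop, so they must be set explicitly):


proxy_http_version 1.1;
proxy_set_header   Upgrade    $http_upgrade;
proxy_set_header   Connection "upgrade";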


Authentication


If you want to add authentication to your newly proxied self-hosted service, just add 2 more configuration options:


auth_basic                     "password is required";
auth_basic_user_file           /etc/nginx/htpasswd-file-for-service;

Now you have enabled 'http basic' authentication. The user will have to provide a login and password to get through the proxy. The 'htpasswd-file-for-service' is a plain text file with login:password tuples in htpasswd format. Generating such a file is easy, just call:


htpasswd -c /etc/nginx/htpasswd-file-for-service peter

replace 'htpasswd-file-for-service' with your custom file name

replace 'peter' with your new user name

drop the '-c' option if the file already exists and you only want to append another user

you can of course modify the file by hand too


If you already have a centralised user database and you want to use it instead of a static file, additional web server modules make that possible. For example, nginx has support for mysql or ldap.
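
Another option that ships with stock nginx is the auth_request module. It delegates the allow/deny decision to a subrequest, which any small internal service sitting in front of your user database can answer. A minimal sketch, assuming a hypothetical verifier listening on 127.0.0.1:9000 (the verifier itself is yours to provide):


location / {
    auth_request            /internal-auth;
    proxy_pass              http://127.0.0.1:13000;
}

location = /internal-auth {
    internal;
    proxy_pass              http://127.0.0.1:9000/verify;
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI $request_uri;
}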


Finalized example


So the final virtual host file may look like the one below.


1. First we define a server section for HTTP on port 80 that automatically redirects all traffic to HTTPS on port 443

2. Then we enable ssl and point to the necessary certificate, key and Diffie-Hellman parameter files

3. We define a location section where we configure the proxy

4. We let the web server take care of authentication using http basic and our newly generated htpasswd file


server {
    listen                       80;
    server_name                  servicex.mizik.sk;
    return                       301 https://$host$request_uri;
}

server {
    listen                       443 ssl;
    server_name                  servicex.mizik.sk;
    charset                      utf-8;

    ssl_certificate              /etc/letsencrypt/live/servicex.mizik.sk/fullchain.pem;
    ssl_certificate_key          /etc/letsencrypt/live/servicex.mizik.sk/privkey.pem;
    ssl_dhparam                  /etc/nginx/ssl/dhparams.pem;

    location / {
        proxy_pass               http://localhost:18000/;
        proxy_pass_header        Server;
        proxy_set_header         X-Script-Name   /;
        proxy_set_header         X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header         X-Remote-User   $remote_user;
        auth_basic               "password for servicex is required";
        auth_basic_user_file     /etc/nginx/htpasswd-servicex;
    }
}
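
Once this is in place, you can check the whole chain (DNS, certificate, authentication, proxy) from any machine. This sketch assumes the DNS record for servicex.mizik.sk already points at your VPS and that 'peter' exists in the htpasswd file:


curl -I https://servicex.mizik.sk/
curl -I -u peter https://servicex.mizik.sk/

The first call should come back with 401 Unauthorized; the second asks for the password and should get through to the service.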

Remember that you can define multiple 'location' sections, so you can have 'location /' for a static web page and 'location /comments' that proxies to some self-hosted commenting solution. This keeps things nice and clean, and since everything is served from a single origin, it also works around cross-site and CSP issues.
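
A minimal sketch of such a split, with a hypothetical commenting service on port 8080:


location / {
    root  /var/www/mysite;
}

location /comments/ {
    proxy_pass  http://127.0.0.1:8080/;
}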


Summary


Using this simple setup you get unified and standardised access to your self-hosted services. From the outside they all look the same, until the user gets through the authentication to the specific API of a service. Check other web server modules to find out which other features can be globally applied to your self-hosted service APIs.





2024 Marian Mizik | License: CC BY-NC-SA 4.0 | marian at mizik dot sk | marian_mizik@bsd.network (mastodon)
