
Putting a ReactJS frontend and a PHP/Laravel backend into production


Author: blchrd - Date: 2023-07-31


Disclaimer


This is not a tutorial of any kind.


I don't really describe what Docker, Laravel, or ReactJS are, nor do I explain all the configuration files below.


There are many people who write much better than me and have already explained all of this. Your favorite search engine will direct you to them.


I just want to share with you what I've learned while trying to put this project into production and the difficulties I've encountered along the way.


So, that said, we can start.


The project: Playlist Share


I have been working on a long-term project to share all the music I listen to and organize it on my own server.


Initially, I used Twitter extensively, and later switched to Mastodon. While these platforms are quite usable for day-to-day sharing, retrieving all my listening data for a month or even a year becomes more complicated. It's possible, but not an easy task. That's why I decided to develop this application.


Currently, the application looks like this (if you're into the same music as me, you can search for the two albums in the screenshot - they're great!):


Screenshot of Playlist Share [IMG]


The stack choice


I had been wanting to give ReactJS a try for a while, so choosing the frontend technology wasn't too hard.


On the other hand, I hesitated a little with the backend. Initially, I wanted to use PHP, so I started with the Symfony framework, which I was already familiar with. However, I also wanted to step out of my comfort zone. So, in the middle of the project, I decided to switch frameworks and started using Laravel.


I didn't regret it at all; I found Laravel a lot more intuitive for API development. However, this is a personal preference, and I know some people would argue that Symfony is better. It's worth noting that Laravel is built on top of several Symfony components, so the two do have a lot in common.


As for the database, I didn't need anything really large for now, so I went with my usual choice: SQLite. I don't have many strong arguments here; I've just loved this database since I first used it a long time ago.


Learning Docker


First steps, first mistakes, what a mess


I didn't know Docker well, but just like ReactJS, I wanted to give it a try. So there I was, learning Docker and Dockerfile syntax and starting to test things out.


At first, I created a rather messy Docker repository with a lot of git clone commands directly in the Dockerfiles. In my head, it wasn't the best approach, but hey! Learning, right?


My initial Dockerfiles looked like this:


Frontend Dockerfile


# Node.js Alpine base image (a community image that ships with git)
FROM coexcz/node-alpine

RUN git clone https://framagit.org/blchrd/playlistshare-front.git /usr/src/app/front

# Set the working directory
WORKDIR /usr/src/app/front
# COPY frontend.env .env

# Install dependencies
RUN npm install

# Build the production-ready code
RUN npm run build

# Expose port 3000
EXPOSE 3000

# Start the React development server
CMD ["npm", "start"]

Backend Dockerfile


# Use the official PHP base image
FROM php:8.2-cli

# Install dependencies
RUN apt-get update -y && apt-get install -y libonig-dev libmcrypt-dev libsqlite3-dev
RUN apt-get install git --yes && apt-get install zip unzip --yes
RUN docker-php-ext-install pdo pdo_sqlite mbstring
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

RUN git clone https://framagit.org/blchrd/playlistshare-back-laravel.git /app/back

# Set the working directory
WORKDIR /app/back
COPY backend.env .env

# Expose port 8000
EXPOSE 8000

# Run the API
#CMD composer install --optimize-autoloader --no-dev && php artisan migrate --force && php artisan config:cache && php artisan serve --host=0.0.0.0 --port=8000
CMD composer install && php artisan migrate --force && php artisan serve --host=0.0.0.0 --port=8000

I told you, it was messy.


My main issue was that I had two repositories: one for the frontend and one for the backend. I guessed that having only one repository would probably make the Docker configuration a lot easier (though I wasn't entirely sure, just assuming).


Since I didn't want to merge the two repositories, my initial thought was to create a third repository containing all the Docker configuration and to pull all the code I needed with git directly in the Dockerfiles. However, I wasn't sure this was considered good practice with Docker, so I decided to look for another way to accomplish my goal.


Interlude: learning about git submodules


Then, while looking for a more elegant solution to my issue, I discovered git submodules.


git submodule add <repo_url> <target_dir>

It was the solution to all my multi-repo issues. A submodule works a bit like a symbolic link between git repositories: the parent repository stores a pointer to a specific commit of another repository, which lets you keep multiple repositories for code tracking but deal with only one when it comes to deployment.


Using git submodules allowed me to create the Dockerfile in the frontend and backend repositories themselves, without having to deal with messy Dockerfiles like the ones shown above.


The only side effect is that I have two more commands to remember:


git submodule update --init --recursive
git submodule update --recursive --remote

The first one initializes the git submodules, and the second one pulls the latest version of each of them.
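
For this project, wiring up the third (deploy) repository could look like this - a sketch based on the repository URLs above, with directory names matching the build contexts used later in docker-compose.yml:


# In the deploy repository:
git submodule add https://framagit.org/blchrd/playlistshare-front.git frontend
git submodule add https://framagit.org/blchrd/playlistshare-back-laravel.git backend
git commit -m "Add frontend and backend submodules"

# On a fresh clone of the deploy repository:
git submodule update --init --recursive

# Later, to move the submodules to their latest upstream commits:
git submodule update --recursive --remote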


Rewriting the Dockerfiles and docker-compose.yml


Now I can remove all the git clone commands from my Dockerfiles, and I keep the third repository for the final docker-compose configuration.


Here is my current frontend Dockerfile:


FROM node:14-alpine
WORKDIR /app
COPY ./ /app/

# Environment variables (REACT_APP_* values are baked in at build time)
ENV REACT_APP_API_URL="http://localhost:8000/api/v1"
ENV REACT_APP_DEBUG=0
ENV REACT_APP_TITLE="Playlist Share"
ENV REACT_APP_MAX_ITEM_PER_PAGE=10

# Install dependencies and build the production bundle
RUN npm install
RUN npm run build

# Serve the static build instead of running the dev server
RUN npm install -g serve

EXPOSE 3000
CMD serve -s build
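
To test an image on its own, outside of docker-compose, something like this should work (the tag and context path simply mirror the docker-compose.yml shown below):


docker build -t playlist_share_frontend:0.1 ./frontend
docker run --rm -p 127.0.0.1:3000:3000 playlist_share_frontend:0.1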

And my backend Dockerfile:


FROM php:8.2-cli
RUN apt-get update -y && apt-get install -y openssl zip unzip git libonig-dev libmcrypt-dev libsqlite3-dev
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install pdo mbstring
WORKDIR /app
COPY . /app
COPY .env.example .env

# Environment variables
ENV APP_ENV=production
ENV APP_DEBUG=false
ENV LOG_CHANNEL=errorlog
ENV LOG_LEVEL=warning
ENV APP_URL=http://localhost:8000
ENV FRONTEND_URL=http://localhost:3000

# Environment variables for admin user
ENV ADMIN_NAME=test
ENV ADMIN_EMAIL=test@test.fr
ENV ADMIN_PASSWORD="12345678"

RUN composer install
RUN php artisan migrate --force
RUN php artisan key:generate
# caching stuff for production
RUN php artisan cache:clear
RUN php artisan config:cache
RUN php artisan route:cache
RUN php artisan view:cache

EXPOSE 8000
CMD php artisan serve --host 0.0.0.0 --port=8000

Don't forget to include RUN php artisan key:generate in your Laravel Dockerfile. If you skip this step, your backend will consistently return a 500 error, because Laravel refuses to run without an application key.


Additionally, I use the --force argument with php artisan migrate so the migrations run without a confirmation prompt and the database is created if it doesn't exist (since I'm using an SQLite database).
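
For reference, the database section of the Laravel .env for an SQLite setup usually boils down to two lines; the path here is an assumption based on the /app/database/ volume in the docker-compose.yml below:


DB_CONNECTION=sqlite
DB_DATABASE=/app/database/database.sqlite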


Production time


To put it into production, my first thought was to include the web server directly in the docker-compose.yml file. That works well in some cases, especially when the server hosts only this application, but for my usage, it's not really ideal. I talked about this in my last post[1].


I keep the nginx service in the docker-compose.yml file, but I simply comment out the lines related to it. The current configuration file looks like this:


version: '3'

services:
  # nginx:
  #   image: nginx:latest
  #   container_name: production_nginx
  #   volumes:
  #     - ./nginx.conf:/etc/nginx/nginx.conf
  #   ports:
  #     - 80:80
  #     - 443:443

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    image: playlist_share_backend:0.1
    container_name: playlist_share_backend
    volumes:
      - /app/database/
    expose:
      - "8000"
    ports:
      - "127.0.0.1:8000:8000"

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    image: playlist_share_frontend:0.1
    container_name: playlist_share_frontend
    expose:
      - "3000"
    ports:
      - "127.0.0.1:3000:3000"

For the ports lines, binding to 127.0.0.1 means localhost can access the ports in question, but they can't be reached from outside. So, it's a win-win situation in my case.
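
Since the containers only listen on 127.0.0.1, the nginx running directly on the host takes care of the public side. A minimal sketch of what the host-side server block could look like (the server name is a placeholder, not my actual configuration):


server {
    listen 80;
    server_name playlistshare.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }

    location /backend {
        rewrite ^/backend(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8000;
    }
}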


Here is the nginx configuration file, in case you prefer to include the nginx server directly in Docker:


events {}
http {
    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://frontend:3000;
        }

        location /backend {
            # Strip the /backend prefix before proxying to the API
            rewrite ^/backend(.*)$ $1 break;
            proxy_pass http://backend:8000;
        }
    }
}

In the nginx proxy config in Docker, you need to use the container name in the URL (e.g., http://backend:8000) instead of localhost: inside the Docker network, localhost refers to the nginx container itself, while Docker's embedded DNS resolves the service names to the right containers. If you mistakenly use localhost, you'll get a nice 502 Bad Gateway error, and if you're not aware of this - like I was - you might get stuck for hours trying to figure out the issue.
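
A quick way to sanity-check that resolution, assuming the nginx service from the compose file above is enabled (getent is available in the Debian-based nginx image):


docker exec production_nginx getent hosts backend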


After going through all of this (which took me several days to figure out), the final command that brings everything up together smoothly (at least for me) is:


docker-compose up --build

And that's it.


To update the containers, for now, I use:


docker-compose up --force-recreate --build -d

I'm not sure if there's a better way, but currently, it is sufficient for my needs.
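
Combined with the submodule commands from earlier, the whole update fits in a small script (a sketch of the idea, not a script from the actual repository):


#!/bin/sh
set -e
git pull                                    # update the deploy repository itself
git submodule update --recursive --remote   # pull the latest frontend and backend code
docker-compose up --force-recreate --build -d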


Conclusion (kind of)


It was a cool journey to get here, but that's just the beginning of it. I still have to learn about CI/CD pipelines, cloud computing, and all the other fascinating and time-consuming tech topics. Development is a non-stop learning process, and that's why I love developing stuff so much.


There is one online instance[2] with some data in it - it's my personal instance, my development instance, my test instance, and... well, you get it. You can't test it for now - sorry about that - but I will continue to work on it and eventually set up a public instance for you to try.


So, keep developing stuff, sharing knowledge, and, above all else, take care of yourself.


1: https://blchrd.eu/2023/07/28/migration-from-apache-to-nginx/

2: https://plsh.blchrd.eu
