26/10/2025

Kubernetes is simple… or not

It’s been years since I started working with K8s, and I’ve lost count of how many clusters I’ve worked on… and I really don’t understand why people are so mad at it, it’s really simple…

You can run services on K8s… except they are not what you think.

In reality your services run on pods… which are not containers, but sandboxes that contain containers…

And inside each pod you can have as many containers as you want… but not all of them are used to run services: some are sidecars, others are temporary (init) containers, and others actually run your services…

And you should not create pods and the containers inside them yourself, you’re supposed to create other objects called deployments and statefulsets… and deployments create yet other objects called replicasets… which create your pods… which have your containers inside…

And deployments and statefulsets are similar in some ways and different in others: you’re supposed to use statefulsets if you want to run services with persistent data, and deployments if you are running “ephemeral” services… however you can run deployments with persistent data anyway…

Deployments can be configured to run several instances of your pod and scale your applications… and statefulsets can do the same… except you can end up screwing everything up because you create a split-brain architecture…
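Just to make the chain of objects concrete, a minimal Deployment manifest looks something like this (name, labels and image are made up); apply it and Kubernetes creates a ReplicaSet, which in turn creates the pods:

```yaml
# Hypothetical minimal Deployment: name, labels and image are examples
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3            # three pod instances, managed through a ReplicaSet
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0
```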

But let’s get back to services, K8s services, which are not services but network endpoints that balance requests across your pods… but there are also ingresses that can do the same… but you must use services for some cases and ingresses for others, and use specific types of services for specific use cases…
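For what it’s worth, a plain ClusterIP service is just a label selector with ports (again, hypothetical names):

```yaml
# Hypothetical ClusterIP Service matching the pods labeled app: my-app
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP        # internal only; NodePort and LoadBalancer are other types
  selector:
    app: my-app
  ports:
    - port: 80           # port exposed by the service
      targetPort: 8080   # port the container listens on
```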

And as I said you can persist data with persistent volumes… which you can’t manage yourself because you need persistent volume claims… which can do nice things like deleting your data without any warning because they depend on storage classes… which are some sort of driver to use some kind of storage to save your data, but also dictate how storage must be used and which policies and rules have to be followed while interacting with your data and storage…
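A claim, for reference, looks like this (the storage class name depends entirely on your cluster, and its reclaim policy decides the fate of your data):

```yaml
# Hypothetical claim: the storage class name is cluster-specific,
# and its reclaim policy decides whether data survives the claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # assumed; check your cluster's classes
  resources:
    requests:
      storage: 10Gi
```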

And then you have backups… except nobody gives a shit about backups in the K8s world… so you don’t have any backup measure or strategy out of the box and you have to rely on third-party tools you have to trust…

And you can’t even mount your persistent volume on your PC and do a cold backup, because if you start your containers your applications will access the data and you’d get an inconsistent hot backup, and if you kill your application processes the pod ceases to exist and you can’t access your data… so you have to create a temporary container inside a temporary pod which mounts the persistent volume while your application pod is down, to manually create a cold backup… isn’t it simple?
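For the record, the temporary pod trick looks more or less like this (a sketch only: the pod name, image and claim name are made up):

```yaml
# Hypothetical throwaway pod mounting the claim while the application is down
apiVersion: v1
kind: Pod
metadata:
  name: backup
spec:
  containers:
    - name: backup
      image: busybox
      command: ["sleep", "3600"]   # keep the pod alive long enough to copy data
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-app-data     # assumed claim name
```

Once the pod is running you can archive the data with `kubectl exec backup -- tar czf /tmp/data.tar.gz -C /data .`, copy it out with `kubectl cp backup:/tmp/data.tar.gz ./cold-backup.tar.gz`, delete the pod and scale your application back up.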

So yes, Kubernetes is simple

13/10/2025

Update Windows applications with winget

Here’s a quick tip to update your Windows applications using Winget.

If you don’t know it, winget is the Windows Package Manager, available since Windows 10 1809 (build 17763). It works like a charm and can make your life waaaaaaayyy easier when it comes to keeping your system updated.

Basically, to update all the Windows applications available through winget, open a command prompt as Administrator and launch

winget upgrade --all --silent --accept-source-agreements --accept-package-agreements

If you want to keep some of your applications pinned to a specific version and avoid updating them, use

winget pin add <Vendor.Packagename> --blocking

To see the list of pinned packages use

winget pin list

To see the list of your packages use

winget list

08/03/2025

New Nagios project

I love Nagios, it’s the first monitoring service I started using and I’ve loved it since the beginning.

I love it because, no matter what people say, it’s simple, it’s reliable, it’s predictable, and it’s easy to reproduce every check Nagios is doing and verify the results yourself.

Over the years I tried several other tools, but I always found them inferior to Nagios: they were too bulky, too chaotic and too disorganized.

Since last autumn I was finally able to get back to my good old Nagios at work: I upgraded it, and I expanded it to check new services, check the old ones in a better way and make the setup easier and easier.

I would like to start a new project to share how I use it, which checks I do, and hopefully show someone that good old Nagios still has some arrows to shoot :)

Let’s start with a check I love and I never found in any other monitoring software: check_reboot_required

This check warns you if there’s a new kernel on your GNU/Linux machine that requires a reboot. It basically extracts the running kernel version and the latest kernel installed on the server; if they don’t match and the running kernel is older, it raises a warning alert. Very simple and very effective to keep your servers updated and secure.
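The core of the logic can be sketched in a few lines of shell (this is not Johan’s actual code, just an illustration of the version comparison it performs; the version strings are examples):

```shell
#!/bin/sh
# newer_than A B: succeeds when version A sorts strictly after version B,
# using GNU sort's version ordering (sort -V).
newer_than() {
    [ "$1" != "$2" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

running="5.14.0-284.el9"     # the real check uses $(uname -r)
installed="5.14.0-362.el9"   # the real check reads the newest installed kernel
if newer_than "$installed" "$running"; then
    echo "WARNING: running kernel $running, installed kernel $installed, reboot required"
fi
```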

Obviously this check is very important if you constantly run automatic updates (via yum-cron, dnf-automatic or unattended-upgrades).

The check was written by Johan Ryberg and you can find it in his GitHub repo. It works perfectly fine on any RedHat-based distribution and also on any Debian-based distribution; I suggested a small change to make it work with Amazon Linux 2 as well.

To use it you only have to

1. place the check_reboot_required file on the server you want to monitor in the Nagios plugins directory (/usr/lib64/nagios/plugins on RHEL-based distributions or /usr/lib/nagios/plugins on Debian-based distributions)

2. add this simple command to the nrpe config file (/etc/nagios/nrpe.cfg) and restart the nrpe service.

command[check_reboot]=<NAGIOS PLUGINS PATH>/check_reboot_required -s $ARG1$

3. add the “Reboot required” service to your host in the Nagios server configuration

define service{
    use                     generic-service
    host_name               server.domain.tld
    service_description     Reboot required
    check_command           check_nrpe!check_reboot!w
}

4. restart Nagios and enjoy


23/11/2024

CentOS 8 multiple httpd instances

This is an old post I had in draft since… I don’t know, maybe years.

Anyway, CentOS 8 is old and out of support, but I still think running several instances of the Apache httpd server through systemd can be useful today on other modern Linux distributions, so I think it’s time to clean up the drafts and publish it.

—————————————————

Recently a customer asked me to set up WebDAV access to a VM to change some files inside a couple of Java web applications deployed on a Tomcat instance.
My first choice was to configure the WebDAV servlet already available with Tomcat, which sounded like a nice and elegant solution, but that went wrong because the WebDAV HTTP methods were blocked by some Spring WAF protection, and changing the Java applications was not an option (for various reasons I won’t explain, but not technical ones).

At this point I thought about creating a new virtualhost for the WebDAV access, but in that case file ownership would be a problem, and running the frontend webserver with write permission on all the application resources was not a good idea.
The solution was simple: set up a new webserver running as the tomcat user (the Tomcat file owner) on a different port (and only available on the LAN), but I found no documentation for some of the latest distros using systemd.
Yes I know, I could install nginx or another webserver, but running a new Apache configuration felt much more elegant to me.

First of all, create a new httpd systemd unit by copying the old one with a new name

cp -v /usr/lib/systemd/system/httpd.service /usr/lib/systemd/system/httpd-dav.service

Copy the main httpd config file for the new webserver

cp -v /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd-dav.conf

Enable the new systemd unit to start at boot

systemctl enable httpd-dav.service

To check that the unit is enabled, run

systemctl list-unit-files | grep httpd-dav

Now you must edit the new systemd unit to use the new httpd-dav.conf file

systemctl edit httpd-dav.service

…and add this

[Service]
Environment=OPTIONS="-f /etc/httpd/conf/httpd-dav.conf"

Now you must edit the new httpd-dav.conf, changing some basic directives so it doesn’t overlap with the main Apache configuration:
In my case I changed Listen, PidFile, User, Group, ErrorLog and CustomLog, removed “IncludeOptional conf.d/*.conf” and added a new virtualhost with mod_dav enabled, basic authentication, etc etc…
Adjust your Apache configuration as you need, but at the very least you have to change the Listen and PidFile directives to avoid conflicts with the other httpd process.
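As a reference, the minimal set of changed directives in httpd-dav.conf looks something like this (port, user and paths are example values, adapt them to your setup):

```apache
# httpd-dav.conf: minimal changes (example values)
Listen 8080                              # different port from the main httpd
PidFile /run/httpd-dav/httpd.pid         # different pid file (directory must exist)
User tomcat
Group tomcat
ErrorLog logs/dav_error_log
CustomLog logs/dav_access_log combined
# note: no "IncludeOptional conf.d/*.conf" here, so the main vhosts are not loaded
```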

When you’re ready you only have to start it with

systemctl start httpd-dav.service

I hope this can be helpful.

27/10/2024

Change Bookstack url and context

I love Bookstack, I actually think it’s one of the best wiki projects around.

It’s well documented, it works like a charm, the developer is very active (and he’s also a very kind person, which has nothing to do with the software, but it’s always a pleasure to interact with him) and it has very nice features:

  • a nice and responsive design
  • drafts autosave
  • MFA out of the box
  • diagrams.net integration

It also works perfectly fine in a docker container; technically the official project does not offer a container image, but there are two groups building them and they’re referenced directly in the official documentation.

Recently I started to sort things out on my beloved Raspberry Pi 5; in particular I’m moving services so I can reverse proxy them through a single Apache httpd instance (you know I still love Apache :D ). Today I moved Bookstack around, and in particular I did two things:

  1. change Bookstack hostname (for example from https://site.domain.tld to https://newsite.domain.tld )
  2. make Bookstack work under a specific url context (for example https://site.domain.tld/bookstack instead of https://site.domain.tld ).

In my environment I’m using the LinuxServer.io docker image, so check the project site for details. I’m also using docker compose; if you’re not familiar with it, start using it for Reorx’s sake.

Backup

First of all take a damn backup, it’s mandatory.

Seriously I’m not joking.

Stop the containers

cd /data/docker/bookstack ; docker compose down

Backup files with a simple tar, restic, kopia, whatever you want, but DO IT!

cd /data/docker/ ; tar -cpzf /backup/bookstack-backup.tar.gz bookstack

Change Bookstack hostname

This process is documented in the Bookstack documentation (LINK), but I decided to mention it anyway because the procedure is a little different in a docker container, so it’s worth spending a few words on it.

First of all you have to change the APP_URL configuration variable; in the case of a docker container it’s enough to change the environment variable in the docker-compose.yaml file, so open the file and change the variable to the new url.
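In the compose file that part looks something like this (an excerpt, trimmed to the relevant bit; image tag as in the LinuxServer.io docs):

```yaml
# docker-compose.yaml (excerpt, example values)
services:
  bookstack:
    image: lscr.io/linuxserver/bookstack:latest
    environment:
      - APP_URL=https://newsite.domain.tld   # the new url goes here
```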

Now you must replace the old url in the database with the new one using the bookstack:update-url command; in the case of a docker container you must identify where the Laravel framework artisan file is and launch it according to the documentation.

docker exec -it bookstack php /app/www/artisan bookstack:update-url https://site.domain.tld https://newsite.domain.tld

After that clear the cache using

docker exec -it bookstack php /app/www/artisan cache:clear

Restart the docker container to apply the environment variable you changed earlier with the new url.

cd /data/docker/bookstack ; docker compose down ; docker compose up -d

Done, now your Bookstack instance should be reachable at the new url.


Change Bookstack root context

This change is a little trickier, because it also involves some webserver changes.

First of all you must repeat the same process used for changing the url hostname of your Bookstack instance, this time including the context you want to use (for example /bookstack ).

Let’s quickly review the steps:

1) Change the APP_URL environment variable in the docker-compose.yaml (APP_URL=https://newsite.domain.tld/bookstack in this case)

2) Replace the url in the database using the bookstack:update-url

docker exec -it bookstack php /app/www/artisan bookstack:update-url https://newsite.domain.tld https://newsite.domain.tld/bookstack

3) Clear Bookstack cache

docker exec -it bookstack php /app/www/artisan cache:clear

4) Restart the docker container to apply the environment variable you changed earlier with the new url.

cd /data/docker/bookstack ; docker compose down ; docker compose up -d

Now you must review the webserver configuration inside your docker container; in the case of the LinuxServer.io container there’s an nginx instance running inside the container, and you can find its configuration in the /config/nginx/ directory inside the container.

If you followed the LinuxServer.io recommendations the /config directory should be a persistent volume (or a persistent path on your docker host), so any changes in the nginx configuration files should not be lost in case of a container restart.

In my case the config persistent volume is located in the /data/docker/bookstack/bookstack-config directory, so the nginx configuration is located in the file /data/docker/bookstack/bookstack-config/nginx/site-confs/default.conf.

Apply this patch

wget https://tasslehoff.burrfoot.it/pub/bookstack-nginx.patch ; \
patch /data/docker/bookstack/bookstack-config/nginx/site-confs/default.conf < bookstack-nginx.patch

Reload nginx configuration

docker exec -it bookstack nginx -s reload

Done, now your Bookstack instance should work at the new url https://newsite.domain.tld/bookstack
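On the Apache side, the reverse proxy stanza for the new context can be as simple as this (a hypothetical sketch, assuming mod_proxy and mod_proxy_http are enabled and the container is published on port 6875 of the same host; adjust the port to your compose mapping):

```apache
# Inside the https://newsite.domain.tld virtualhost
ProxyPreserveHost On
ProxyPass        /bookstack http://127.0.0.1:6875/bookstack
ProxyPassReverse /bookstack http://127.0.0.1:6875/bookstack
```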
