15/01/2026

Amazon Linux 2023 is the devil and you should not use it

I’ve always been a Red Hat boy: the first GNU/Linux distro I seriously used was Red Hat Linux 5.0 (please note I’m not talking about Red Hat Enterprise Linux, I’m talking about the old Red Hat Linux distribution, before RHEL was born) and since then I’ve always tried to stick to the Red Hat side of the Linux world. Even now, if I have to choose a distro, I’ll choose Rocky Linux over Debian or Ubuntu.

When I started working on AWS many years ago I tried Amazon Linux 2, and it was good: more or less it was like CentOS 7 and everything was OK… then came Amazon Linux 2023.

I didn’t choose it, someone else did and passed the instance to me, and from the beginning something was not right…

You can’t use EPEL…

You can’t find a lot of RHEL/CentOS/Fedora/Rocky/Alma packages on it…

You can’t even run dnf-automatic or yum-cron to automatically install updates on a scheduled basis… updates are shipped in batches as new releases and you have to upgrade the whole OS with all the updated packages… manually.
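For reference, this is roughly what that batch upgrade workflow looks like on Amazon Linux 2023 (the release label below is only an example; use the one reported by the first command):

```
# check whether a newer AL2023 release is available
sudo dnf check-release-update

# upgrade the whole OS to a specific release label
sudo dnf upgrade --releasever=<RELEASE LABEL FROM ABOVE>
```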

And today I found that even damn flippin’ cron does not work out of the box, you have to install it manually with

sudo yum install cronie -y
sudo systemctl enable --now crond.service

What on earth?!?!?!?!

No, seriously: if you have to choose a RHEL-based GNU/Linux distribution choose Fedora, CentOS Stream, Rocky Linux, Alma Linux, Oracle Linux, but DO NOT CHOOSE AMAZON LINUX 2023…

Amazon Linux 2023 is the devil of GNU/Linux distros, probably one of the worst distros ever made.

15/11/2025

Broadcom stupidity and a simple Tomcat setup

For those who live under a rock: during the last year one company has emerged as the new villain in town, and this company is Broadcom.

Among the various stupid things Broadcom management did, they recently screwed up a very nice project by VMware (which is now part of Broadcom) called Bitnami.

The Bitnami project essentially produced some very cool and well-made container images, with tons of useful software and services and a nice, clean setup and documentation. Basically, the stupid monkeys in charge of Broadcom management decided to introduce more and more restrictions on Bitnami images to push people to pay subscriptions for images that had been free until then… and please notice that those images are based on software and services which are totally free, so Broadcom is charging a fee on something they get for free.

They have to rot in hell…

Anyway, back to our topic; one of the Bitnami images I used a lot is the Apache Tomcat servlet container one, and I love Tomcat. It’s light, it’s powerful, it’s one of the pillars of the IT industry, way better than many enterprise Java application servers.

So, because of Broadcom stupidity, I started to get rid of my beloved Bitnami image and went back to the official Tomcat Docker image.

Here’s a quick list of commands to set up a simple Tomcat server with Docker containers, with the Tomcat manager application and 1 GB of heap memory.

In this setup I used Tomcat 11.0.10; feel free to change the tag to whatever version of Tomcat you prefer, check the official Tomcat page on Docker Hub for your desired tag.

mkdir -p /data/docker/tomcat ; cd /data/docker/tomcat
# run a throwaway container just to copy the default webapps and config out of it
docker run -d --rm --name tomcat tomcat:11.0.10
docker cp -a tomcat:/usr/local/tomcat/webapps.dist ./webapps
docker cp tomcat:/usr/local/tomcat/conf/tomcat-users.xml .
docker stop tomcat
# drop the default docs, examples and samples
rm -rf webapps/docs webapps/examples webapps/sample
# remove the closing </tomcat-users> tag, add a manager user, close the tag again
sed -i '$d' tomcat-users.xml
echo '<role rolename="manager-gui"/>' >> tomcat-users.xml
echo '<user username="tomcat-admin" password="CHANGEME" roles="manager-gui"/>' >> tomcat-users.xml
echo '</tomcat-users>' >> tomcat-users.xml
# by default the manager app is reachable only from localhost:
# relax the RemoteAddrValve regex to allow any IP
sed -i 's/127/\\d+/g' webapps/manager/META-INF/context.xml

cat << 'EOF' > docker-compose.yaml
services:
  web:
    image: tomcat:11.0.10
    container_name: tomcat
    ports:
      - "8080:8080"
    environment:
      - JAVA_OPTS=-Xms1024m -Xmx1024m
    volumes:
      - ./webapps:/usr/local/tomcat/webapps
      - ./tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml:ro
    restart: unless-stopped
EOF

docker compose up -d

That’s all

I know the pros will argue that I should build an image with my tomcat-users.xml file and the modified context.xml file, but I hate building images for no reason or when I have simpler and cleaner alternatives.

And by the way, obviously you should not expose the HTTP connector to the web: use a simple reverse proxy with Apache httpd or Nginx to publish ONLY your application contexts.
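As an example, a minimal Nginx virtual host that publishes only a hypothetical /myapp context could look like this (server name, certificate paths and context name are all placeholders):

```
server {
    listen 443 ssl;
    server_name tomcat.example.com;

    ssl_certificate     /etc/nginx/ssl/tomcat.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/tomcat.example.com.key;

    # publish ONLY the application context, nothing else
    location /myapp/ {
        proxy_pass http://127.0.0.1:8080/myapp/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

This way the manager application and everything else stay reachable only from inside, while the web only sees your application.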

26/10/2025

Kubernetes is simple… or not

It’s been years since I started working with K8s, I lost count of how many clusters I’ve worked on… and really I don’t understand why people are so mad at it, it’s really simple…

You can run services on K8s… except they are not what you think.

Your services in reality run on pods… which are not containers, but sandboxes that contain containers…

And inside each pod you can have as many containers as you want… but not all of them are used to run services: some are sidecars, others are temporary containers, others run services…

And you should not create pods and the containers inside them yourself, you’re supposed to create other objects called deployments and statefulsets… which create yet other objects called replicasets… which create your pods… which have your containers inside…
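To give an idea of the layering, here is a minimal Deployment manifest (names and image are just placeholders); applying it creates a ReplicaSet, which in turn creates the pods with your containers inside:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: nginx:1.27
          ports:
            - containerPort: 80
```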

And deployments and statefulsets are similar in some ways and different in others: you’re supposed to use statefulsets if you want to run services with persistent data and deployments if you are running “ephemeral” services… however you can run deployments with persistent data anyway…

Deployments can be configured to run several instances of your pod and scale your applications… and statefulsets can do the same… except you may end up screwing everything up because you create a split-brain architecture…

But let’s get back to services, K8s services, which are not services but network endpoints that balance requests across your pods… but there are also ingresses that can do the same… but you must use services in some cases and ingresses in others, and use specific types of services for specific use cases…

And as I said you can persist data with persistent volumes… which you can’t manage yourself, because you need persistent volume claims… which can do nice things like deleting your data without any warning, because they depend on storage classes… which are some sort of driver to use some kind of storage to save your data, but also dictate how storage must be used and which policies and rules have to be followed while interacting with your data and storage…

And then you have backups… except nobody gives a shit about backups in the K8s world… so you don’t have any backup measure or strategy out of the box and you have to rely on third party tools you have to trust…

And you can’t even mount your persistent volume on your PC and do a cold backup: if you start your containers your applications will access the data and that would be an inconsistent hot backup, and if you kill your application processes the pod ceases to exist and you can’t access your data… so you have to create a temporary container inside a temporary pod which mounts the persistent volume while your application pod is down, to manually create a cold backup… isn’t it simple?
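That temporary-pod trick can be sketched like this (deployment and PVC names are hypothetical, and the application must be scaled down first so nothing touches the data):

```
# scale the application down so the volume is free
kubectl scale deployment myapp --replicas=0

# temporary pod that mounts the same PVC (here called myapp-data)
kubectl apply -f - << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: backup-pod
spec:
  containers:
    - name: backup
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myapp-data
EOF

# copy the data out, then clean up and scale the app back up
kubectl cp backup-pod:/data ./backup
kubectl delete pod backup-pod
kubectl scale deployment myapp --replicas=1
```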

So yes, Kubernetes is simple

13/10/2025

Update Windows applications with winget

Here’s a quick tip to update your Windows applications using Winget.

If you don’t know this tool, winget is the Windows Package Manager, available since Windows 10 1809 (build 17763). It works like a charm and can make your life waaaaaaayyy easier when keeping your system updated.

Basically to update all your Windows applications available through winget you have to open a command prompt as Administrator and launch

winget upgrade --all --silent --accept-source-agreements --accept-package-agreements

If you want to keep some of your applications pinned to a specific version and avoid updating them, use

winget pin add <Vendor.Packagename> --blocking

To see the list of pinned packages use

winget pin list

To see the list of your packages use

winget list

08/03/2025

New Nagios project

I love Nagios. It’s the first monitoring service I started using and I have loved it since the beginning.

I love it because, no matter what people say, it’s simple, it’s reliable, it’s predictable, and it’s easy to reproduce every check Nagios is doing and verify the results yourself.

Over the years I tried several other tools, but I always found them inferior to Nagios: they were too bulky, too chaotic and too disorganized.

Since last autumn I was finally able to get back to my good old Nagios at work: I upgraded it and expanded it to check new services, check the old ones in a better way and make the setup easier and easier.

I would like to start a new project to share how I use it, which checks I do, and hopefully show someone that the good old Nagios still has some arrows to shoot :)

Let’s start with a check I love and have never found in any other monitoring software: check_reboot_required

This check warns you if there’s a new kernel on your GNU/Linux machine that requires a reboot. It basically extracts the running kernel version and the last kernel installed on the server; if they don’t match and the running kernel is older, it raises a warning alert. Very simple and very effective to keep your servers updated and secure.
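The core idea can be sketched like this (a simplified illustration, not the actual plugin; the kernel versions are hardcoded examples, in reality they would come from uname -r and the package manager):

```shell
# running kernel (normally: uname -r)
running="5.14.0-362.8.1.el9_3"
# newest installed kernel (normally from: rpm -q kernel | sort -V | tail -1)
latest="5.14.0-427.13.1.el9_4"

# sort -V compares version strings; the newest one ends up last
newest=$(printf '%s\n%s\n' "$running" "$latest" | sort -V | tail -1)

if [ "$running" != "$newest" ]; then
  echo "WARNING: reboot required (running $running, installed $latest)"
else
  echo "OK: running the latest installed kernel ($running)"
fi
```

The real plugin also sets the proper Nagios exit codes (0 for OK, 1 for WARNING) so NRPE can report the state.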

Obviously this check is very important if you constantly run automatic updates (via yum-cron, dnf-automatic or unattended-upgrades).

The check was written by Johan Ryberg and you can find it in his GitHub repo. It works perfectly fine on any Red Hat based distribution and also on any Debian based distribution; I suggested a small change to make it work with Amazon Linux 2 as well.

To use it you only have to

1. place the check_reboot_required file on the server you want to monitor, in the Nagios plugins directory (/usr/lib64/nagios/plugins on RHEL based distributions or /usr/lib/nagios/plugins on Debian based distributions)

2. add this simple command to the NRPE config file (/etc/nagios/nrpe.cfg) and restart the nrpe service.

command[check_reboot]=<NAGIOS PLUGINS PATH>/check_reboot_required -s $ARG1$

3. add the “Reboot required” service to your host in the Nagios server configuration

define service{
    use                 generic-service
    host_name           server.domain.tld
    service_description Reboot required
    check_command       check_nrpe!check_reboot!w
}

4. restart Nagios and enjoy
