Run Systemd Script Before System Shutdown

For no particular reason I decided to retain the FastCGI cache (NGINX) of sysinfo.io and have it persist across system reboots. To achieve this, I needed to create a shell script, define a new systemd service unit, and find a way to run the script before system shutdown (reboots included, via the “Before=” declaration). This website stores its NGINX cache in a tmpfs file system, a special Linux in-memory file system that is typically mounted at a directory such as /run or /tmp. When the system restarts or crashes, the tmpfs file system is lost along with all of its data. If you didn’t already know that, this article may be slightly too advanced. But then again, there was a time when I didn’t know what :%s/\/dev\/null 2>&1/\/dev\/null/g meant in vi. If you agree those regex escape sequences/delimiters are stupid-ugly, look at my other post Ubuntu 16.04 Move Docker Root.
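For context, NGINX decides where the FastCGI cache lives via the fastcgi_cache_path directive. A quick way to confirm yours points at tmpfs (the directive shown in the comment is a hypothetical example; your zone name and sizes will differ):

grep -R "fastcgi_cache_path" /etc/nginx/
# Expect a directive along these lines (hypothetical values):
# fastcgi_cache_path /run/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;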

The following was run directly on the front-end web server, which is currently running Ubuntu 16.04.6 LTS with kernel version 4.4.0.

root@nginx03:~# df -hT
Filesystem                 Type      Size  Used Avail Use% Mounted on
udev                       devtmpfs  476M     0  476M   0% /dev
tmpfs                      tmpfs     249M   75M  174M  30% /run
/dev/mapper/vg--group-root ext4       28G   24G  3.7G  87% /
tmpfs                      tmpfs     497M     0  497M   0% /dev/shm
tmpfs                      tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                      tmpfs     497M     0  497M   0% /sys/fs/cgroup
/dev/loop0                 squashfs   92M   92M     0 100% /snap/core/6531
/dev/loop2                 squashfs  7.5M  7.5M     0 100% /snap/canonical-livepatch/71
/dev/loop3                 squashfs  7.5M  7.5M     0 100% /snap/canonical-livepatch/69
/dev/loop4                 squashfs   90M   90M     0 100% /snap/core/6673
/dev/loop5                 squashfs   91M   91M     0 100% /snap/core/6405
/dev/sda1                  ext2      472M  108M  340M  25% /boot
/dev/loop6                 squashfs  7.5M  7.5M     0 100% /snap/canonical-livepatch/74
tmpfs                      tmpfs     100M     0  100M   0% /run/user/0
tmpfs                      tmpfs     100M     0  100M   0% /run/user/65534

As you can see, tmpfs is mounted at /run on this machine. Normally /run is capped at about 100MB; I had already increased its size because I planned long ago to run the NGINX FastCGI cache in-memory. The approach I took was to modify the /run entry in /etc/fstab, changing the size parameter to a percentage value of 25 like so:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/vg--group-root /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
UUID=2244b358-1786-49cd-b539-13600632dfa8 /boot           ext2    defaults        0       2
/dev/mapper/vg--group-swap_1 none            swap    sw              0       0
//ad.sysinfo.io/Public/DFS /var/www/sysinfo.io/dfs cifs credentials=/root/.smb-visualblind-ro,uid=www-data,gid=www-data,ro,vers=1.0,uid=33,forceuid,gid=33,forcegid,file_mode=0444,dir_mode=0555 0 0
//ad.sysinfo.io/Public/DFS/visualblind/Documents/Scripts /var/www/sysinfo.io/scripts cifs credentials=/root/.smb-visualblind-ro,uid=www-data,gid=www-data,ro,vers=1.0,uid=33,forceuid,gid=33,forcegid,file_mode=0444,dir_mode=0555 0 0
//ad.sysinfo.io/Public/DFS/FTP /mnt/windows/ftp-rw cifs credentials=/root/.smb-visualblind-rw,uid=www-data,gid=www-data,rw,vers=1.0,uid=33,forceuid,gid=33,forcegid,file_mode=0744,dir_mode=0755 0 0
tmpfs   /run    tmpfs   rw,nosuid,noexec,relatime,mode=755,size=25%
freenas:/mnt/pool1/Dataset1     /mnt/pool1/Dataset1     nfs     auto    0       0
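After editing /etc/fstab you don't have to wait for a reboot; tmpfs accepts a size change via remount, and you can confirm the new size immediately:

mount -o remount,size=25% /run
df -h /run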

My server setup isn’t great. I’ve done my best to stretch its computational/memory/hybrid SSD/magnetic HDD subsystems to their limits. It’s actually a Frankenstein machine with OEM desktop hardware running ESXi 6.7: an ITX form-factor eVGA mobo, a 6th-gen i5 (so no Hyper-Threading), 2x 5TB HDD, 2x 10TB HDD, 2x SSD (virtual flash read cache, host cache, FreeNAS read cache), an LSI Logic SAS/SATA PCIe HBA (IT firmware via SAS2Flash), and I’ve stuffed the only two DDR4 slots it has with 32GB.

Systemd Definitions

/lib/systemd/system/nginx-cache-backupd.service

[Unit]
Description=Save nginx-cache at reboot
# No default dependencies, so systemd doesn't stop this unit early in the shutdown sequence
DefaultDependencies=no
#After=final.target
# Order this unit before the shutdown/reboot/halt targets so it runs while /run is still mounted
Before=shutdown.target reboot.target halt.target

[Service]
Type=oneshot
#RemainAfterExit=yes
ExecStart=/usr/local/bin/nginx-cache-backup.sh start
ExecStop=/usr/local/bin/nginx-cache-backup.sh stop
ExecReload=/usr/local/bin/nginx-cache-backup.sh reload

[Install]
# final.target is reached late in the shutdown sequence, so ExecStart fires just before the system goes down
WantedBy=final.target

Once this is in place:

systemctl daemon-reload
systemctl enable nginx-cache-backupd.service

When you enable the service, it creates a symlink at /etc/systemd/system/final.target.wants/<ServiceName>.service.
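You can sanity-check both the symlink and the unit's ordering; given the unit file above, the Before= property should list the shutdown/reboot/halt targets:

ls -l /etc/systemd/system/final.target.wants/
systemctl show -p Before,WantedBy nginx-cache-backupd.service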

Shell Script

/usr/local/bin/nginx-cache-backup.sh
#!/bin/bash

# Persistent copy on disk, and the live FastCGI cache in tmpfs
NGINXCACHEROOT="/root/nginx-cache"
NGINXCACHETMPFS="/run/nginx-cache"
PIDFILE="/tmp/nginx-cache-backupd.pid"

function d_start()
{
        echo "nginx-cache-backupd: starting service"
#       nginx-cache-backupd --pidfile=$PIDFILE

        # Refresh the on-disk copy of the tmpfs cache
        if [ -d "$NGINXCACHEROOT" ]; then
                /bin/rm -r "$NGINXCACHEROOT"
        fi
        /bin/cp -R "$NGINXCACHETMPFS" "$NGINXCACHEROOT"

logger --journald << EOF
MESSAGE_ID=67feb6ffbaf24c5cbec13c008dd72304
MESSAGE=Logging syslog entry upon ExecStart
SYSTEMD_UNIT="nginx-cache-backupd.service"
EOF

        echo "PID is $(cat "$PIDFILE")"
}

function d_stop()
{
        echo "nginx-cache-backupd: stopping service (PID = $(cat "$PIDFILE"))"
        # Only kill if the PID file actually contains a PID
        [ -s "$PIDFILE" ] && kill "$(cat "$PIDFILE")"
        rm -f "$PIDFILE"

        # Same backup logic as d_start: refresh the on-disk copy of the cache
        if [ -d "$NGINXCACHEROOT" ]; then
                /bin/rm -r "$NGINXCACHEROOT"
        fi
        /bin/cp -R "$NGINXCACHETMPFS" "$NGINXCACHEROOT"

logger --journald << EOF
MESSAGE_ID=67feb6ffbaf24c5cbec13c008dd72391
MESSAGE=Logging syslog entry upon ExecStop
SYSTEMD_UNIT="nginx-cache-backupd.service"
EOF
}

function d_status()
{
        ps -ef | grep nginx-cache-backupd | grep -v grep
        echo "PID in PID file: $(cat "$PIDFILE" 2>/dev/null)"
        echo "Size of $NGINXCACHETMPFS directory: $(du -sh "$NGINXCACHETMPFS")"
        echo "Size of $NGINXCACHEROOT directory: $(du -sh "$NGINXCACHEROOT")"
}

# Runs on every invocation
touch "$PIDFILE"

# Management instructions for the service
case "$1" in
        start)
                d_start
                ;;
        stop)
                d_stop
                ;;
        restart|reload)
                d_stop
                sleep 1
                d_start
                ;;
        status)
                d_status
                ;;
        *)
                echo "Usage: $0 {start|stop|restart|reload|status}"
                exit 1
                ;;
esac

exit 0
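
Don’t forget to chmod +x the script and test out its case clauses by invoking it with status, stop, and start:

chmod +x /usr/local/bin/nginx-cache-backup.sh
/usr/local/bin/nginx-cache-backup.sh status
/usr/local/bin/nginx-cache-backup.sh start
/usr/local/bin/nginx-cache-backup.sh stop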

I verified the script did execute just prior to the reboot or shutdown sequence by implementing very rudimentary journal logging via logger:

logger --journald << EOF
MESSAGE_ID=67feb6ffbaf24c5cbec13c008dd72304
MESSAGE=Logging syslog entry upon ExecStart
SYSTEMD_UNIT="nginx-cache-backupd.service"
EOF

The start and stop sequences have different message IDs so that later I can identify WTF happened in the OS journal.
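Since journald indexes these fields, you can pull those entries back out later; for example, to find every start-sequence entry:

journalctl MESSAGE_ID=67feb6ffbaf24c5cbec13c008dd72304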

However, I must admit all of this effort was ultimately in vain, because I did not research first whether NGINX keeps any persistent internal record of the index of cached files. It turns out that, at least in my testing, it does not: backing up the cache just prior to the reboot sequence, moving it back to its original location at the next boot, and starting nginx.service does not let NGINX simply start up with a pre-generated FastCGI/PHP cache. Hopefully my wasted time and effort is to your advantage. It's just too bad that the only way to increase your skill set in the professional IT arena is by actually doing, failing, and trying again. The point is, never give up.


References:

https://www.ubuntudoc.com/how-to-create-new-service-with-systemd/
https://askubuntu.com/questions/51145/how-to-run-a-command-before-the-machine-automatically-shuts-down
https://bash.cyberciti.biz/guide//etc/init.d
https://unix.stackexchange.com/questions/39226/how-to-run-a-script-with-systemd-right-before-shutdown
https://www.linode.com/docs/quick-answers/linux/start-service-at-boot/
https://www.cyberciti.biz/faq/linux-unix-sysvinit-services-restart-vs-reload-vs-condrestart/
