Migrating to a static site
12 July 2021

I’ve long authored this blog in WordPress because I’ve found the interface homely and easy to maintain. However, with the advent of cheap external devices like the Raspberry Pi, I wanted to move to a static site and host the website via CDN at my own dynamic IP address, eschewing expensive hosting solutions. The end goal was to move to a more self-rolled knowledge base website where I could store posts in a filesystem hierarchy rather than in a database.

Here’s how I migrated.

Proof of concept post

Generating a test blog post

I didn’t want to write pure HTML, as I felt it led to a bit of an unstructured format that couldn’t be parsed later or used for heavy cross-linking, so I explored some options for authoring the files.

reStructuredText

First I considered reStructuredText. This is a markup format that was originally created to write Python docs and later, due to its flexibility, became popular for authoring other types of documents. Here’s what a basic *.rst file looks like.


==================
Welcome to my blog
==================

Here's a *few* pointers to get you started.

- My main blog: `plkt.io <https://plkt.io>`_.
- My preferred search engine: `DuckDuckGo <https://ddg.co>`_.

It can be converted into HTML with pandoc.

$ pandoc --tab-stop 2 -f rst -t html sample.rst

<h1 id="welcome-to-my-blog">Welcome to my blog</h1>
Here's a <em>few</em> pointers to get you started.
<ul>
<li>My main blog: <a href="https://plkt.io">plkt.io</a>.</li>
<li>My preferred search engine: <a href="https://ddg.co">DuckDuckGo</a>.</li>
</ul>

A very clear and straightforward way to write blog posts in the terminal, but I felt I wasn’t gaining much over writing directly in the WordPress text editor because there wasn’t any semantic markup.

DocBook

Next, I looked at DocBook, which is heavily used as an authoring format for books and technical documents. As that is the primary content on this blog, it seemed worth exploring. At first, I saw many old examples online of how to author a simple document and was immediately horrified by the reference to the Document Type Definition (DTD) that harks back to the old HTML+XML days.


<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC '-//OASIS//DTD DocBook XML V4.5//EN'
'http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd'>
<article lang="en">
<title>Sample article</title>
<para>This is a very short article.</para>
</article>

Fortunately, as of the latest DocBook 5.0 standard, this arcane incantation is no longer required, and what we have today is something like the following; notice the xmlns portion of the article element. As a side note, I found out that the Linux kernel documentation has started migrating away from DocBook to Sphinx + reStructuredText. Read more about this here.


<?xml version="1.0" encoding="utf-8"?>
<article xmlns="http://docbook.org/ns/docbook" version="5.0" xml:lang="en">
<title>Sample article</title>
<para>This is a very short article.</para>
</article>

So, after spending about 30 minutes with the documentation, I managed to rewrite the *.rst example above into the following.


<?xml version="1.0" encoding="utf-8"?>
<article xmlns="http://docbook.org/ns/docbook" version="5.0" xml:lang="en"
         xmlns:xlink="http://www.w3.org/1999/xlink">
  <info>
    <title>Welcome to my blog</title>
  </info>
  <section>
    <title>Welcome to my blog</title>
    <para>Here's a <emphasis>few</emphasis> pointers to get you started.</para>
    <itemizedlist mark='dash'>
      <listitem>
        <para>My main blog <link xlink:href="https://plkt.io">plkt.io</link></para>
      </listitem>
      <listitem>
        <para>My preferred search engine <link xlink:href="https://ddg.co">DuckDuckGo</link></para>
      </listitem>
    </itemizedlist>
  </section>
</article>

Quite a bit more verbose and a bit of a pain to type in Vim, although omni-completion (C-X C-O) helped a lot with closing tags. I can understand a bit better why reStructuredText, though not as structured as this, lowers the initial hump to getting started and so will likely result in more up-to-date documentation.

Selecting a winner

From the explorations above, I settled on reStructuredText as the markup format. However, before fully embracing it, I needed to land on the structure of the site. I had decided that the content itself would serve as the structure rather than living within the structure. Put another way, one would read my content to determine the taxonomy rather than have the taxonomy define the content. This meant that hyperlinking would be very important (and very manual), so I would need to make a stronger effort to keep related documents connected to each other.

Organisation

I envisioned my static site eventually converging on something resembling a MediaWiki site.

The structure would be like this:

This seemed like a good first pass for the organisation and something that I could change in the future without too much effort.

Creating the base site

I created the base directory structure and started defining the metadata for the site. As I was using pandoc, this would be split across three files: the template file for the HTML generator, the metadata file to contain variables across the entire site, and the YAML metadata at the top of each *.rst file that contained file-specific references.

As pandoc doesn’t support YAML headers in *.rst files (yet, see here), I’m storing all the YAML metadata in a sidecar file called *.yaml for each post. While it’s not ideal, it’s simple and maintainable in the makefile.
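To make this concrete, here’s a sketch of what one post’s sidecar file might contain. The field names are illustrative rather than required; anything the template references can go here.

# sample-post.yaml (illustrative; the fields depend on your template)
title: Migrating to a static site
date: 12 July 2021
keywords: [pandoc, rst, static-site]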

Makefile

It took me a couple of hours to walk through the documentation and build this makefile. However, I feel that as this project becomes more complicated, the effort will pay off. Here’s what I came up with as a basis.


# Makefile for plkt.io

VERSION = 0.1

PANDOC := pandoc
FIND := find
PANDOCOPTS := --tab-stop 2 -s --template ./template.txt -f rst -t html -M "lang: en" -M "css: ./style.css" -B header.html -A footer.html

# Note that $(shell <>) replaces newlines with spaces.

DIR := ./
src := $(shell $(FIND) $(DIR) -name "*.rst") # TODO: Do this using make rules.
targets_html := $(src:.rst=.html)

%.html: %.rst
	@echo "Compiling" $<
	@$(PANDOC) $(PANDOCOPTS) --metadata-file=$(basename $<).yaml $< > $@

all: build

build: $(targets_html)

# Not yet implemented. Supposed to build the site and tar it up for distribution.
dist: clean build
	mkdir -p plkt-$(VERSION)
	for html in $(targets_html); do \
		mv $$html plkt-$(VERSION)/; \
	done
	tar -cf plkt-$(VERSION).tar plkt-$(VERSION)
	gzip plkt-$(VERSION).tar
	rm -rf plkt-$(VERSION)

.PHONY: clean
clean:
	@for html in $(targets_html); do \
		echo "Cleaning" $$html; \
		rm -f $$html; \
	done

Headers and footers

Each file would need a header and footer to maintain some visual consistency across the site. To do this, I referenced the header with the -B flag and the footer with the -A flag.
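For illustration, those two fragments might look something like this; the markup is hypothetical, and anything pandoc can splice in before and after the body will do.

<!-- header.html, included via -B -->
<header>
  <a href="/">plkt.io</a>
</header>

<!-- footer.html, included via -A -->
<footer>
  <p>© plkt.io</p>
</footer>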

Setting up Apache

For the proof of concept, I opted not to use containerisation and instead just moved the *.html and *.css files into the /var/www/html directory. Viewing the website at http://localhost:80 worked admirably.

Migrating the post history

Exporting the posts from WordPress was a bit tricky. I first tried to use pandoc’s automatic conversion functionality, but then I realised I’d be converting twice: download the HTML, convert it to reStructuredText, and then convert it back to HTML.


pandoc -f html -t rst https://plkt.io/2019/11/30/returning-to-wordpress/

I then landed upon a better method. I modified the template for my website by removing the header, footer, and post listing, visited each page individually, and saved them using Firefox. This took about 15 minutes (probably less time than automating it). Then I shoved the posts into an "archived posts" category that I would move bit by bit into the reStructuredText format.

Summary

This exercise taught me a lot about data storage formats. What is the right way to store my post history? Should it be a format that separates the presentation from the data, or should each post stand the test of time as its own standalone file? I’m starting to lean towards the latter. I think it’s possible to have the best of both, if scoped properly, by having an "archive" section of your site. So go ahead and export that page and leave it up for eternity. Normal visitors can view your normal site with the latest formatting, but patrons of antiquity can learn more about how the cake is made.

This will be the last post written using WordPress. The next post on this site will be generated using pandoc and Vim :).

My favourite tweaks for Firefox
8 July 2021

I’ve been using Firefox for the better part of 15 years, and over that time I’ve collected a few particular preferences for how I use the application. I’ve grouped them below and tried, where possible, to make them applicable to the latest available versions.

userChrome.css

In this file you can configure modifications to the stylesheet used to render the Firefox UI. It’s stored in your profile directory in a directory called chrome.

Before you create the file, go ahead and enable support for the file in about:config by setting the toolkit.legacyUserProfileCustomizations.stylesheets boolean preference to true.

In this file, I’ve configured the following:


@namespace url("http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"); /* only needed once */

/* Pulled from https://support.mozilla.org/de/questions/1239330 and sets the minimum width of pinned tabs */
.tabbrowser-tab[pinned] {width: 45px !important;}
.tab-label-container[pinned] {visibility: hidden !important;}

/* Pulled from https://github.com/piroor/treestyletab/wiki/Code-snippets-for-custom-style-rules#for-userchromecss and hides the top tab bar */
#tabbrowser-tabs {
  visibility: collapse !important;
}

/* Pulled from above website. This removes the large sidebar header specifically for Tree Style Tabs */
#sidebar-box[sidebarcommand="treestyletab_piro_sakura_ne_jp-sidebar-action"] #sidebar-header {
  display: none;
}

/* Make that tab listing a bit smaller */
#sidebar {
  min-width: 100px !important;
}

Compact density

I took this one straight from userChrome.org, so you can follow along there. In a nutshell, you can edit the browser.uidensity setting in about:config and set it to one of the following options:

  • 0 – normal density
  • 1 – compact density (my selection)
  • 2 – touch density

Along with the above (while I still can), I’ve turned off the new Proton UI by setting browser.proton.enabled to false. I imagine at some point this won’t work, so I’ve also configured the backup option, which reduces the spacing between menu items, by setting browser.proton.contextmenus.enabled to false.

Further settings in about:config.

  • browser.tabs.closeTabByDblclick - true - close tabs by double clicking on them
  • browser.tabs.insertAfterCurrent - true - always open a tab to the right of the active tab
  • browser.tabs.tabMinWidth - 94 - set the minimum tab width to 94 pixels
  • widget.content.allow-gtk-dark-theme - true - enable custom gtk dark themes
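If you’d rather keep these under version control than click through about:config, the same preferences can also be set in a user.js file in the profile directory; Firefox applies user_pref() lines from it on each start. A minimal sketch:

// user.js in the Firefox profile directory
user_pref("browser.tabs.closeTabByDblclick", true);
user_pref("browser.tabs.insertAfterCurrent", true);
user_pref("browser.tabs.tabMinWidth", 94);
user_pref("widget.content.allow-gtk-dark-theme", true);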

Quickly gathering pulseaudio diagnostics
2 July 2021

Recently while using KDE Neon I found pulseaudio constantly crashing and relaunching during Zoom calls.  Unfortunately, in most instances this meant the audio stream was lost and I had to refresh the tab, interrupting my meeting.

To diagnose this I had to turn on more detailed logging for pulseaudio. This is pretty simple to do with systemd as it’s managing a user instance of pulseaudio. First, check the ExecStart line of the existing service.

systemctl --user cat pulseaudio.service

ExecStart=/usr/bin/pulseaudio --daemonize=no --log-target=journal

Add the new ExecStart line in an override file. The first, empty ExecStart resets the variable.

systemctl --user edit pulseaudio.service

ExecStart=

ExecStart=/usr/bin/pulseaudio --daemonize=no --log-target=journal --log-level=info

Check the logs using journalctl next time the crash happens.

journalctl --user --reverse --unit=pulseaudio.service

I found the following line that I think was causing the problem:

Jul 02 09:26:02 light pulseaudio[27653]: Source alsa_input.usb-AKM_AK5371-00.analog-stereo idle for too long, suspending ...

To resolve this I commented out the suspend-on-idle module in /etc/pulse/default.pa

# load-module module-suspend-on-idle

This resolved the issue. I emptied out the override file to stop the extra logging from happening.

Best btrfs + luks configuration for longevity with new hard drives
29 June 2021

I’m a big fan of btrfs as the filesystem of choice, not only for backups but also for the root filesystem. However, as I move around with my backups, sometimes I want to restore them to different machines, and I always hesitate when faced with the prospect of formatting, encrypting, partitioning, and mounting a new drive. I wanted to document how I do it so I can have some consistency as I move from hard drive to hard drive, SSD to SSD.

Selecting

In general, when I purchase hard drives I purchase two of the same kind. I don’t usually like to go smaller than 2TB, and recently I’ve been harbouring a certain liking for 4TB hard drives when I can get my hands on them. I’m a big fan of Western Digital and also of Seagate, especially at these larger capacities. You can view the WD Red series prices here (UK).

BackBlaze has a great website where they list the failure rates of their hard drives (link) so you can start here to prime your instincts.

Formatting

This will take a while. Just like onboarding new RAM with memtest86+, I’ll test my hard drives for bad blocks with, funnily enough, the badblocks tool. Arch Linux has a great wiki article about it here. Here are my default steps for new drives.

Run the smartctl self-tests.

smartctl -t short /dev/sda

smartctl -t conveyance /dev/sda # Used to check that the drive wasn’t damaged in transportation

And view the results

smartctl -A /dev/sda

Next, run the badblocks command.  You can speed this up by figuring out the block size of your hard drive.

hdparm -I /dev/sda | grep Sector # Get the block size, usually around 4096

badblocks -b 4096 -ws -o badblocks-dev-sda.txt /dev/sda

From this mailing list post (link), it seems that modern drives will automatically notice a bad block and re-allocate it to another location. So, before cursing the wind if you get a bad block, it’s probably valuable to run the test again and see if the error heals itself.
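One way to check whether a re-allocation has happened is to look at the relevant SMART attributes before and after the second run; non-zero values here mean the drive has been quietly re-mapping sectors.

smartctl -A /dev/sda | grep -i -e reallocated -e pending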

At the expense of write speed, you can use hdparm to turn on the Write-Read-Verify support.  This should also result in a lower likelihood of being bitten by writing corrupted data.
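As a sketch, hdparm exposes this through its --write-read-verify flag on drives that support the feature; the mode argument varies by drive and hdparm version, so check man hdparm before running it.

hdparm --write-read-verify 2 /dev/sda # ask the drive to verify writes, where supported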

Finally, we can use the wipefs tool to make sure there are no filesystem signatures remaining.

wipefs -a /dev/sda

As shown in the encrypting section, if you really want to wipe the drive you can open the drive in raw dm-crypt mode and wipe it.

cryptsetup open --type plain -d /dev/urandom /dev/sda sda_luks

Partitioning

Usually, there’s no need to do anything fancy here.  If the drive is a root drive, I add an EFI partition, a swap partition, a root partition, and leave a small amount of space at the end of the hard drive unpartitioned.  The encryption step will be on the root partition (typically /dev/mapper/sda_luks3).  If it’s a data drive, I typically encrypt the entire drive and then make a single partition to use for the data.

parted /dev/sda

mklabel gpt

mkpart "EFI system partition" fat32 1MiB 301MiB

set 1 esp on

mkpart "EFI system partition" btrfs 301MiB -40960MiB

mkpart "Swap space" linux-swap -40959MiB -8192MiB

Encrypting

Depending on whether the drive is a root drive or a data drive, the order of this step and the partitioning step can be swapped, especially if you need an EFI partition.

I’ve been using LUKS (Linux Unified Key Setup) for over a decade and I’ve found it rock-solid for hard drive encryption. My settings of choice have evolved over the years and today I use a very modern set of options. Cryptsetup, the administration interface to dm-crypt and LUKS, has a lot of options that let you really (shoot yourself in the foot) customise the configuration. In general, for data drives I use keyfiles instead of passphrases and store an encrypted copy of the keyfile in a different building (virtual or otherwise) than the one in which the drive lives. For root drives, I use a passphrase and store a copy of the LUKS header in a different building.
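As an example of that keyfile handling, a symmetrically encrypted copy for off-site storage can be produced with gpg; the filename is just illustrative.

gpg --symmetric --cipher-algo AES256 sda-keyfile # writes sda-keyfile.gpg for storage elsewhere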

More recently, the dm-integrity (GitLab; Kernel; presentation) component adds even more assurance that your data is not silently becoming corrupted, and I’ve started to enable that as well. It’s noted as experimental, so I use a very minor configuration that hasn’t (touch wood) caused problems to date.

One quick thing to note before diving in: it’s important to establish your threat model when using encryption. For me, I’m simply trying to avoid any hiccups in the event that a drive is stolen.

First, I encrypt the drive using luksFormat. I add a few custom options for my particular risk tolerance of encryption strength. If speed is a concern, you can first run cryptsetup benchmark and see which algorithms are most performant on your setup. On my machine, argon2id + XTS was the best combination for making brute-forcing a key expensive and for performance when reading and writing encrypted data.

If you’re really paranoid, as per the cryptsetup FAQ (GitLab), you can overwrite the hard drive with random data before getting started.

cryptsetup open --type plain -d /dev/urandom /dev/sda sda_luks

dd if=/dev/zero of=/dev/mapper/sda_luks oflag=direct status=progress

cryptsetup close sda_luks

cryptsetup --key-size 512 --hash whirlpool --iter-time 5000 --use-random --cipher aes-xts-plain64 --pbkdf-memory=4194304 --pbkdf=argon2id --integrity hmac-sha256 luksFormat --type luks2 /dev/sda

If it’s a data drive, I’ll generate a key-file and use that.

dd if=/dev/random of=./sda-keyfile bs=512

cryptsetup --key-size 512 --key-file ./sda-keyfile --hash whirlpool --iter-time 5000 --use-random --cipher aes-xts-plain64 --pbkdf-memory=4194304 --pbkdf=argon2id --integrity hmac-sha256 luksFormat --type luks2 /dev/sda

Next, I immediately back up the header and store it somewhere safe.

cryptsetup luksHeaderBackup --header-backup-file /root/sda_header.luksHeader /dev/sda
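Should the on-disk header ever get damaged, the backup can be written back with the matching restore command:

cryptsetup luksHeaderRestore --header-backup-file /root/sda_header.luksHeader /dev/sda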

Now I can open the device and begin the partitioning and setting up the filesystem.

cryptsetup open --type luks2 /dev/sda sda_luks

Creating the filesystem

The final stage of this abstraction sandwich is to configure btrfs.  This is my favourite part: 1) because we’re almost done; and 2) because it’s the abstraction level I deal with the most.

A unique component of btrfs is that it presents a subvolume filesystem on top of the filesystem you see at the command line. I usually don’t build the filesystem structure with the root filesystem at the top of the hierarchy; instead I create a tree like so:

/toplevel # the parent of all subvolumes

/toplevel/savestate # this is where snapshots are stored

/toplevel/rootfs # where the rootfs will go

/toplevel/storage # where data is stored, in the case of a data drive

One thing I’ve learned from using btrfs is that if you don’t configure quotas, you will rarely know how much space you’ll save by deleting a given snapshot. So I usually enable that option when configuring the initial structure.

mkfs.btrfs --csum xxhash /dev/mapper/sda_luks
mkdir /toplevel
mount -o autodefrag,compress=lzo /dev/mapper/sda_luks /toplevel
btrfs quota enable /toplevel
btrfs subvolume create /toplevel/rootfs
btrfs subvolume create /toplevel/rootfs/gentoo
btrfs subvolume create /toplevel/savestate
btrfs subvolume create /toplevel/storage
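With quotas enabled, a rough per-subvolume view of usage (the exclusive column approximates what deleting a snapshot would free) is available through qgroups:

btrfs qgroup show /toplevel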

Summary

And there you have it, a very simple but practical way of managing the longevity of your system with a systematic approach to formatting, partitioning, encrypting, and creating filesystems on your hard drives.

Update on this blog

Over the next four weeks, I’m going to be merging all my blog posts together on this single site. I’ll also be moving away from WordPress to a static site generated primarily from reStructuredText files. This’ll free up an additional server and let me safely store all my information in a single git repository.

For the most part, the only visible change will be the additional posts available for viewing on this site.

Stay tuned!

Setting up Kanboard for local project management
13 May 2021

I’ve used Trello quite frequently in the past to manage my work items but wanted to move to a self-hosted version that didn’t data mine my every action or offer to integrate with various other services and companies that distract me from the work. I came across Kanboard, an open-source, self-hosted kanban board application. The installation instructions recommend Docker, but I opted to use the default server software that comes with Debian.

Here’s how I did it.

Installing and configuring Apache, Postgresql, and PHP

First, we’ll configure the LAPP stack.

We’ll install the services and related PHP extensions.
# apt install apache2 php7.3 php-pgsql libapache2-mod-php
# apt install php-common php-xml php-gd php-mbstring php-json php-common php-zip php-fpm

Check that Apache2 started successfully. You should be able to view the default page at http://localhost.
# systemctl status apache2

Next, we’ll enable PHP-FPM to have a separate user for each web site. FPM is the process manager for PHP.

Before we get started we’ll need to create a separate, isolated user for FPM.
# adduser --no-create-home --uid 5000 --disabled-password fpm-kanban

Start by creating a new pool.
# cd /etc/php/7.3/fpm/
# vim pool.d/kanban.conf
; Kanban pool
[kanban]
user = fpm-kanban
group = fpm-kanban
listen.owner = www-data
listen.group = www-data
listen = /run/php/php7.3-fpm-kanban.sock
pm.max_children = 100
pm = ondemand

Enable proxy and FPM support in Apache. Restart Apache afterwards and make sure FPM is running.
# a2enconf php7.3-fpm
# a2enmod proxy
# a2enmod proxy_fcgi
# systemctl restart apache2
# systemctl restart php7.3-fpm

Edit the php.ini configuration and turn off allow_url_fopen to reduce the attack surface of Kanboard.
# vim /etc/php/7.3/fpm/php.ini
allow_url_fopen = Off

Extracting and starting Kanboard

Configure the permissions for /var/www.
# chown -R root:www-data /var/www

Make a directory for kanboard. Apache will access the directory with www-data permissions but will send PHP connections to PHP-FPM which will run the scripts under the fpm-kanban user.
# mkdir /var/www/kanboard
# chown -R www-data:www-data /var/www/kanboard

In the Apache configuration directory create a new site.
# cd /etc/apache2
# a2dissite 000-default.conf
# touch ./sites-available/001-kanboard.conf
# vim !$

<VirtualHost *:80>
  ServerName kanban.topology.aves
  ServerAdmin webmaster@localhost
  ErrorLog ${APACHE_LOG_DIR}/kanboard_error.log
  CustomLog ${APACHE_LOG_DIR}/kanboard_access.log combined
  DocumentRoot /var/www/kanboard

  <Directory /var/www/kanboard>
    Options -Indexes
    AllowOverride All
    Require all granted
  </Directory>

  <FilesMatch \.php$>
    SetHandler "proxy:unix:/run/php/php7.3-fpm-kanban.sock|fcgi://localhost"
  </FilesMatch>
</VirtualHost>

Then we’ll test that the new site is working.

# cp /var/www/html/index.html /var/www/kanboard/
# chown www-data: /var/www/kanboard/index.html
# a2ensite 001-kanboard.conf
# systemctl reload apache2

Add the relevant information to your DNS resolver. I’m using Unbound.  This will redirect all subdomains under hostname.tld to hostname.tld.  Take note of the IP address used.
server:
  local-zone: "hostname.tld" redirect
  local-data: "hostname.tld 86400 IN A 192.168.1.1"
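Before restarting Unbound it’s worth validating the file; assuming a stock systemd setup, something like:

# unbound-checkconf
# systemctl restart unbound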

Visit the site and make sure it’s presented correctly at http://kanboard.hostname.tld

Next, we’ll test that PHP is working as intended by creating a sample PHP file.

# echo '<?php phpinfo(); ?>' > /var/www/kanboard/index.php

Navigate again to http://kanboard.hostname.tld and you should see the PHP debug information.

Next we’ll create a database in Postgresql to store the data. First make sure the database is online.
# systemctl status postgresql
# su -l postgres
# psql
# > create database kanboard;
# > create user kanban with login password 'kpassword';
# > \du # checks if user has been created
# > grant all privileges on database kanboard to kanban;
# > \l # lists the databases

Next, we’ll extract Kanboard and set up the database.

Download the latest release and extract it into the web root (run this from /var/www/kanboard).
# wget https://github.com/kanboard/kanboard/archive/refs/tags/v1.2.19.tar.gz
# tar xzf v1.2.19.tar.gz --strip-components=1 -C ./
# chown -R www-data: ./

Finally, configure database access for Kanboard in the config.php file.
# mv /var/www/kanboard/config_default.php /var/www/kanboard/config.php
# vim /var/www/kanboard/config.php

define('DB_DRIVER', 'postgres');

// Database parameters
define('DB_USERNAME', 'kanban');
define('DB_PASSWORD', 'kpassword');
define('DB_HOSTNAME', 'localhost');
define('DB_NAME', 'kanboard');

Now, navigate to http://kanban.hostname.tld and log in with the default credentials of admin/admin.

I’ll write another post at a later date explaining how I use Kanboard to manage my day-to-days! Enjoy!

Mounting a local directory in a qemu/kvm virtual machine
28 January 2021

I usually try to run my virtual machines using my existing kernel, and this means that any modules I use have to be mounted within the virtual machine. I usually use a 9p mount to achieve this, but in some scenarios it’s easier to use the virtual vfat mount type in KVM/QEMU.

With the 9p mount, you can use:

qemu-system-x86_64 -fsdev local,path=/directory,security-model=mapped-xattr,id=9p,readonly -device virtio-9p-pci,fsdev=9p,mount_tag=9p

And mount it inside the guest with

mount -t 9p -o trans=virtio 9p /guest/directory

You can also use the virtual FAT filesystem.

qemu-system-x86_64 -drive file=fat:ro:/directory,id=vfat,format=raw,if=none -device usb-storage,drive=vfat
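Inside the guest, the virtual FAT drive shows up as an ordinary USB disk. The device name below is an assumption (it depends on what else is attached, so check dmesg):

mount /dev/sdb1 /mnt # read-only FAT view of the host directory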

Flashing Mobian to the eMMC from within the PinePhone
17 January 2021

Getting Mobian onto the PinePhone is manageable, but the given instructions didn’t work on my device. I had to apply the following patch to get the Python installer to flash successfully.

Log into the PinePhone over SSH (or use the phone itself, you savage :P).

Download the installer from https://salsa.debian.org/Mobian-team/mobian-installer/-/blob/master/mobian_installer.py

Make the following changes before executing:

  • Remove the references to libparted and py and reference the system libraries instead with import parted and import py.
  • In L307, replace logfn = os.path.join(os.environ["USER_PWD"], "mobian_installer.log") with logfn = os.path.join("/home/mobian", "mobian_installer.log")
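Expressed as a unified diff (line numbers approximate), the second change looks roughly like this:

-logfn = os.path.join(os.environ["USER_PWD"], "mobian_installer.log")
+logfn = os.path.join("/home/mobian", "mobian_installer.log")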

Searching for and executing only specific file types by extension
16 January 2021

I recently needed to copy a bunch of videos over a slow network to a slow hard disk and didn’t want to copy over extraneous files. I used a combination of rsync and find to make sure that only the files I needed were copied and played.

Sending files over
$ rsync -rv ./ --filter "+ */" --filter "+ *.mp4" --filter "+ *.mkv" --filter "- *" user@host:./

Executing (playing) the files
$ find . -type f \( -name "*.mkv" -o -name "*.mp4" \) -print0 | sort -Rz | xargs -0 -n1 omxplayer -r -o hdmi

A quick breakdown of the commands.

The -print0 and -0 flags use GNU extensions to tell find and xargs respectively to ignore whitespace and only start a new iteration when there’s a null character (‘\0’). The sort command randomises the output (-R) and reads its input using nul termination (-z). xargs -n1 passes the arguments one at a time to the following command.

Scheduling a shutdown with systemd

Occasionally, you’ll make a change in Linux that might be a bit precarious: you commit the change with hesitation, anticipating a problem or some uncertainty on the next reboot. In these cases, it may be desirable to schedule a shutdown that fires if nothing is done within a certain time period.

I do this frequently when I’m testing changes to a Raspberry Pi that doesn’t have an off button. This reduces the likelihood that I need to turn off the device by removing power (a problematic shut-off method that can cause problems with the SD card).

If systemd is available, you can create a unit that powers the machine off after a set amount of time.

We’ll create a systemd unit that executes after the multi-user target has completed.

Create a small script to trigger the auto-poweroff.

/usr/local/bin/auto-poweroff.sh
#!/bin/bash
/usr/bin/sleep 60
/sbin/poweroff

/etc/systemd/system/auto-poweroff.service
[Unit]
Description="Automatically power off after a period of time."
IgnoreOnIsolate=yes
After=ssh.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/auto-poweroff.sh

[Install]
WantedBy=multi-user.target

Finally, create the symlink into the multi-user.target.wants directory.

cd /etc/systemd/system/multi-user.target.wants/
ln -s ../auto-poweroff.service
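As an aside, when you just need a one-off deadline and don’t want any unit files lying around, a transient unit can do the same job; this sketch assumes systemd-run is run as root:

systemd-run --on-active=10min /sbin/poweroff # cancel via systemctl stop on the run-*.timer unit it prints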
