Jiff Slater

Best btrfs + luks configuration for longevity with new hard drives
29 June 2021

I’m a big fan of btrfs as the filesystem of choice not only for backups but also for the root filesystem.  However, as I move around with my backups, sometimes I want to restore them to different machines, and I always hesitate when faced with the prospect of formatting, encrypting, partitioning, and mounting a new drive.  I wanted to document how I do it so I can have some consistency as I move from hard drive to hard drive, SSD to SSD.


In general, when I purchase hard drives, I purchase two of the same kind.  I don’t usually like to go smaller than 2TB, and recently I’ve been harbouring a certain liking for 4TB hard drives when I can get my hands on them.  I’m a big fan of Western Digital and also of Seagate, especially at these larger capacities.  You can view the WD Red series prices here (UK).

BackBlaze has a great website where they list the failure rates of their hard drives (link) so you can start here to prime your instincts.


This will take a while.  Just like burning in new RAM with memtest86+, I’ll test my hard drives for bad blocks with, funnily enough, the badblocks tool.  Arch Linux has a great wiki article about it here.  Here are my default steps for new drives.

Run the smartctl self-tests.

smartctl -t short /dev/sda

smartctl -t conveyance /dev/sda # Used to check that the drive wasn’t damaged in transportation

And view the results

smartctl -A /dev/sda

Next, run the badblocks command.  You can speed this up by figuring out the block size of your hard drive.

hdparm -I /dev/sda | grep Sector # Get the block size, usually around 4096

badblocks -b 4096 -ws -o badblocks-dev-sda.txt /dev/sda

From this mailing list post (link), it seems that modern drives will automatically notice a bad block and re-allocate to another location.  So, before cursing the wind if you get a bad block, it’s probably valuable to run it again and see if the error heals itself.
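If you do re-run it, the SMART counters tell the story. Below is a minimal sketch of the filter I’d use to check whether sectors were remapped; the sample attribute lines are inlined so it runs anywhere, and on a real drive you would pipe in the output of smartctl -A /dev/sda instead.

```shell
# Filter the SMART attributes that reveal remapped or pending sectors.
check_remaps() {
    grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'
}

# Sample smartctl -A lines stand in for real output here.
printf '%s\n' \
  '  5 Reallocated_Sector_Ct   0x0033   100   100   140   Pre-fail  Always   -   0' \
  '197 Current_Pending_Sector  0x0032   200   200   000   Old_age   Always   -   0' \
  '  9 Power_On_Hours          0x0032   099   099   000   Old_age   Always   -   1337' \
  | check_remaps
```

A non-zero raw value in either attribute after a second badblocks pass is the point at which I stop trusting the drive.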

At the expense of write speed, you can use hdparm to turn on the Write-Read-Verify feature.  This should also lower the likelihood of being bitten by silently corrupted writes.

Finally, we can use the wipefs tool to make sure there are no filesystem signatures remaining.

wipefs -a /dev/sda

As shown in the encrypting section, if you really want to wipe the drive you can open the drive in raw dm-crypt mode and wipe it.

cryptsetup open --type plain -d /dev/urandom /dev/sda sda_luks


Usually, there’s no need to do anything fancy here.  If the drive is a root drive, I add an EFI partition, a swap partition, a root partition, and leave a small amount of space at the end of the hard drive unpartitioned.  The encryption step will be on the root partition (typically /dev/mapper/sda_luks3).  If it’s a data drive, I typically encrypt the entire drive and then make a single partition to use for the data.

parted /dev/sda

mklabel gpt

mkpart "EFI system partition" fat32 1MiB 301MiB

set 1 esp on

mkpart "Root partition" btrfs 301MiB -40960MiB

mkpart "Swap space" linux-swap -40959MiB -8192MiB
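For repeatability, the same session can be scripted with parted -s. This is a sketch only: the partition labels are names I chose, the offsets mirror the interactive session above, and DRY_RUN=1 makes the function print the command rather than touch a disk.

```shell
# Scripted version of the interactive parted session above.
partition_root_drive() {
    dev="$1"
    cmd="parted -s $dev \
mklabel gpt \
mkpart EFI-system fat32 1MiB 301MiB \
set 1 esp on \
mkpart root btrfs 301MiB -40960MiB \
mkpart swap linux-swap -40959MiB -8192MiB"
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo "$cmd"        # print instead of partitioning
    else
        eval "$cmd"
    fi
}

DRY_RUN=1 partition_root_drive /dev/sda
```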


Depending on whether the drive is a root drive or a data drive, the order of this and the partitioning step can be swapped – especially if you need an EFI partition.

I’ve been using LUKS (Linux Unified Key Setup) for over a decade and I’ve found it rock-solid for hard drive encryption.  My settings of choice have evolved over the years and today I use a very modern set of options.  Cryptsetup, the administration interface to dm-crypt and LUKS, has a lot of options that let you really (shoot yourself in the foot) customise the configuration.  In general, for data drives I use keyfiles instead of passphrases and store an encrypted copy of the keyfile in a different building (virtual or otherwise) than the one in which the drive lives.  For root drives, I use a passphrase and store a copy of the LUKS header in a different building.

More recently, the dm-integrity (GitLab; Kernel; presentation) component provides even more assurance that your data is not silently becoming corrupted, and I’ve started to enable that as well.  It’s noted as experimental, so I use a very minimal configuration that hasn’t (touch wood) caused problems to date.

One quick thing to note before diving into encryption: it’s important to establish your threat model.  For me, I’m simply trying to avoid any hiccups in the event that a drive is stolen.

First, I encrypt the drive using luksFormat.  I add a few custom options for my particular risk tolerance of encryption strength.  If speed is a concern, you can first use cryptsetup benchmark to see which algorithms are most performant on your setup.  On my machine, argon2id + XTS gave the best balance between making brute-force attempts on the key expensive and performance when reading and writing encrypted data.

Before doing the below, if you’re really paranoid, you can overwrite the hard drive with random data before getting started, as per the cryptsetup FAQ (GitLab).

cryptsetup open --type plain -d /dev/urandom /dev/sda sda_luks

dd if=/dev/zero of=/dev/mapper/sda_luks oflag=direct status=progress

cryptsetup close sda_luks

cryptsetup --key-size 512 --hash whirlpool --iter-time 5000 --use-random --cipher aes-xts-plain64 --pbkdf-memory=4194304 --pbkdf=argon2id --integrity hmac-sha256 luksFormat --type luks2 /dev/sda

If it’s a data drive, I’ll generate a key-file and use that.

dd if=/dev/random of=./sda-keyfile bs=512 count=1

cryptsetup --key-size 512 --key-file ./sda-keyfile --hash whirlpool --iter-time 5000 --use-random --cipher aes-xts-plain64 --pbkdf-memory=4194304 --pbkdf=argon2id --integrity hmac-sha256 luksFormat --type luks2 /dev/sda

Next, I immediately back up the header and store it somewhere safe.

cryptsetup luksHeaderBackup --header-backup-file /root/sda_header.luksHeader /dev/sda
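Backups only count if they restore; the mirror of luksHeaderBackup is luksHeaderRestore. A sketch pairing the two is below. DRY_RUN=1 prints the commands instead of running them, since a stray restore will overwrite the on-disk key slots.

```shell
# Back up a LUKS header, and the matching restore for when it's needed.
luks_header_roundtrip() {
    dev="$1"; file="$2"
    run() {
        if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi
    }
    run cryptsetup luksHeaderBackup --header-backup-file "$file" "$dev"
    run cryptsetup luksHeaderRestore --header-backup-file "$file" "$dev"
}

DRY_RUN=1 luks_header_roundtrip /dev/sda /root/sda_header.luksHeader
```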

Now I can open the device and begin the partitioning and setting up the filesystem.

cryptsetup open --type luks2 /dev/sda sda_luks

Creating the filesystem

The final stage of this abstraction sandwich is to configure btrfs.  This is my favourite part: 1) because we’re almost done; and 2) because it’s the abstraction level I deal with the most.

A unique component of btrfs is that it presents a subvolume hierarchy on top of the filesystem you see at the command line.  I usually don’t build the filesystem structure with the root filesystem at the top of the hierarchy; instead I create a tree like so:

/toplevel # the parent of all subvolumes

/toplevel/savestate # this is where snapshots are stored

/toplevel/rootfs # where the rootfs will go

/toplevel/storage # where data is stored, in the case of a data drive

One thing I’ve learned from using btrfs is that if you don’t configure quotas, you will rarely know how much space you’ll save by deleting a given snapshot.  So I usually enable that option when configuring the initial structure.

mkfs.btrfs --csum xxhash /dev/mapper/sda_luks
mkdir /toplevel
mount -o autodefrag,compress=lzo /dev/mapper/sda_luks /toplevel
btrfs quota enable /toplevel
btrfs subvolume create /toplevel/rootfs
btrfs subvolume create /toplevel/rootfs/gentoo
btrfs subvolume create /toplevel/savestate
btrfs subvolume create /toplevel/storage
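With the savestate subvolume in place, point-in-time snapshots are a single command. The sketch below shows the read-only, date-stamped naming I use; the convention is mine, not anything btrfs mandates, and DRY_RUN=1 prints the command instead of snapshotting.

```shell
# Take a read-only, date-stamped snapshot of a subvolume into savestate.
snap() {
    src="$1"
    dest="/toplevel/savestate/$(basename "$src")-$(date +%F)"
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo btrfs subvolume snapshot -r "$src" "$dest"
    else
        btrfs subvolume snapshot -r "$src" "$dest"
    fi
}

DRY_RUN=1 snap /toplevel/rootfs/gentoo
```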


And there you have it, a very simple but practical way of managing the longevity of your system with a systematic approach to formatting, partitioning, encrypting, and creating filesystems on your hard drives.

Quickly instantiating an Arch Linux systemd-nspawn container from Gentoo
6 December 2020

I recently needed to install the beets media organiser on Gentoo and found it needed a lot of packages to be unmasked. Rather than install it directly from pip, I opted to install it inside a small Arch Linux container. I consider this a trial run before I move my Firefox installation from the VM in which I’m authoring this post to a container on the host.

Note that this method of working means you’ll have duplicate packages on your host system but, as they say, space is cheap, right?

Step 1: Acquiring the packages

Go ahead and download the bootstrap image from one of your favourite mirrors. The file name as of writing is ‘archlinux-bootstrap-2020.12.01-x86_64.tar.gz’. Note that I use the xattrs command line option for wget so I can remember from where I got the file. You can view the extended attributes by using `getfattr -d $FILENAME`.

$ wget --xattrs https://$HOST/archlinux/iso/2020.12.01/archlinux-bootstrap-2020.12.01-x86_64.tar.gz

Step 2: Creating the container directories

I always find this a bit of a chicken-and-egg problem – how do you name the directory in a way that reflects what you’re going to use it for? I’ve generally stuck with writing the date of the initial creation as the directory name.

$ mkdir $HOME/containers/archlinux/2020-12-06
$ cd !$

Step 3: Extracting the bootstrap tarball

Note that you need to run this as root.

# tar xzpf archlinux-bootstrap-2020.12.01-x86_64.tar.gz --strip-components=1

Step 4: Selecting a nearby mirror

Edit the mirrorlist file and uncomment the appropriate mirror.

# vim etc/pacman.d/mirrorlist

Step 5: Launching the container

This is where the magic of systemd-nspawn comes in. You can simply run the following (as root unfortunately)…

# systemd-nspawn -D ~user/containers/archlinux/2020-12-06
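Once inside the container, there are a couple of first-boot chores before pacman will install anything. Below is a sketch of the steps I’d run: the keyring commands are the standard Arch bootstrap flow, and beets is simply the package I was after. DRY_RUN=1 prints the commands rather than executing them.

```shell
# First-boot steps inside the freshly launched Arch container.
container_first_boot() {
    run() {
        if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi
    }
    run pacman-key --init                # create the pacman keyring
    run pacman-key --populate archlinux  # trust the Arch master keys
    run pacman -Syu beets                # sync, upgrade, and install beets
}

DRY_RUN=1 container_first_boot
```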

Connecting a physical NIC to a Qemu Linux Guest
12 November 2020

Enabling IOMMU to directly connect a physical card to a guest used to be a painful and error-prone task. I remember in the past having to play with Access Control Services to get anything to work. Now most Intel and AMD devices support it – at least partially.

I recently started building my virtual homelab and needed a network-accelerated guest to handle my VM traffic. This guest would run on the host along with the other VMs but would have direct access to the second network card in my PC. This way, virtual traffic is routed out of one network card and the host’s traffic is routed out of a different card. Physical separation – at least in theory. In another post I’ll discuss the VLAN I configured at the physical router’s end to enable physical separation between the two network cards.

Before continuing, I recommend you read the errata for your CPU (see mine: Intel Xeon E3-1200) to see if there are any known (and most likely unfixable) issues that will impact passing physical devices through to virtual guests. I checked my IOMMU status using lspci -vv and looked for the ACSCap line under my PCI Express Root.

Using this handy guide on InstallGentoo, I ran the following to list my IOMMU groups, which I would then isolate and hand off to the VM.

$ for iommu_group in $(find /sys/kernel/iommu_groups/ -maxdepth 1 -mindepth 1 -type d); \ 
do echo "IOMMU group $(basename "$iommu_group")"; for device in $(ls -1 "$iommu_group"/devices/); \
do echo -n $'\t'; lspci -nns "$device"; done; done

Here my network card was located under IOMMU group 18 (and my graphics card under IOMMU group 1 but that’s for another day :)).

There were a few options for the next step – either I could have written a small systemd service that runs before the network is up or I could add an entry to my modprobe config in /etc to bind the dummy driver to the interface – I opted for the latter, using the device ID from the above scriptlet.

IOMMU group 18
05:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [XXXX:YYYY] (rev ZZ)
# echo "options vfio-pci ids=XXXX:YYYY" | tee -a /etc/modprobe.d/vfio.conf

You can either modprobe vfio-pci to have this happen (you should see the kernel driver in use be vfio-pci when using lspci -v) or unbind using /sys.

First, list the device directory in /sys/bus/pci/devices/<domain:bus:device.function>, checking for the driver directory.

# ls /sys/bus/pci/devices/0000\:05\:00.0/ 
[...] driver [...]

Then issue an unbind command to the driver.

# echo 0000:05:00.0 > /sys/bus/pci/devices/0000\:05\:00.0/driver/unbind

Now your Ethernet controller is ready to be handed to the VM.  Simply add -device vfio-pci,host=05:00.0 to your QEMU command line.
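Putting it together, a minimal QEMU invocation might look like the sketch below. Only the -device vfio-pci,host=05:00.0 part comes from the steps above; the memory, CPU, and disk arguments are placeholders I’ve assumed. DRY_RUN=1 prints the command instead of launching a VM.

```shell
# Launch a guest with the physical NIC passed through via VFIO.
launch_vm() {
    cmd="qemu-system-x86_64 -enable-kvm -m 2048 -cpu host \
-device vfio-pci,host=05:00.0 \
-drive file=router.qcow2,if=virtio"
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo "$cmd"        # print instead of booting the VM
    else
        eval "$cmd"
    fi
}

DRY_RUN=1 launch_vm
```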


Installing Gentoo in 2020
27 September 2020

I had my once-a-year conundrum recently – install Gentoo or Ubuntu? This year I chose Gentoo and was immediately awash with nostalgia from seeing this screen.

The last time I ran Gentoo seriously was back in 2008 and now to see this in all its glory really brings me back.

In memory of those good times, I wanted to document the installation process on a modern 2020 PC.  Overall, I would say not much has changed except the availability of systemd as an init option.

Initial setup

I began by downloading the Gentoo minimal install CD. For AMD64 it’s available here.  As with all Gentoo installations, it’s wise to have the handbook nearby for support.  It’s available for AMD64 here.

I burned the CD image to a USB stick using Etcher by Balena.  It’s not my favourite tool but it’s reliable despite the ads.  dd is another alternative.

Next, I rebooted into the USB stick via the UEFI boot menu and chose the default Grub menu.  If you have a wired network configured, then in most cases the network will already be working and configured with DHCP.

Configuring disks

The hardest part about configuring a fresh installation of Linux is deciding on the storage configuration.  The usual deciding factors are tolerance for risk and available storage devices.  In my case, I have an SSD solely dedicated to Linux and a pair of HDDs for longer-term storage.

livecd ~ # lsblk
loop0 7:0 0 390.1M 1 loop /mnt/livecd
sda 8:0 0 1.8T 0 disk 
└─md156 9:126 0 3.8T 0 raid0 
├─md156p1 259:0 0 128M 0 part 
└─md156p2 259:1 0 3.8T 0 part 
sdb 8:16 0 1.8T 0 disk 
└─md156 9:126 0 3.8T 0 raid0 
├─md156p1 259:0 0 128M 0 part 
└─md156p2 259:1 0 3.8T 0 part 
sdc 8:32 0 460.8G 0 disk 
├─sdc1 8:33 0 512M 0 part 
├─sdc2 8:34 0 244M 0 part 
└─sdc3 8:35 0 460G 0 part 
sde 8:64 1 14.6G 0 disk 
├─sde1 8:65 1 427M 0 part /mnt/cdrom
└─sde2 8:66 1 6.4M 0 part 
sr0 11:0 1 1024M 0 rom

Here you can see I have two 2TB drives already in RAID from an existing Linux installation, the USB stick mounted at /mnt/cdrom and the rootfs for the livecd mounted at /mnt/livecd.

For the filesystem, I usually select btrfs for the SSD and btrfs in RAID1 for the HDDs.  This gives me the best balance of recoverability, reliability, and performance for my usecases.  However, because I’m curious to try some of the latest technology this time round I’m going btrfs across all my drives.

Here’s the storage configuration:

  • SSD: btrfs on LUKS with GPT
    1. EFI partition, 512M
    2. boot partition, 512M
    3. swap partition, 32G
    4. root partition, remaining space
  • HDD: btrfs on LUKS (hdd1 & hdd2) with GPT
    • EFI partition, 512M, empty
    • root partition, remaining disk

Configuring the SSD

First, before partitioning, I usually erase the SSD.  I have more than normal faith in ATA secure erase (or NVMe secure erase) and commonly use it before partitioning drives.  hdparm -I /dev/<drive> quickly tells me what my options are.  If I’m told my drive is frozen, a quick hotplug (while the computer is on) will unfreeze the device so you can send the secure erase command.  I also had to make sure that the BIOS had the drives connected using AHCI.
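The secure erase itself is the usual two-step hdparm dance: set a temporary security password, then issue the erase with it. The flags below follow the commonly documented hdparm procedure; treat this as a sketch and double-check against your drive, and note DRY_RUN=1 prints rather than erases.

```shell
# ATA secure erase: set a throwaway password, then erase with it.
secure_erase() {
    dev="$1"
    run() {
        if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi
    }
    run hdparm --user-master u --security-set-pass p "$dev"
    run hdparm --user-master u --security-erase p "$dev"
}

DRY_RUN=1 secure_erase /dev/sdX
```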

Next I set up the partition table.  MBR with DOS disklabel is deprecated (and has been for a while) so I’ll use GPT.  I want the disk to be bootable in any case so I usually use a /boot partition that’s unencrypted, along with EFI and rootfs partitions.  This setup can be changed later to a completely encrypted drive with boot USB.

After the partition table was created, it looked something like this:

livecd ~ # fdisk -l /dev/sdc
Disk /dev/sdc: WW GiB, XX bytes, YY sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: ZZ-ZZ-ZZ-ZZ-ZZ

Device Start End Sectors Size Type
/dev/sdc1 2048 1050623 1048576 512M EFI System
/dev/sdc2 1050624 1550335 499712 244M Linux filesystem
/dev/sdc3 X Y Z 450G Linux filesystem
/dev/sdc4 X Y Z 32G Linux swap

mkfs.fat -F32 /dev/sdc1 (for the EFI system partition)

mkfs.ext2 /dev/sdc2 (for the boot partition)

Next I’ll turn the last partition into a LUKS container by using the cryptsetup command.  Read more about LUKS here.

First, the performance of the disk is evaluated using cryptsetup benchmark.  From the results and my risk profile, I can choose the configuration that makes the most sense for me.

cryptsetup --key-size 512 --hash whirlpool --iter-time 5000 --use-random --cipher aes-xts-plain64 --pbkdf-memory=4194304 --pbkdf=argon2id luksFormat /dev/sdc3
cryptsetup luksOpen /dev/sdc3 xtsroot

Next, I’ll set up btrfs on the encrypted root filesystem.

mkfs.btrfs -L broot /dev/mapper/xtsroot

I created a btrfs block device with the label broot.  Next, I mounted it with light compression turned on.  I made sure to have the rootfs and storage directories beneath the toplevel directory for ease of snapshots.

mkdir /toplevel
mount -o compress=lzo /dev/mapper/xtsroot /toplevel
# btrfs quota enable /toplevel # we will not be enabling this yet.
btrfs subvolume create /toplevel/rootfs
btrfs subvolume create /toplevel/rootfs/gentoo
btrfs subvolume create /toplevel/savestate
btrfs subvolume create /toplevel/storage
btrfs subvolume create /toplevel/storage/home
btrfs subvolume create /toplevel/storage/home/user
mount -o subvol=/rootfs/gentoo,compress=lzo /dev/mapper/xtsroot /mnt/gentoo

Finally, I created and turned on swap using the partition I set aside earlier.

mkswap /dev/sdc4
swapon /dev/sdc4

Installing Gentoo

Before continuing, I made sure the date was correct – date.  In most cases, it will already be set correctly.  Next, I downloaded the multilib tarball from Gentoo’s website.

wget https://bouncer.gentoo.org/fetch/root/all/releases/amd64/autobuilds/20200920T214503Z/stage3-amd64-systemd-20200920T214503Z.tar.xz
wget https://bouncer.gentoo.org/fetch/root/all/releases/amd64/autobuilds/current-stage3-amd64/stage3-amd64-20200923T214503Z.tar.xz.CONTENTS.gz
wget https://bouncer.gentoo.org/fetch/root/all/releases/amd64/autobuilds/current-stage3-amd64/stage3-amd64-20200923T214503Z.tar.xz.DIGESTS
wget https://bouncer.gentoo.org/fetch/root/all/releases/amd64/autobuilds/current-stage3-amd64/stage3-amd64-20200923T214503Z.tar.xz.DIGESTS.asc
tar xpvf stage3-amd64-systemd-20200920T214503Z.tar.xz --xattrs-include='*.*' --numeric-owner
nano -w /mnt/gentoo/etc/portage/make.conf

For my compilation options, I chose the following: MAKEOPTS="-j2"; EMERGE_DEFAULT_OPTS="--jobs=3"; and COMMON_FLAGS="-Ofast -flto -pipe -march=native -funroll-loops".

Next, I continued following the guide by selecting mirrors and the ebuild repository.  Finally, I chrooted into the new directory as per the Gentoo handbook.

For my profile, I opted to use default/linux/amd64/17.1/systemd. Next I set my default Python interpreter to python 3.7 by adding the lines below to /etc/portage/package.use

*/* PYTHON_TARGETS: python3_7
*/* PYTHON_SINGLE_TARGET: -* python3_7

For systemd, I added some specific USE flags: +cryptsetup +homed +pkcs11 +policykit, and for Python I added sqlite as a dependency.

After this I issued "emerge --ask --verbose --update --deep --newuse @world".  I commonly check the default USE flags for the profile before going further – they’re stored in /var/db/repos/gentoo/profiles/* with more information here.

Furthermore, when I need to check the current Portage configuration, it’s just an "emerge --info" away.  The Gentoo wiki page also serves as a helpful reference.

For the kernel, I followed the handbook and made sure I had the following functionality:


I also added the following to my bootflags kernel command line:

  • resume=/dev/root

Next, for the initramfs I opted to use genkernel and I used the following configuration.

genkernel --lvm --mdadm --bcache --btrfs --e2fsprogs --dmraid --bootloader=grub2 --luks --busybox --install --kernel-config=/usr/src/linux/.config initramfs

For configuring systemd and grub2 you’ll need to add the following to /etc/default/grub

GRUB_CMDLINE_LINUX="init=/lib/systemd/systemd crypt_root=<device> dobtrfs"

Remember to specify the UUID of the real_root in GRUB_DEVICE and to uncomment GRUB_TERMINAL=console if you want to see errors during bootup.

I usually compile two kernels – one custom-configured by me and the other by genkernel in case I get stuck.  Also, in this day and age, there’s no reason to configure any framebuffers, so at most I only set the EFI framebuffer and simple framebuffer kernel options.

genkernel --kernel-append-localversion=-custom all
grub-mkconfig -o /boot/grub/grub.cfg

Once finished, I rebooted, selected the new entry in Grub, entered my password for my encrypted root drive, and immediately started emerging Firefox.

Welcome home!

Debugging ebuild failure on Gentoo for lxml
22 April 2019

While setting up a fresh installation of Gentoo to deploy to the cloud for remote dev work, I kept running into compilation errors with lxml.

The referenced snippet of code shown was the standard compilation line: "${@}" || die "${die_args[@]}" – which means run the command and its arguments, or quit and show the associated arguments.

I went into the build directory to debug further.

# cd /var/tmp/portage/dev-python/lxml-4.3.3/lxml-4.3.3-python2_7/work/lxml-4.3.3-python2_7
# i686-pc-linux-gnu-gcc -O2 -march=native -pipe -fno-strict-aliasing -fPIC -DCYTHON_CLINE_IN_TRACEBACK=0 -I/usr/include/libxml2 -Isrc -Isrc/lxml/includes -I/usr/include/python2.7 -c src/lxml/etree.c -o /var/tmp/portage/dev-python/lxml-4.3.3/work/lxml-4.3.3-python2_7/build/temp.linux-x86_64-2.7/src/lxml/etree.o -w

Which compiled without problems.

I emerged screen and then ran emerge --resume and managed to capture the error.

> Unknown pseudo-op .str

I found a similar thread on the Gentoo forums and created a bigger swapfile, as I was running out of memory.

# fallocate /swapfile -l 1GiB
# chmod 600 !$
# mkswap !$
# swapon !$

This solved the compilation issue.

Installing Windows XP in VirtualBox
18 July 2009

Not too difficult, here are a few tips to make sure that your experience is flawless.

When configuring the virtual machine, make sure that you

  • enable APIC and ACPI in Settings –> System.
  • create a fixed-size hard disk to improve disk I/O speeds.

After installing Windows XP, make sure that you

  • download the driver for the Intel Pro 1000 network adapter and change the default adapter from PCNet to Intel Pro 1000 in Settings –> Network. For me, this improved network access speed and reduced DNS lookup delays.
  • install VirtualBox Guest Additions, which enables seamless mouse integration and experimental Direct3D support, by mounting the Guest Additions ISO.
  • if, in the event that Windows Update fails to work, register the Wups2.dll as outlined in the Microsoft Knowledge Base <http://support.microsoft.com/kb/943144>

I extend my thanks to the VirtualBox team for making this so effortless. By the way, anything between 5G and 10G is a good size for your virtual hard disk.

Squashing the PITA!
29 April 2009

Over the last couple of months, I have been nagging myself to fix several of the issues with my Linux installation, in particular I need to:

  • update my kernel (currently 2.6.27-rc9)
  • get the wacom input device working with the newest xorg-server
  • add some snaz to my desktop configuration (openbox, dmenu…bleh)

So, sparing you the `wget`ing and `tar -xf`ing, I upgraded my kernel to 2.6.29-rc2 and enabled the kernel modesetting. This enables me to switch to the console with no delay, instead of the current 2-3 second delay. Good news: the upgrade worked well.

Additionally, I upgraded my xorg-server to 1.6.1: the only problem was that the wacom tablet didn’t work, so I installed the development version of linuxwacom (version 8.3-3).

Everything works well so far except rendering terminals. When I try to open alsamixer with uxterm or resize it, the terminal behaves as if it has a really low refresh rate. I am looking into the issue and I will post a fix if I find one.

As for the ‘added snaz,’ I’ll deal with that another time.  Right now I’m just happy that my fps in stepmania has increased by about 20!

8 December 2008

The latest Xorg video driver for Intel chipsets will probably cause a slowdown in rendering due to a switch from TTM to GEM. GEM is only supported in kernel 2.6.28 and above so it’s recommended that you stick with drivers released before 2.4.3.

More information: http://bugs.freedesktop.org/show_bug.cgi?id=13922


Compiling using the vanilla kernel tree sources
15 November 2008

I’m simply going to list the steps.

wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-
tar xf linux-
cd linux-
zcat /proc/config.gz > .config
make menuconfig
make
make modules_install
mkinitcpio -k `make kernelrelease` -g /boot/kernel-`make kernelrelease`.img
cp System.map /boot/System.map-`make kernelrelease`
cp arch/i386/boot/bzImage /boot/vmlinuz-`make kernelrelease`
install -D -m644 .config /boot/kconfig-`make kernelrelease`
/sbin/depmod -A -v `make kernelrelease`
vi /boot/grub/menu.lst

Upgrading to linux-2.6.27-rc3
19 August 2008

rc3 is another solid release.  Follow the same procedure documented in upgrading to linux-2.6.27-rc2.

The catalyst patch works with this release.

Built with WordPress and Vim
© 2008 to 2021 Jiff Slater