Jiff Slater

Setting up Kanboard for local project management
13 May 2021

I’ve used Trello quite frequently in the past to manage my work items but wanted to move to a self-hosted alternative that didn’t data mine my every action or offer to integrate with various other services and companies that distract me from the work.  I came across Kanboard, which is open-source, self-hosted kanban software. The installation instructions recommend Docker, but I opted to use the standard server software that ships with Debian.

Here’s how I did it.

Installing and configuring Apache, Postgresql, and PHP

First, we’ll configure the LAPP stack.

We’ll install the services and related PHP extensions.
# apt install apache2 php7.3 php-pgsql libapache2-mod-php
# apt install php-common php-xml php-gd php-mbstring php-json php-zip php-fpm

Check that Apache2 started successfully.  You should be able to view the default page at http://localhost.
# systemctl status apache2

Next, we’ll enable PHP-FPM so that each web site can run under a separate user. FPM is the process manager for PHP.

Before we get started we’ll need to create a separate, isolated user for FPM.
# adduser --no-create-home --uid 5000 --disabled-password fpm-kanban

Start by creating a new pool.
# cd /etc/php/7.3/fpm/
# vim pool.d/kanban.conf
; Kanban pool
user = fpm-kanban
group = fpm-kanban
listen.owner = www-data
listen.group = www-data
listen = /run/php/php7.3-fpm-kanban.sock
pm.max_children = 100
pm = ondemand

Enable proxy and FPM support in Apache. Restart Apache afterwards and make sure FPM is running.
# a2enconf php7.3-fpm
# a2enmod proxy
# a2enmod proxy_fcgi
# systemctl restart apache2
# systemctl restart php7.3-fpm

Edit the php.ini configuration and turn off allow_url_fopen to reduce the attack surface of Kanboard.
# vim /etc/php/7.3/fpm/php.ini
allow_url_fopen = Off

Extracting and starting Kanboard

Configure the permissions for /var/www.
# chown -R root:www-data /var/www

Make a directory for kanboard. Apache will access the directory with www-data permissions but will send PHP connections to PHP-FPM which will run the scripts under the fpm-kanban user.
# mkdir /var/www/kanboard
# chown -R www-data:www-data /var/www/kanboard
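A quick way to confirm this split later is a one-line probe script (whoami.php is a hypothetical name; copy it into /var/www/kanboard once the pool is wired up). Requested through Apache, it should report fpm-kanban rather than www-data.

```shell
# Work in a scratch directory for this dry run.
cd "$(mktemp -d)"
# whoami.php prints the user PHP-FPM executes scripts as.
echo '<?php echo exec("whoami"); ?>' > whoami.php
cat whoami.php
```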

In the Apache configuration directory create a new site.
# cd /etc/apache2
# a2dissite 000-default.conf
# touch ./sites-available/001-kanboard.conf
# vim !$

<VirtualHost *:80>
    ServerName kanban.topology.aves
    ServerAdmin webmaster@localhost
    ErrorLog ${APACHE_LOG_DIR}/kanboard_error.log
    CustomLog ${APACHE_LOG_DIR}/kanboard_access.log combined
    DocumentRoot /var/www/kanboard
    <Directory /var/www/kanboard>
        Options -Indexes
        AllowOverride All
        Require all granted

        <FilesMatch \.php$>
            SetHandler "proxy:unix:/run/php/php7.3-fpm-kanban.sock|fcgi://localhost"
        </FilesMatch>
    </Directory>
</VirtualHost>

Then we’ll test that the new site is working.

# cp /var/www/html/index.html /var/www/kanboard/
# chown www-data: /var/www/kanboard/index.html
# a2ensite 001-kanboard.conf
# systemctl reload apache2

Add the relevant information to your DNS resolver. I’m using Unbound.  This will resolve all subdomains under hostname.tld to hostname.tld.  Take note of the IP address used (the address below is a placeholder).
local-zone: "hostname.tld" redirect
local-data: "hostname.tld 86400 IN A <ip-address>"

Visit the site and make sure it’s presented correctly at http://kanboard.hostname.tld

Next, we’ll test that PHP is working as intended by creating a sample PHP file to make sure the configuration is working.

# echo '<?php phpinfo(); ?>' > /var/www/kanboard/index.php

Navigate again to http://kanboard.hostname.tld and you should see the PHP debug information.

Next we’ll create a database in PostgreSQL to store the data. First make sure the database is online.
# systemctl status postgresql
# su -l postgres
$ psql
postgres=# create database kanboard;
postgres=# create user kanban with login password 'kpassword';
postgres=# \du # checks if user has been created
postgres=# grant all privileges on database kanboard to kanban;
postgres=# \l # lists the databases

Finally, we’ll extract Kanboard and set up the database.

Download the latest release and extract it into the web root.
# cd /var/www/kanboard
# wget https://github.com/kanboard/kanboard/archive/refs/tags/v1.2.19.tar.gz
# tar xzf v1.2.19.tar.gz --strip-components=1 -C ./
# chown -R www-data: ./
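The --strip-components=1 flag drops the leading kanboard-1.2.19/ directory inside the archive so the files land directly in the web root. A small self-contained illustration (demo paths are made up for the example):

```shell
# Work in a scratch directory.
cd "$(mktemp -d)"
# Build a tarball with a leading directory, like the GitHub release has.
mkdir -p demo/kanboard-1.2.19
echo '<?php' > demo/kanboard-1.2.19/index.php
tar czf demo.tar.gz -C demo kanboard-1.2.19
# Extracting with --strip-components=1 removes the leading directory.
mkdir -p out
tar xzf demo.tar.gz --strip-components=1 -C out
ls out   # → index.php
```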

Then configure database access for Kanboard in the config.php file.
# mv /var/www/kanboard/config_default.php /var/www/kanboard/config.php
# vim /var/www/kanboard/config.php

define('DB_DRIVER', 'postgres');

// Database parameters
define('DB_USERNAME', 'kanban');
define('DB_PASSWORD', 'kpassword');
define('DB_HOSTNAME', 'localhost');
define('DB_NAME', 'kanboard');

Now, navigate to http://kanban.hostname.tld and log in with the default credentials of admin/admin.

I’ll write another post at a later date explaining how I use Kanboard to manage my day-to-days! Enjoy!

Enabling SSH + WiFi on Raspberry Pi OS
15 January 2021

Setting up SSH on Raspberry Pi OS is simple – create an empty ssh file in the boot partition root directory and edit the /etc/wpa_supplicant/wpa_supplicant.conf to include the wireless network information.

# touch /mnt/raspberry_boot/ssh
# NETWORK_NAME=somewirelessssid
# NETWORK_PASSWORD=somewirelesspassword
# printf 'network={\n  ssid="%s"\n  psk="%s"\n  key_mgmt=WPA-PSK\n}\n' "$NETWORK_NAME" "$NETWORK_PASSWORD" | tee -a /mnt/raspberry_root/etc/wpa_supplicant/wpa_supplicant.conf
# cat /mnt/raspberry_root/etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
# eject /mnt/raspberry*

Once the Raspberry Pi boots up, you can access it over the data USB cable if you’ve already configured that and also over the wireless network, presuming it can automatically be assigned an IP address.

Remember to enable ssh to run on each boot with systemctl enable ssh or via raspi-config.

Rotating expired Nitrokey subkeys used for password store
5 July 2020

I’m currently managing my saved passwords with a mixture of pass and Nitrokey. One of my sub-keys expired so I couldn’t update my passwords. Here’s how I generated new keys (rotated them) with a new expiration date.

You’ll need access to your master key. Most tutorials online will have you generate the key locally, nerf it, and upload it into the Nitrokey. In this case, you’ll need to find the original primary signing key before moving forward.

Once you have it in hand, extract the contents to a temporary directory and let’s begin. Don’t forget to set the directory permissions appropriately (chmod 700 ./). We’ll be using gpgh to refer to this new directory we’re using for gpg.

$ alias gpgh="gpg --homedir $(pwd)"
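Note the double quotes: $(pwd) is expanded once, when the alias is defined, so gpgh keeps pointing at this directory even after you cd elsewhere. A quick illustration of the expansion timing:

```shell
# Double quotes expand $(pwd) at assignment time, just like the alias
# definition above; single quotes would defer expansion to each use.
cd /tmp
now=$(pwd)      # captured immediately
cd /
echo "$now"     # → /tmp
```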

$ gpgh --import user@domain.tld.gpg-private-keys
< enter password >

Trust the keys.
$ gpgh --edit-key user@domain.tld
gpg> trust

< select 5 for maximum trust >
< select y to confirm >
< exit to confirm >

Modify expiry date of primary key.
$ gpgh --expert --edit-key user@domain.tld
gpg> expire

< select and confirm a new timeframe >

List the existing subkeys.
gpg> list
sec brainpoolP384r1/DEADBEEFDEADBEE1
created: 2019-XX-XX expires: 2021-XX-XX usage: SC
trust: ultimate validity: ultimate
ssb brainpoolP384r1/DEADBEEFDEADBEEA
created: 2019-XX-XX expired: 2020-XX-XX usage: E
ssb brainpoolP384r1/DEADBEEFDEADBEEB
created: 2019-XX-XX expired: 2020-XX-XX usage: S
ssb brainpoolP384r1/DEADBEEFDEADBEEC
created: 2019-XX-XX expired: 2020-XX-XX usage: A
[ultimate] (1). User Name

Generate the new subkeys.
gpg> addkey
< select 12 to replace subkey BEEA >
< select 7 for brainpool p-384 >
< select 6m for six months >
< select y then y to confirm >
gpg> addkey
< select 10 to replace subkey BEEB >
< select 7 for brainpool p-384 >
< select 6m for six months >
< select y then y to confirm >
gpg> addkey
< select 11 to replace subkey BEEC >
< select A to toggle on the authenticate capability >
< select S to toggle off the sign capability >
< select Q to finish >
< select 7 for brainpool p-384 >
< select 6m for six months >
< select y then y to confirm >

Remove the expired subkeys.
gpg> key 1
gpg> key 2
gpg> key 3
gpg> delkey

Export private and public keys to prepare for backup.
$ gpgh --armor --export-secret-keys user@domain.tld > user@domain.tld-private-keys-2020-07-05
$ gpgh --armor --export user@domain.tld > user@domain.tld-public-keys-2020-07-05

Generate a new revocation certificate.
$ gpgh --gen-revoke user@domain.tld > user@domain.tld.gpg-revocation-certificate-2020-07-05

Encrypt the private keys, public keys, and revocation certificate in a symmetrically encrypted tarball and send to offsite.
$ tar cf ./user@domain.tld-keys-2020-07-05.tar user@domain.tld-*-2020-07-05
$ gpgh --symmetric --cipher-algo aes256 user@domain.tld-keys-2020-07-05.tar
$ rm user@domain.tld-keys-2020-07-05.tar
$ sendoffsite user@domain.tld-keys-2020-07-05.tar.gpg
$ sendoffsite user@domain.tld-public-keys-2020-07-05
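Before shipping the tarball offsite it’s worth a round-trip check that the symmetric encryption decrypts back to the original. A sketch (the --batch, --yes, --pinentry-mode loopback, and --passphrase flags are used here only so the example runs non-interactively; demo file names and passphrase are made up):

```shell
# Work in a scratch directory.
cd "$(mktemp -d)"
echo "backup payload" > demo.tar
# Encrypt symmetrically with AES-256, as in the backup step above.
gpg --batch --yes --pinentry-mode loopback --passphrase demo-secret \
    --symmetric --cipher-algo aes256 demo.tar
# Decrypt and compare against the original.
gpg --batch --yes --pinentry-mode loopback --passphrase demo-secret \
    --output roundtrip.tar --decrypt demo.tar.gpg
cmp demo.tar roundtrip.tar && echo OK
```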

Import new subkeys into Nitrokey, replacing existing subkeys
< plug in Nitrokey>

$ gpgh --expert --edit-key user@domain.tld
gpg> key 1
gpg> keytocard

< select 2 for encryption key >
< enter master key password >
< enter admin pin for nitrokey >
gpg> key 1
gpg> key 2
gpg> keytocard

< select 1 for signature key >
< enter master key password >
gpg> key 2
gpg> key 3
gpg> keytocard

< select 3 for authentication key >
< enter master key password >
gpg> save

(Note down the encryption key from the “list” output so you can re-initialise pass later.)

Kill the running GPG agents that might interfere with password caching.
$ gpgconf --kill gpg-agent
$ GNUPGHOME=$(pwd) gpgconf --kill gpg-agent

Confirm those sneaky buggers are gone.
$ ps aux | grep gpg

Migrate your pass store to the new set of keys. You’ll need to do this with both the old and new set of keys accessible so we’ll run this from our temporary directory with the expired sub-keys.

First cache the password of the private key.
$ echo "test message string" | gpgh --encrypt --armor --recipient user@domain.tld -o encrypted.txt
$ gpgh --decrypt --armor encrypted.txt

Confirm you can decrypt an existing pass key.
$ gpgh --decrypt ~/.password-store/some/key/user@domain.tld.gpg

Back up the pass directory.
$ cp -R ~/.password-store ~/.password-store_bak

Next, migrate the passwords using the encryption subkey we listed above.

Create and delete a fake password to confirm it’s working.
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass generate fake/password
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass edit fake/password
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass rm fake/password

Finally, update your local GPG configuration by importing the new public keys. Notice we’re using the normal gpg. You should see 3 new subkeys imported.
$ gpg --import user@domain.tld-public-keys-2020-07-05

Now you can remove the temporary directory you made after confirming you’ve backed up the encrypted backup and also published the public keys somewhere accessible.
$ rm -r $(pwd)
$ cd

More articles I’ve written on this topic:
* Using GPG to master your identity (Part 1)
* Configuring and integrating Nitrokey into your workflow

Wireguard + other VPNs managed by NetworkManager
13 February 2020

I’ve been using a mixture of Wireguard and OpenVPN devices in NetworkManager recently.  As mentioned in my prior post, I mark the Wireguard interface as unmanaged and bring up the interface manually with a shell script.  Most *.ovpn files have redirect-gateway def1, which sends all traffic, except the local LAN traffic, over the VPN.  This means that separate LANs you’ve created outside of the VPN will be inaccessible.

There are a couple ways to resolve this depending on how you have the VPN connection configured.  In my case, I have a *.ovpn file that I import into NetworkManager and the routes are included in the file.  It’s common for OpenVPN files to contain a “redirect-gateway def1” clause which causes all network traffic originating from the client to pass through the OpenVPN server.  (Side note: def1 adds the two routes 0.0.0.0/1 and 128.0.0.0/1, which take precedence over the default route without deleting it.)

To resolve this, I added a new line into my *.ovpn file to reference the existing Wireguard LAN I created.


You can also resolve this by adding a new “via” route to the kernel routing table.

Note — if you followed my guide exactly in the prior post then this route will already exist and no change will be needed.
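As a sketch of both approaches, assuming a hypothetical Wireguard LAN of 10.100.0.0/24 and a local gateway of 192.168.1.1 (substitute your own values):

```
# Added to the *.ovpn file: keep the Wireguard LAN reachable via the
# local gateway instead of routing it through the VPN tunnel.
route 10.100.0.0 255.255.255.0 net_gateway
```

The one-off kernel-route equivalent would be something like ip route add 10.100.0.0/24 via 192.168.1.1, again with your own subnet and gateway.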

7 June 2019

My plkt.io website is live and built with GatsbyJS and AWS Amplify.  I’m really excited about the possibilities here.  Head on over to read about my plans: plkt.io.

Using Amplify to launch your first Gatsby app
29 May 2019

I’m building the future of this website on plkt.io. I’m using React, Gatsby, and Amplify to build a single page application that’ll be performant, accessible, and modular.
This tutorial will show you how I got started from only a web development background of HTML/CSS/Javascript.

Resources and definitions

React: JavaScript library for building user interfaces using declarative, reusable components. Created by Facebook.
Gatsby: JavaScript framework for building static web apps. Created by thousands of community members.
Amplify: PaaS that coordinates between backend and frontend across various platforms. Created by Amazon.

Installing software

Get started by installing React followed by Gatsby and Amplify. I’m using Gentoo as my local dev environment, so I will use Portage to install Node.js, which provides the npm package manager. If you don’t have git installed, you will need that too.
# emerge --ask --verbose net-libs/nodejs dev-vcs/git
# npm install --global gatsby-cli @aws-amplify/cli

Create sample app

Once installed, we’ll use Gatsby to serve a basic page and then enable remote building and deployment using Amplify.
~ $ gatsby new hello-world
~ $ cd hello-world
~/hello-world $ gatsby develop

Now we have a new hello world single page application viewable at localhost:8000. Open the provided address and make sure you can access the web page. If you’re using Visual Studio Code remote development, then forward the port to your local machine.

Deploy to Amazon S3 with Amplify

Next we can configure Amplify as a publishing path. You can use the defaults for most options. Some of the configuration will require you to open the AWS Web Console to create IAM users.

~/hello-world $ amplify configure
~/hello-world $ amplify init

Now we can configure hosting and publish the sample app to an S3 bucket. Run these commands and select “DEV (S3 only with HTTP)” when prompted.
~/hello-world $ amplify add hosting
~/hello-world $ amplify publish

After about 2 – 3 minutes you should see a link to the S3 bucket with your web app deployed.

And that’s it — you now have used the modern JavaScript, API, Mobile stack to deploy a web app. I highly recommend you install VS Code and walk yourself through the tutorials available on the Gatsby and Amplify homepages.
Gatsby tutorial: https://www.gatsbyjs.org/tutorial/part-one/
Amplify tutorial: https://aws-amplify.github.io/docs/js/react

Immediate broken pipe when connecting via SSH on Gentoo under VMWare Fusion

Recently when cloning repositories through GitHub I was facing immediate broken pipe issues.

bnt@gentoo ~ $ ssh git@github.com
packet_write_wait: Connection to port 22: Broken pipe

The fix is to change the IP Quality of Service setting to “throughput” in your ~/.ssh/config.

Host *
    IPQoS throughput
This is a known issue and being tracked by a bug on the open-vm-tools GitHub repository [link]. I’m running VMWare Fusion 11.0.3 and open-vm-tools 10.3.10 on Gentoo with kernel 4.19.27 and ssh version 7.9_p1-r4.

Debugging ebuild failure on Gentoo for lxml
22 April 2019

While setting up a fresh installation of Gentoo to deploy to the cloud for remote dev work, I kept running into compilation errors with lxml.

The referenced snippet of code shown was the standard compilation line: "${@}" || die "${die_args[@]}", which means run the command with its associated arguments or quit and show the associated arguments.

I went into the build directory to debug further.

# cd /var/tmp/portage/dev-python/lxml-4.3.3/lxml-4.3.3-python2_7/work/lxml-4.3.3-python2_7
# i686-pc-linux-gnu-gcc -O2 -march=native -pipe -fno-strict-aliasing -fPIC -DCYTHON_CLINE_IN_TRACEBACK=0 -I/usr/include/libxml2 -Isrc -Isrc/lxml/includes -I/usr/include/python2.7 -c src/lxml/etree.c -o /var/tmp/portage/dev-python/lxml-4.3.3/work/lxml-4.3.3-python2_7/build/temp.linux-x86_64-2.7/src/lxml/etree.o -w

Which compiled without problems.

I emerged screen and then ran emerge --resume and managed to capture the error.

> Unknown pseudo op .str

I found a similar thread on the Gentoo forums and created a bigger swapfile as I was running out of memory.
# fallocate -l 1GiB /swapfile
# chmod 600 !$
# mkswap !$
# swapon !$
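You can sanity-check that fallocate produced a file of the requested size with stat (shown here with a small 1 MiB demo file rather than the full swapfile):

```shell
# Work in a scratch directory.
cd "$(mktemp -d)"
# Allocate a 1 MiB file and print its size in bytes.
fallocate -l 1MiB demo.img
stat -c %s demo.img   # → 1048576
```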

This solved the compilation issue.

Git up and going
17 April 2019

Here’s a quick primer I wrote for myself to get git configured on a new computer. All of the options alter your global settings.

Set yourself as the global author

$ git config --global user.name "Antony Jepson"
$ git config --global user.email email@domain.tld

Set your default editor

$ git config --global core.editor vim

Set some abbreviations

$ git config --global alias.st status
$ git config --global alias.ci commit
$ git config --global alias.cim "commit -m"
$ git config --global alias.br branch
$ git config --global alias.pu push
$ git config --global alias.pl pull
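The quotes around "commit -m" matter: without them git stores only "commit" as the alias and the -m is lost. A throwaway-repository check (repo name and commit message are arbitrary):

```shell
# Work in a scratch directory with a fresh repository.
cd "$(mktemp -d)"
git init -q alias-demo
cd alias-demo
git config user.email demo@example.com
git config user.name "Demo User"
# Local (non --global) alias so the demo doesn't touch real settings.
git config alias.cim 'commit -m'
echo hello > file.txt
git add file.txt
git cim "first commit"
git log --format=%s   # → first commit
```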

Moving a user cron job to a systemd timer
23 March 2019

I have been using cron for well over a decade to schedule tasks to run when I’m not logged in. However, maintaining logs for these jobs has been difficult and, after using systemd for a while, I think I should switch to using a timer.

First, take a look at your current cron jobs.

$ crontab -l
0 * * * * offlineimap

This syncs my email every hour on the hour. For an equivalent systemd implementation we need to write a timer and a service. The timer will call the service at the scheduled time.

# $HOME/.config/systemd/user/offlineimap.service
[Unit]
Description=Local offlineimap service

[Service]
Type=oneshot
ExecStart=/usr/bin/offlineimap

# $HOME/.config/systemd/user/offlineimap.service.d/01-env.conf
# There is no shell expansion here.
# Don't surround the values with quotes.
[Service]
Environment=SOME_VARIABLE=value

# $HOME/.config/systemd/user/offlineimap.timer
[Unit]
Description=Sync mail using IMAP every hour

[Timer]
OnCalendar=hourly

[Install]
WantedBy=timers.target

Now we can load these new units with “systemctl --user daemon-reload” and start them with “systemctl --user enable --now offlineimap.timer”.

Further reading: UNIX and Linux System Administration Handbook (5th Edition) [Amazon]

Built with Wordpress and Vim
© 2008 to 2021 Jiff Slater