Jiff Slater

Migrating to a static site
12 July 2021

I’ve long authored this blog in WordPress because I’ve found the interface homely and easy to maintain. However, with the advent of cheap single-board computers like the Raspberry Pi, I wanted to move to a static site served from my own dynamic IP address behind a CDN, eschewing the need for expensive hosting solutions. The end goal was to move to a self-rolled knowledge-base website where I could store posts in a filesystem hierarchy rather than in a database.

Here’s how I migrated.

Proof of concept post

Generating a test blog post

I didn’t want to write pure HTML, as I felt it led to an unstructured format that couldn’t be parsed later or used for heavy cross-linking, so I explored some other options for authoring the files.


First I considered reStructuredText. This is a markup format that was originally created to write Python docs and later, due to its flexibility, became popular for authoring other types of documents. Here’s what a basic *.rst file looks like.

Welcome to my blog
==================

Here's a *few* pointers to get you started.

- My main blog: `plkt.io <https://plkt.io>`_.
- My preferred search engine: `DuckDuckGo <https://ddg.co>`_.

It can be converted into HTML with pandoc.

$ pandoc --tab-stop 2 -f rst -t html sample.rst

<h1 id="welcome-to-my-blog">Welcome to my blog</h1>
<p>Here's a <em>few</em> pointers to get you started.</p>
<ul>
<li>My main blog: <a href="https://plkt.io">plkt.io</a>.</li>
<li>My preferred search engine: <a href="https://ddg.co">DuckDuckGo</a>.</li>
</ul>

A very clear and straightforward way to write blog posts in the terminal, but I felt I wasn’t gaining much over writing directly in the WordPress text editor because there wasn’t any semantic markup.


Next, I looked at DocBook, which is heavily used as an authoring format for books and technical documents. As that is the primary kind of content on this blog, it seemed worth exploring. At first, I saw many old examples online of how to author a simple document and was immediately horrified by the reference to the Document Type Definition (DTD) that harks back to the old HTML+XML days.

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC '-//OASIS//DTD DocBook XML V4.5//EN'
'http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd'>
<article lang="en">
<title>Sample article</title>
<para>This is a very short article.</para>
</article>

Fortunately, as of the DocBook 5.0 standard, this arcane incantation is no longer required, and what we have today is something like this; notice the xmlns attribute on the article element. As a side note, I found out that the Linux kernel documentation has started migrating away from DocBook to Sphinx + reStructuredText. Read more about this here.

<?xml version="1.0" encoding="utf-8"?>
<article xmlns="http://docbook.org/ns/docbook" version="5.0" xml:lang="en">
<title>Sample article</title>
<para>This is a very short article.</para>
</article>

So, after spending about 30 minutes with the documentation, I managed to rewrite the *.rst example above into the following.

<?xml version="1.0" encoding="utf-8"?>
<article xmlns="http://docbook.org/ns/docbook"
         xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en">
<title>Welcome to my blog</title>
<para>Here's a <emphasis>few</emphasis> pointers to get you started.</para>
<itemizedlist mark='dash'>
<listitem><para>My main blog: <link xlink:href="https://plkt.io">plkt.io</link>.</para></listitem>
<listitem><para>My preferred search engine: <link xlink:href="https://ddg.co">DuckDuckGo</link>.</para></listitem>
</itemizedlist>
</article>

Quite a bit more verbose and a bit of a pain to type in Vim, although omni-completion (C-X C-O) helped a lot in closing tags. I can understand a bit better why reStructuredText, although not as structured as this, reduces the initial hump to get started and so will likely result in more up-to-date documentation.

Selecting a winner

From the explorations above, I settled on reStructuredText as the markup format. However, before fully embracing it, I needed to land on the structure of the site. I had decided that the content itself would serve as the structure rather than living within a structure. Put another way, one would read my content to determine the taxonomy rather than have the content be defined by the taxonomy. This meant that hyperlinking would be very important (and very manual), so I would need to make a stronger effort to keep related documents connected to each other.


I envisioned my static site eventually converging to something resembling a MediaWiki site.

The structure would be like this:

This seemed like a good first pass for the organisation and something that I could change in the future without too much effort.

Creating the base site

I created the base directory structure and started defining the metadata for the site. As I was using pandoc, this would be split across three files: the template file for the HTML generator, the metadata file to contain variables across the entire site, and the YAML metadata at the top of each *.rst file that contained file specific references.

As pandoc doesn’t support YAML headers in *.rst files (yet, see here), I’m storing all my YAML metadata in a side file called *.yaml for each post. While it’s not ideal, it’s simple and maintainable in the makefile.
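As an illustration, a side file for a post might look something like this (the field names are made up; use whatever your template references):

```yaml
# sample-post.yaml (hypothetical metadata side file)
title: Migrating to a static site
date: 12 July 2021
description: Moving this blog from WordPress to a pandoc-generated static site.
```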


It took me a couple of hours to walk through the documentation and build this makefile, but I feel the effort will pay off as the project becomes more complicated. Here’s what I came up with as a basis.

# Makefile for plkt.io

PANDOC := pandoc
FIND := find
PANDOCOPTS := --tab-stop 2 -s --template ./template.txt -f rst -t html -M lang=en -M css=./style.css -B header.html -A footer.html

# Note that $(shell <>) replaces newlines with spaces.

DIR := ./
src := $(shell $(FIND) $(DIR) -name "*.rst") # TODO: Do this using make rules.
targets_html := $(src:.rst=.html)

%.html: %.rst
	@echo "Compiling" $<
	@$(PANDOC) $(PANDOCOPTS) --metadata-file=$(basename $<).yaml $< > $@

all: build

build: $(targets_html)

# Not yet implemented. Supposed to build the site and tar it up for distribution.
dist: clean build
	mkdir -p plkt-$(VERSION)
	for html in $(targets_html); do \
		mv $$html plkt-$(VERSION)/; \
	done
	tar -cf plkt-$(VERSION).tar plkt-$(VERSION)
	gzip plkt-$(VERSION).tar
	rm -rf plkt-$(VERSION)

clean:
	@for html in $(targets_html); do \
		echo "Cleaning" $$html; \
		rm -f $$html; \
	done

.PHONY: all build dist clean
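With that in place, building the site is a single command; the file name below is illustrative:

```shell
$ make
Compiling ./posts/sample.rst
```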

Headers and footers

Each file would need a header and footer to maintain some visual consistency across the site. To do this, I referenced the header with the -B flag and the footer with the -A flag.
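As a sketch, the two files can be as small as this (the nav and footer contents are placeholders):

```html
<!-- header.html: inserted before the generated body via -B -->
<nav><a href="/">Home</a> <a href="/about.html">About</a></nav>

<!-- footer.html: inserted after the generated body via -A -->
<footer>© Jiff Slater</footer>
```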

Setting up Apache

For the proof of concept, I opted not to use containerisation and instead just moved the *.html and *.css files into the /var/www/html directory. Viewing the website at http://localhost:80 worked admirably.

Migrating the post history

Exporting the posts from WordPress was a bit tricky. I first tried to use pandoc’s conversion functionality, but then I realised I’d have to round-trip the content: download the HTML, convert it to reStructuredText, and then convert it back to HTML.

pandoc -f html -t rst https://plkt.io/2019/11/30/returning-to-wordpress/

I then landed upon a better method. I modified the template for my website by removing the header, footer, and post listing, visited each page individually, and saved them using Firefox. This took about 15 minutes (probably less time than automating it). Then I shoved the posts into an “archived posts” category that I would move bit by bit into the reStructuredText format.


This exercise taught me a lot about data storage formats. What is the right way to store my post history? Should it be a format that separates the presentation from the data, or should each post stand the test of time as its own standalone file? I’m starting to lean towards the latter. I think it’s possible to have the best of both, if scoped properly, by having an “archive” section of your site. So go ahead and export that page and leave it up for eternity. Normal visitors can view your normal site with the latest formatting, but patrons of antiquity can learn more about how the cake is made.

This will be the last post written using WordPress. The next post on this site will be generated using pandoc and Vim :).

Configuring Wireguard on the Pinebook Pro in Manjaro Linux
1 February 2020

I recently (Twitter) ordered and received a Pinebook Pro and wanted to share how I got Wireguard working. Wireguard is a VPN that uses modern cryptography while still being easy to configure for various environments. Unfortunately, even though the kernel module has been merged upstream, Manjaro Linux still requires a custom module to be built. Because the kernel sources aren’t currently included with the distribution, installing the wireguard-dkms package will fail. This post shows how I got the userspace wireguard-go program to work in lieu of the kernel module.

Before I continue, if you’re using the default Debian install that came with the device, you should be able to follow this tutorial which uses Cloudflare’s boringtun Rust implementation.  I couldn’t get this tutorial to work so here is an alternative that uses the official Wireguard Go language reference implementation.

Installing the compiler

The Go compiler should be available in all distributions so install it before continuing.  On Manjaro Linux you can do so by typing `sudo pamac install go`.

Cloning the repository

You’ll need to clone the source code from the Wireguard repo: `git clone https://git.zx2c4.com/wireguard-go`.

Building the tool

Once cloning has completed, enter the directory and issue `make`. After it completes, you should have a ./wireguard-go executable in the same directory.

Launching the tool

Open two terminal windows. In the first, issue `sudo LOG_LEVEL=debug ./wireguard-go -f wg0`. This will launch the userspace implementation and create an interface called wg0, which you can see by typing `ip a`.

Configuring and bringing up the Wireguard interface

Bringing up the interface is almost as simple as presented in the docs, but because we’re running Manjaro Linux we’ll need to make sure it works well with NetworkManager. The first step is to mark the interface, along with any similarly named interfaces, as unmanaged. Create a file along the following lines (the exact file name under /etc/NetworkManager/conf.d/ is up to you) and restart NetworkManager.

# /etc/NetworkManager/conf.d/unmanaged.conf
[keyfile]
unmanaged-devices=interface-name:wg*

# systemctl restart NetworkManager

In a new terminal window, issue the following commands, adjusting for your configuration. Before continuing, you’ll also need a valid /etc/wireguard/wg0.conf that uses `wg` syntax, not wg-quick syntax; check the manpage for wg to confirm. Note that CLIENT_IP_ADDRESS and PEER_IP_ADDRESS_OR_RANGE refer to addresses within the Wireguard interface’s address space.

# ip address add dev wg0 CLIENT_IP_ADDRESS peer PEER_IP_ADDRESS_OR_RANGE
# wg setconf wg0 /etc/wireguard/wg0.conf
# ip link set mtu 1420 up dev wg0
# ip route add PEER_IP_ADDRESS_OR_RANGE dev wg0
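For reference, a minimal wg0.conf in `wg` syntax might look like this (the keys, port, and endpoint are placeholders; note there is no Address or DNS key, as those belong to wg-quick):

```ini
[Interface]
PrivateKey = CLIENT_PRIVATE_KEY
ListenPort = 51820

[Peer]
PublicKey = SERVER_PUBLIC_KEY
AllowedIPs = PEER_IP_ADDRESS_OR_RANGE
Endpoint = SERVER_HOST:51820
PersistentKeepalive = 25
```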

Finally, as per Thaller’s post on the GNOME blogs, if you didn’t issue the last command above you’ll need to let NetworkManager know about the new route. List your current connections with `nmcli conn show` and copy the UUID for your current connection below. Replace GATEWAY and WIREGUARD_ENDPOINT with the actual IP addresses.

nmcli connection modify UUID +ipv4.routes "WIREGUARD_ENDPOINT/32 GATEWAY"

This should be sufficient to set up the VPN.  You’ll see the handshake initiated and completed in the other terminal window.

Let me know if this worked for you. DNS resolution is still problematic because NetworkManager doesn’t adjust resolvconf to accommodate the new route. If you manage to get that working correctly, please let me know on Twitter.

VLAN Primer
21 February 2019

I recently picked up a simple TP-Link switch that supports 802.1Q, also known as Virtual LANs (VLANs).

Here’s a quick primer I wrote to guide myself when configuring my network. All diagrams were created with Graphviz.
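If you want to render the diagrams yourself, each graph below can be saved to a file and fed to the dot tool (assuming Graphviz is installed; the file name is illustrative):

```shell
$ dot -Tsvg network.dot -o network.svg
```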

Consider the simplest home network possible. You have a combination router/modem that connects your LAN to the WAN. We’ll hide the modem in this diagram as it acts on a lower layer than the router.

graph network {
node [shape=box, style=filled];

a [label="WAN"]
b [label="Router"]
c [label="LAN"]

a -- b -- c
}

Let’s add some more details. The router gets a mostly static IP address from the ISP and also provides DHCP services to the clients in the LAN.

graph network {
node [shape=box, style=filled];
a [label="WAN"]
b [shape=record, label="{ Firewall | { Router } | Switch }"]
c [label="LAN"]

a -- b [headlabel=""]
b -- c [taillabel=""]
}


Now, we’ll flesh out the LAN to show some clients. Dotted lines show wireless clients.

graph network {
node [shape=box, style=filled];

a [label="WAN"]
b [shape=record, label="{ Firewall | { Router } | Switch }"]
c [label="Playstation"]
d [label="Kindle Fire"]
e [label="iPhone"]
f [label="Nintendo Switch"]
g [label="PC"]

a -- b
b -- {d, e, f} [style=dotted]
b -- {c, g}
}

This is a typical home network. Now let’s introduce a VLAN into the network. VLANs operate at layer 2 and allow a single physical network to carry multiple logically isolated networks. Because most home modem/routers don’t support VLANs, we introduce another device that sits between the modem and the LAN and serves as the router.

In this VLAN, we want wireless traffic to be segmented from wired traffic. Usually this could be easily configured by issuing separate subnets for the wired and wireless clients but we want to have total isolation between the two. Each VLAN has its own broadcast domain.
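On a Linux-based router, this maps onto one tagged sub-interface per VLAN, each with an address in its own broadcast domain. A sketch, with made-up interface names and subnets:

```shell
# ip link add link eth0 name eth0.10 type vlan id 10
# ip link add link eth0 name eth0.20 type vlan id 20
# ip address add 192.168.10.1/24 dev eth0.10
# ip address add 192.168.20.1/24 dev eth0.20
```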

Here, we introduce a separate wireless switch and tag traffic coming from the switches. In this configuration the router acts as the default gateway for both subnets and can see all VLAN traffic.

graph network {
node [shape=box, style=filled];

a [label="WAN"]
h [label="Modem"]
b [shape=record, label="{ Firewall | { Router } }"]
c [label="Playstation"]
d [label="Kindle Fire"]
e [label="iPhone"]
f [label="Nintendo Switch"]
g [label="PC"]
i [label="Switch (wired)"]
j [label="Switch (wireless)"]

a -- h
h -- b
b -- i [label="VLAN 10"]
b -- j [label="VLAN 20"]
i -- {c, g}
j -- {d, e, f} [style=dotted]
}

All images were generated with thanks to https://dreampuf.github.io/GraphvizOnline

1 April 2018
I’ve been wanting to play around with some graphics work for a while now, and although I’ve used Blender for a few renders, I’ve never sat down and set up a programming environment on my computer. What follows is a short tutorial on getting started with OpenGL on Windows, but still using the Linux conventions that I’m familiar with.

The demoscene is something that’s fascinated me for years. If you haven’t heard of it, it’s the art of making a computer program (usually size constrained) that produces outstanding visual effects synced with music. There’s a wide variety of target platforms including Windows, Linux, MS-DOS, and even the old Amiga!

I’m surprised to see that there are still regular competitions being held around the world.

Here are some of my favourites:

  • fr-041: debris (YouTube): Very impressive cityscape
  • luma – mercury (YouTube): Stunning light effects
  • H – Immersion – Ctrl-Alt-Test (YouTube): Very believable underwater adventure
Built with WordPress and Vim
© 2008 to 2021 Jiff Slater