General knowledge and concepts for high-level discussions
22 August 2020

When discussing complicated topics it can be helpful to have a unified pool of knowledge.  Below is a list of things I try to keep in my conceptual model of the world so I can have fruitful discussions with my peers.

Mathematics

Sciences

Notes from setting up a disconnected Debian Testing server

I recently set up a new at-home server that isn’t connected to the Internet: it’s only reachable on the local network, with no connection to the outside world. It’s part of my longer-term initiative to have a disconnected household.

Here are some notes I took while setting it up.

Why?

I have servers scattered across various providers and felt a general anxiety about security. While I do feel confident using online storage solutions like AWS S3 Glacier class + GPG encryption, I feel that instance-based compute services would likely cause me a headache in the future. Storing everything locally would be faster and reduce costs as well.

Updates

After a fresh installation of Debian Testing, I briefly plugged in the Ethernet cable, set my sources.list to point to ftp.uk.debian.org, and downloaded the latest packages and security updates, including vim, sudo, and screen.
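
For reference, the sources.list I used looked roughly like this (the exact security suite name for Testing is my assumption):

deb http://ftp.uk.debian.org/debian testing main
deb http://security.debian.org/debian-security testing-security main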

For future updates, I download the weekly testing DVD images from Debian’s servers and transfer them over SSH. I’m still working on optimising this.
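
A rough sketch of how a transferred image can be registered with apt (the paths are hypothetical, and apt-cdrom expects the image mounted at its configured mount point):

$ scp debian-testing-amd64-DVD-1.iso user@server:/srv/isos/
< on the server >
$ sudo mount -o loop /srv/isos/debian-testing-amd64-DVD-1.iso /media/cdrom
$ sudo apt-cdrom -m add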

Each weekly testing DVD is archived in storage.

Screen suspend

As this is a laptop, the screen didn’t suspend automatically when it wasn’t in use. I appended consoleblank=60 to my GRUB_CMDLINE_LINUX_DEFAULT entry.
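
A minimal sketch of the change, assuming the stock Debian GRUB configuration:

/etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="quiet consoleblank=60"

$ sudo update-grub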

Containers

I successfully migrated my containers from Docker to qemu and systemd-container based deployments; I’ll detail more about this in the future. For each deployment I check that it deploys without problems on another physical machine. A rough sketch of the systemd-container workflow follows the list below.

  • GitLab
  • OwnCloud
  • MediaWiki
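
Here’s a minimal systemd-nspawn sketch of the pattern (the machine name, suite, and mirror are assumptions, not my exact setup): debootstrap a filesystem into /var/lib/machines, drop into a chroot-like shell to configure it, then boot it as a container.

$ sudo debootstrap testing /var/lib/machines/mediawiki http://ftp.uk.debian.org/debian
$ sudo systemd-nspawn -D /var/lib/machines/mediawiki
$ sudo machinectl start mediawiki
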
Rotating expired Nitrokey subkeys used for password store
5 July 2020

I’m currently managing my saved passwords with a mixture of pass and Nitrokey. One of my subkeys expired so I couldn’t update my passwords. Here’s how I generated new subkeys (rotated them) with a new expiration date.

You’ll need access to your master key. Most tutorials online have you generate the key locally, nerf it, and upload it into the Nitrokey, so you’ll need to find the original primary signing key before moving forward.

Once you have it in hand, extract the contents to a temporary directory and let’s begin. Don’t forget to set the directory permissions appropriately (chmod 700 ./). We’ll use a gpgh alias to run gpg against this temporary directory.

$ alias gpgh="gpg --homedir $(pwd)"

$ gpgh --import user@domain.tld.gpg-private-keys
< enter password >

Trust the keys.
$ gpgh --edit-key user@domain.tld
gpg> trust

< select 5 for maximum trust >
< select y to confirm >
< exit to confirm >

Modify expiry date of primary key.
$ gpgh --expert --edit-key user@domain.tld
gpg> expire

< select and confirm a new timeframe >

List the current keys to see which subkeys have expired.
gpg> list
sec brainpoolP384r1/DEADBEEFDEADBEE1
created: 2019-XX-XX expires: 2021-XX-XX usage: SC
trust: ultimate validity: ultimate
ssb brainpoolP384r1/DEADBEEFDEADBEEA
created: 2019-XX-XX expired: 2020-XX-XX usage: E
ssb brainpoolP384r1/DEADBEEFDEADBEEB
created: 2019-XX-XX expired: 2020-XX-XX usage: S
ssb brainpoolP384r1/DEADBEEFDEADBEEC
created: 2019-XX-XX expired: 2020-XX-XX usage: A
[ultimate] (1). User Name

Generate new subkeys
gpg> addkey
< select 12 to replace subkey BEEA >
< select 7 for brainpool p-384 >
< select 6m for six months >
< select y then y to confirm >
gpg> addkey
< select 10 to replace subkey BEEB >
< select 7 for brainpool p-384 >
< select 6m for six months >
< select y then y to confirm >
gpg> addkey
< select 11 to replace subkey BEEC >
< select A to toggle on the authenticate capability >
< select S to toggle off the sign capability >
< select Q to finish >
< select 7 for brainpool p-384 >
< select 6m for six months >
< select y then y to confirm >

Remove the expired subkeys
gpg> key 1
gpg> key 2
gpg> key 3
gpg> delkey

Export private and public keys to prepare for backup.
$ gpgh --armor --export-secret-keys user@domain.tld > user@domain.tld-private-keys-2020-07-05
$ gpgh --armor --export user@domain.tld > user@domain.tld-public-keys-2020-07-05

Generate a new revocation certificate. (The filename below keeps the same pattern as the key exports so the tar glob in the next step picks it up.)
$ gpgh --gen-revoke user@domain.tld > user@domain.tld-revocation-certificate-2020-07-05

Encrypt the private keys, public keys, and revocation certificate in a symmetrically encrypted tarball and send to offsite.
$ tar cf ./user@domain.tld-keys-2020-07-05.tar user@domain.tld-*-2020-07-05
$ gpgh --symmetric --cipher-algo aes256 user@domain.tld-keys-2020-07-05.tar
$ rm user@domain.tld-keys-2020-07-05.tar
$ sendoffsite user@domain.tld-keys-2020-07-05.tar.gpg
$ sendoffsite user@domain.tld-public-keys-2020-07-05

Import new subkeys into Nitrokey, replacing existing subkeys
< plug in Nitrokey >

$ gpgh --expert --edit-key user@domain.tld
gpg> key 1
gpg> keytocard

< select 2 for encryption key >
< enter master key password >
< enter admin pin for nitrokey >
gpg> key 1
gpg> key 2
gpg> keytocard

< select 1 for signature key >
< enter master key password >
gpg> key 2
gpg> key 3
gpg> keytocard

< select 3 for authentication key >
< enter master key password >
gpg> save

(Note down the encryption key from the “list” output so you can re-initialise pass later.)

Kill the running GPG agents that might interfere with password caching.
$ gpgconf --kill gpg-agent
$ GNUPGHOME=$(pwd) gpgconf --kill gpg-agent

Confirm those sneaky buggers are gone.
$ ps aux | grep gpg

Migrate your pass store to the new set of keys. You’ll need both the old and new sets of keys accessible, so we’ll run this from our temporary directory containing the expired subkeys.

First, cache the passphrase of the private key.
$ echo "test message string" | gpgh --encrypt --armor --recipient user@domain.tld -o encrypted.txt
$ gpgh --decrypt --armor encrypted.txt

Confirm you can decrypt an existing pass key.
$ gpgh --decrypt ~/.password-store/some/key/user@domain.tld.gpg

Backup pass directory
$ cp -R ~/.password-store ~/.password-store_bak

Next, migrate the passwords using the new encryption subkey ID you noted from the “list” output (the ID below is a placeholder).
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass init DEADBEEFDEADBEED

Create and delete a fake password to confirm it’s working.
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass generate fake/password
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass edit fake/password
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass rm fake/password

Finally, update your local GPG configuration by importing the new public keys. Notice we’re using the normal gpg. You should see 3 new subkeys imported.
$ gpg --import user@domain.tld-public-keys-2020-07-05

After confirming that the encrypted backup is safely stored offsite and the public keys are published somewhere accessible, you can remove the temporary directory.
$ rm -r $(pwd)
$ cd

More articles I’ve written on this topic:
* Using GPG to master your identity (Part 1)
* Configuring and integrating Nitrokey into your workflow

Alarms as timers
13 May 2020

07:59.

It was an unusually overcast morning and the birds seemed to be chirping reluctantly, upset at the sun apparently missing its promised sunrise time.  While I was running, my alarm went off.  I realised that I was way ahead of my usual self: I had prepped breakfast, almost finished my run, and solved a couple of long-standing issues at work, all while watching the clouds mope across the sky.

This made me realise: having the alarm wake you up at a certain time is all wrong.  Rather, it should be viewed as a timer by which you are ready to attack your day.  Let’s be honest with ourselves: who jolts out of bed with their hair already combed, coffee half drunk, and laptop furiously churning through the results streaming in from that advanced SQL query they wrote 10 minutes ago?

Next time, I’ll try to have finished my run and already showered and dressed before the alarm goes off.

 

My principles of product management
4 March 2020

[Draft post]

I’ve been in product management for a while and over the course of the past 8 years I’ve learned a lot about the difficulties that aren’t apparent in the popular posts promoting product management as a career choice.

Namely,
– that the ability to connect with other humans is under-rated…
– that the development team’s well-being is just as important as your own well-being…
– and, that most customers give you bad, and non-actionable feedback…

So from these realisations and my experience, here are my 5 principles of product management.

1) Focus on the customer problem, not the solution.
Too often I see peers getting enthralled by some solution that seems to address every customer need. My first question to these budding PMs is: have you defined the customer problem? The second: how did you define it? More often than not, the answers are “yes” and “gut feel”.

While I do champion gut feel for making some difficult decisions, I don’t think it should drive critical ones. Rather, ground yourself in customer data and customer feedback when defining the problem, and only then move on to the solution.

2) A/B testing is validation not truth.
Often I find that junior Product Managers want to focus on testing, testing, testing. From experience, the more important portion is crafting a meaningful hypothesis to test rather than getting a metric to be greater than another.

More often than not, if an experiment wins by a wide margin, something is not right. Give credit to the organisation you joined: most of the low-hanging fruit has already been eaten. If you have a huge win, it’s highly probable that it’s a fluke: you measured wrong, your hypothesis doesn’t actually address the customer problem, or it was a seasonal occurrence. I encourage you to take a deep look at the data across multiple facets and see if that huge win (which, by the way, gets you plenty of kudos) is actually going to improve your product.

3) Failure is not optional.
As a product manager, you should be constantly questioning your own opinions and deductions on how the world works. Your experience, however great (or limited), doesn’t necessarily match how your customers behave. The only way to craft a winning path is to avoid the obstacles along the way, and that means making mistakes fast and changing direction quickly.

4) You are the blocker
I’ve found that often the PM is the blocker to progress. Why? Because they don’t communicate their vision well enough. Tell the devs exactly what you want — and what you don’t. Work with them to a great compromise. Coordinate with design on your thoughts — and don’t rely on them to eventually iterate into what you want. Check your pulse with user research and quickly incorporate their feedback. Communicate with other product managers what you plan to do so they can adjust as required.

5) Micro pivots are as effective as major movement
Often you’ll instrument something that shows one of your earlier convictions was wrong, or even dead wrong. It’s OK to make a small change that fixes that without overhauling your main product. Not everything needs to be a huge change with tens of other teams involved. Make a judgement call, commit to the changes you think are necessary to improve the product, and be prepared to turn off that feature flag if it doesn’t work and keep it if it does.

These are just a few of the principles that guide me when developing a product. Disagree? Agree? Reach out to me on Twitter @plktio or by email (see my contact info) and let me know what you think. I’ll publish the best comments on my blog next week!

Pinebook Pro Review
13 February 2020

Short summary: if you’re on the fence about buying the Pinebook Pro as a supplementary laptop for short trips where extreme performance is not necessary, you’d be hard pressed to find a better option for $200.

I’ve been using the Pinebook Pro regularly as my daily driver for the past couple of weeks and wanted to note down some thoughts. But first, specs of the device:

  • [Compute] Rockchip RK3399 SOC with Mali T860 MP4 GPU.
    • Rockchip contains a dual-core Cortex-A72 and quad-core Cortex-A53 in big.LITTLE configuration.
    • Surprisingly this SOC supports hardware virtualisation.
    • It also supports AArch32/AArch64 with backwards compatibility with Armv7.
    • The RK3399 can handle H.264/H.265/VP9 up to 4Kx2K@60fps which is pretty incredible for such a low power chip.
    • Finally, the embedded GPU supports DX11.1 and OpenGL ES 3.1.
  • [Memory] 4GB Low Power DDR4 RAM.
  • [Display] 14.1” 1080p 60Hz IPS display panel
  • [Power] 10,000mAh LiPo battery with USB-C 15W power delivery support and additional power port (5V 3A).
  • [Storage] 64GB eMMC 5.0, bootable microSD card slot.
  • [Connectivity] WiFi 802.11ac and BT 5.0.
  • [Audio] Stereo speakers, headphone jack, and single microphone.
  • [Camera] Single 1080p/2MP front facing camera in display.
  • [Input] Trackpad and ANSI keyboard.
  • [Boot] 128MB SPI boot flash
  • [I/O] 1 USB type C host controller with DisplayPort 1.2 and Power Delivery; 1 USB Type-C port; 1 USB 3.0 port; 1 USB 2.0 port.

Keyboard

The keyboard is of average quality. I’m using the ANSI version. The keys feel mushy on the way down but have quite a bit of spring back, allowing you to type reasonably fast. In a typing test between this device and my MacBook Pro 2018, I typed about 5 to 10 WPM faster on average on the Pinebook Pro. The position of the trackpad means you’ll sometimes inadvertently brush it and trigger mouse movement, which can leave your words scrambled. Turning off the trackpad when an external mouse is connected helps resolve this issue. There’s no backlighting on the keys.

Unlike the MacBook Pro, I didn’t experience any issues with repeating keys unless I had the repeat delay set low and the repeat rate set high.

Trackpad

The trackpad is a bit small given the available space on the frame. Precision is poor unless you’re making long swipes across the trackpad, and small adjustments are difficult to make, so I’d recommend setting the sensitivity low. I expect iterative updates to the trackpad firmware will improve the readings.

Build quality of the case

The PBP comes in a hard metal frame. It’s cool to the touch when the device is off and has a beautiful matte black colour. I have no concerns throwing this device into my backpack or onto a desk, as the frame seems very capable of protecting the Rockchip innards.

If I had a choice of this frame and a durable plastic one, I’d choose the plastic one for an even lighter laptop.

Performance

The big question about this laptop: performance. I found that I could comfortably watch 720p videos on YouTube and 480p streams on Twitch with live chat visible. The machine also seemed capable of running two streams side by side with occasional but manageable stuttering on each.

By default, Firefox’s performance settings use 8 threads, which I think is a bit high considering there are only two big CPUs; switching between 2 and 8 doesn’t seem to change the performance. I use a mixture of uBlock Origin and uMatrix to reduce the number of scripts running in each tab. Again, this only seems to affect the initial page-load speed; after the webapp is running I didn’t notice a performance difference.

I haven’t tried running any games on this device and don’t plan to – that’s what my Nintendo Switch is for!

Finally, I noticed some high pitched whining when the device was under load. Usually opening a web browser causes this whine. It’s pretty annoying and something that I hope can be fixed in due time.

Audio

The audio quality is passable and has little distortion across the volume range. I would say it’s good enough to understand a movie but not enough to enjoy a movie. Stick to headphones!

Overall

I’d recommend the Pinebook Pro as a great travel laptop if you primarily work in DevOps or live in the command line. It doesn’t grind to a halt under load – it simply slows down and I think you’ll get used to the reduced speed rather quickly.

I think this laptop’s ideal purpose is to serve as a realistic representation of what 99% of your users experience when they use your webapp. Keep it on your desk as a second laptop for testing performance improvements you’re making.

As a representation of the state of Linux on ARM, you can’t go wrong with this laptop.  Its battery life is a solid 10 to 12 hours.  Suspend isn’t as power-efficient as on the MacBook, so you’ll be shutting this machine down when you’re not using it.

For the hackers and tinkerers inside all of us, you’d be surprised at how easily you can get up and running.  Just download an image, flash it to a USB stick or SD card, insert it into the machine, and boot up.  Run through a 5 minute installation and you’ll have a fresh system very quickly.  However, be warned that if you have a more esoteric setup (and I mean that from a Linux-on-ARM perspective), such as LUKS on LVM or custom peripherals you need to get working, you’ll spend a few days wrapping your head around compiling a custom kernel, building the initial ramdisk with all the required modules, and dealing with the quirks (i.e. the separate bootloader partition before the EFI partition).

The PBP wiki has a very thorough guide on the internals of the machine, and I think with enough time you can enable any scenario on this device.  The forums are teeming with information, and the space has quite a few subject-matter experts who can give you tips on using the machine.

Buy this machine.

Wireguard + other VPNs managed by NetworkManager

I’ve been using a mixture of Wireguard and OpenVPN devices in NetworkManager recently.  As mentioned in my prior post, I mark the Wireguard interface as unmanaged and bring up the interface manually with a shell script.  Most *.ovpn files contain a “redirect-gateway def1” directive, which sends all traffic, except the local LAN traffic, over the VPN.  This means that separate LANs you’ve created outside of the VPN will be inaccessible.

There are a couple of ways to resolve this, depending on how you have the VPN connection configured.  In my case, I have a *.ovpn file that I import into NetworkManager, with the routes included in the file.  (Side note: def1 uses 0.0.0.0/1 and 128.0.0.0/1, which override the default route without replacing it.)

To resolve this, I added a new line into my *.ovpn file to reference the existing Wireguard LAN I created.

route WIREGUARD_SUBNET SUBNET_MASK net_gateway
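
For example, if the Wireguard LAN were 10.100.0.0/24 (a hypothetical subnet), the line would read:

route 10.100.0.0 255.255.255.0 net_gateway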

You can also resolve this by adding a new “via” route to the kernel routing table.
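
One plausible form of that route, using the same placeholder names as above plus a WIREGUARD_PEER_IP for the far end of the tunnel:

# ip route add WIREGUARD_SUBNET/PREFIX via WIREGUARD_PEER_IP dev wg0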

Note — if you followed my guide exactly in the prior post then this route will already exist and no change will be needed.

Configuring Wireguard on the Pinebook Pro in Manjaro Linux
1 February 2020

I recently ordered and received a Pinebook Pro and wanted to share how I got Wireguard working. Wireguard is a VPN that uses modern cryptography while still being easy to configure for various environments.  Unfortunately, even though the kernel module has been merged upstream, Manjaro Linux still requires a custom module to be built.  Because the kernel sources aren’t currently included with the distribution, installing the wireguard-dkms package will fail.  This post shows how I got the userspace wireguard-go program to work in lieu of the kernel module.

Before I continue, if you’re using the default Debian install that came with the device, you should be able to follow this tutorial which uses Cloudflare’s boringtun Rust implementation.  I couldn’t get this tutorial to work so here is an alternative that uses the official Wireguard Go language reference implementation.

Installing the compiler

The Go compiler should be available in all distributions so install it before continuing.  On Manjaro Linux you can do so by typing `sudo pamac install go`.

Cloning the repository

You’ll need to clone the source code from the Wireguard repo: `git clone https://git.zx2c4.com/wireguard-go`.

Building the tool

Once cloning has completed, enter the directory and issue `make`.  After it completes, you should have a ./wireguard-go executable in the same directory.

Launching the tool

Open two terminal windows.  In the first, issue `sudo LOG_LEVEL=debug ./wireguard-go -f wg0`.  This will launch the userspace implementation and create an interface called wg0, which you can see by typing `ip a`.

Configuring and bringing up the Wireguard interface

Bringing up the interface is almost as simple as presented in the docs, but because we’re running Manjaro Linux we’ll need to make sure it works well with NetworkManager.  The first step is to mark the interface, along with any similarly named interfaces, as unmanaged.  Create the following file and restart NetworkManager.

/etc/NetworkManager/conf.d/wireguard-unmanaged.conf

[keyfile]
unmanaged-devices=interface-name:wg*

# systemctl restart NetworkManager

Before continuing, you’ll need a valid /etc/wireguard/wg0.conf that uses `wg` syntax, not wg-quick syntax; check the manpage for wg to confirm.  A minimal sketch follows.  Then, in a new terminal window, issue the commands below, taking into account your configuration.  Note that CLIENT_IP_ADDRESS and PEER_IP_ADDRESS_OR_RANGE refer to addresses in the Wireguard interface’s address space.
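
Here’s a minimal wg-syntax sketch (every key, address, and endpoint below is a placeholder, not a working value):

/etc/wireguard/wg0.conf

[Interface]
PrivateKey = CLIENT_PRIVATE_KEY_BASE64
ListenPort = 51820

[Peer]
PublicKey = SERVER_PUBLIC_KEY_BASE64
AllowedIPs = PEER_IP_ADDRESS_OR_RANGE
Endpoint = WIREGUARD_ENDPOINT:51820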

# ip address add dev wg0 CLIENT_IP_ADDRESS peer PEER_IP_ADDRESS_OR_RANGE
# wg setconf wg0 /etc/wireguard/wg0.conf
# ip link set mtu 1420 up dev wg0
# ip route add PEER_IP_ADDRESS_OR_RANGE dev wg0
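
With hypothetical values (client address 10.0.0.2, peer at 10.0.0.1, peer range 10.0.0.0/24), the sequence would look like:

# ip address add dev wg0 10.0.0.2 peer 10.0.0.1
# wg setconf wg0 /etc/wireguard/wg0.conf
# ip link set mtu 1420 up dev wg0
# ip route add 10.0.0.0/24 dev wg0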

Finally, as per Thaller’s post on the GNOME blogs, if you didn’t issue the last command above you’ll need to let NetworkManager know about the new route.  List your current connections with nmcli conn show and copy the UUID for your current connection into the command below.  Replace GATEWAY and WIREGUARD_ENDPOINT with the actual IP addresses.

nmcli connection modify UUID +ipv4.routes "WIREGUARD_ENDPOINT/32 GATEWAY"
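
For example, with placeholder values (the UUID and addresses here are hypothetical):

nmcli connection modify 7e3c5c0e-6b2a-4b6e-8f1d-2a9c4f1d0b3a +ipv4.routes "203.0.113.10/32 192.168.1.1"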

This should be sufficient to set up the VPN.  You’ll see the handshake initiated and completed in the other terminal window.

Let me know if this worked for you.  DNS resolution is still problematic because NetworkManager doesn’t adjust resolvconf to accommodate the new route.  If you manage to get that working correctly, please let me know on Twitter.

Finalising the return to WordPress from GatsbyJS
20 January 2020

I started publishing a new blog at plkt.io with GatsbyJS back in the second half of 2019. I was rather enamoured by the ability to live completely in the terminal and publish a beautiful website, of course also checked into git. Over time, I found the maintenance burden to be too much due to the plethora of JS packages pulled in by Gatsby. I’d hear of vulnerabilities, and there would be breaking API changes in some of the packages I used.

I committed to migrating back to a simple installation of WordPress in November 2019 and now I’ve pretty much finished the migration.

Note that there are plenty of tools to get WordPress data into Gatsby (notwithstanding the use of Gatsby as a front-end for WordPress) but not that many for the other way around. Thankfully, the new Block Editor in WordPress pretty much enables cut and paste into the editor with images and code portions migrated 1:1. If you find any problems with the migration, please leave a comment and let me know.

Now that the content is over to the new site, I’ll be slowly moving over the custom Gatsby theme I created. Once that’s done, I’ll strip the remaining JavaScript from the site – leaving it as an option for people that want to leave comments.

Happy migrations!

Looking forward to the next 10 years!
31 December 2019

Keywords: new year; 2020; new decade; self-host; apache; ansible

Note: Let’s Encrypt has a rather small rate limit that I transgressed when testing the deployment of my plkt.io w/ GatsbyJS container. This means I won’t be able to get a free HTTPS certificate from them until next week. So for the first week of the New Year, this blog will continue to be hosted on GitHub Pages. I’ll remove this note once the migration is complete!

Wow, what a year it has been. If you’re reading this, it means I’ve successfully completed my first New Year’s Resolution of hosting this website on my own web server. It is now live on a VPS at Hetzner complete with a HTTPS certificate signed by Let’s Encrypt. While this isn’t a huge achievement in itself, I’ve done much more than simply copy the static files to /var/www/html.

Short term plans

The journey to self-hosting this blog took longer than anticipated because I spent a lot of time creating a reproducible setup. This means I can recreate the plkt.io deployment on my local machine for testing with Vagrant and have that same configuration be pushed on “production” aka this web server.

To achieve this, I hand-crafted a series of incremental Dockerfiles that build the container serving this page: starting from Debian, adding in Apache, then configuring certbot and building the website from JS. I learned a lot about setting up infrastructure via code and making single-application containers work well. There’s still quite a bit left to do, but for now I consider the first phase of my 2020 plan complete! A rough sketch of the base layer follows.
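
The base layer might look roughly like this (the image tag and paths are assumptions, not my exact Dockerfile):

FROM debian:buster
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*
COPY public/ /var/www/html/
EXPOSE 80 443
CMD ["apache2ctl", "-D", "FOREGROUND"]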

In Phase II, I’ll be moving from a simple docker run setup to something more glamorous /ahem/ I mean modern. While the site by itself is rather simple, I do plan to expose more services here, the next in line being a self-hosted git repo at git.plkt.io. This will serve as the authoritative git repo behind the site, with a mirror of course available on GitHub. Kubernetes, to replace my manual invocations (and incantations :)), will be brought in incrementally and as needed over the course of a few months as I standardise how I roll out services on plkt.io. At the moment, I don’t plan to have multiple VMs running, so I’ll likely run each Kubernetes node using Linux containers (LXD).

Longer term plans

In Phase III, running concurrently with Phase II, I’ll be migrating from this static website built with GatsbyJS to one served with WordPress. While some might think this an irrational move, I have a lot of trust in WordPress and believe it’s the less maintenance-heavy option. I’ll be migrating the entirety of my blog posts over and will likely break some URLs in the process. While it is considered a faux pas to break links, I consider it a necessary evil as I move this blog onto a platform that I’m sure will still be around in ten years.

If you did bookmark a page and it no longer works, try searching on Google, or via WordPress if this site has already been migrated.

Phase IV is where this site really starts to become notable and useful. I’ll be adding two major updates that add a bit of interactivity to the site.

First, I’ll be publishing a blog roll of my favourite blogs via an RSS feed and also putting the snippets live on my site. This doesn’t necessarily mean I approve of everything written; rather, I see it as a directory of like-minded thinkers so people browsing this site can continue to find good content.

Second, I’m going to spin up some data funnels where I can start recording events that happen and present them on the site. Examples include: (a) Git activity by people I follow; (b) self-signed tarballs of things that I’m using in production so people have multiple sources of trust for packages; (c) and perhaps even some stock market analysis and trends.

Overall, I think the additions will do well to improve the usefulness of my website as a hub reflecting what I’m working on and what I’m capable of. In addition, it’s one small step closer to making content discovery easier in lieu of search engine dominance and apathy. More to come in this space!

Phase V is still under wraps. As my knowledge of moving workloads to the cloud, containerising applications, and building infrastructure with code improves, I can envision myself starting a cloud consulting business for my local region. Nothing is finalised yet, but it’s something to shoot for in 2020.
