Jiff Slater
02 Dec 2020

Dynasty by Wedgwood – glass piece in House of Cards
11 October 2020

A bit of an unusual post for me – I was rather fascinated by the sets in House of Cards and really wanted to find which wine glass the Underwoods used at their dining table.  After a lot of searching, it seems it’s the Dynasty pattern by Wedgwood.  An antique piece that appears to be available on Replacements.com.


General knowledge and concepts for high-level discussions
22 August 2020

I’ve moved further additions to this page to my /knowledge page.

When discussing complicated topics it can be helpful to have a unified pool of knowledge.  Below is a list of things I try to keep in my conceptual model of the world so I can have fruitful discussions with my peers.


  • Probability distributions and cumulative distribution functions


  • Critical exponents near phase transitions: the universal behaviour of physical quantities near continuous phase transitions.
  • Prosaic AI when we reach AGI
    • The idea that we’ll build an AI that doesn’t reveal anything new about humankind or the world.  Similar to the idea of transformative AI.
  • Reinforcement learning
  • Local hidden variable theory
  • Chaos and non-chaos
    • Disorder-free localisation
Pinebook Pro Review
13 February 2020

Short summary: if you’re on the fence about buying the Pinebook Pro as a supplementary laptop for short trips where extreme performance is not necessary, you’d be hard pressed to find a better option for $200.

I’ve been using the Pinebook Pro regularly as my daily driver for the past couple of weeks and wanted to note down some thoughts. But first, specs of the device:

  • [Compute] Rockchip RK3399 SOC with Mali T860 MP4 GPU.
    • The RK3399 contains a dual-core Cortex-A72 and a quad-core Cortex-A53 in a big.LITTLE configuration.
    • Surprisingly this SOC supports hardware virtualisation.
    • It also supports AArch32/AArch64 with backwards compatibility with Armv7.
    • The RK3399 can handle H.264/H.265/VP9 up to 4Kx2K@60fps which is pretty incredible for such a low power chip.
    • Finally, the embedded GPU supports DirectX 11.1 and OpenGL ES 3.1.
  • [Memory] 4GB Low Power DDR4 RAM.
  • [Display] 14.1” 1080p 60Hz IPS display panel
  • [Power] 10,000mAh LiPo battery with USB-C 15W power delivery support and additional power port (5V 3A).
  • [Storage] 64GB eMMC 5.0, bootable microSD card slot.
  • [Connectivity] WiFi 802.11ac and BT 5.0.
  • [Audio] Stereo speakers, headphone jack, and single microphone.
  • [Camera] Single 1080p/2MP front facing camera in display.
  • [Input] Trackpad and ANSI keyboard.
  • [Boot] 128MB SPI boot flash
  • [I/O] 1 USB Type-C port (host controller with DisplayPort 1.2 and Power Delivery); 1 USB 3.0 Type-A port; 1 USB 2.0 Type-A port.


Keyboard

The keyboard is of average quality. I’m using the ANSI version. The keys feel mushy on the way down but have quite a bit of spring back, allowing you to type reasonably fast. In a typing test between this device and my MacBook Pro 2018, I found myself typing about 5 to 10 WPM faster on average on the Pinebook Pro. The position of the trackpad means that sometimes you’ll inadvertently brush it and trigger mouse movement – which sometimes results in your words being scrambled. Turning off the trackpad when an external mouse is connected helps resolve this issue. There’s no backlighting on the keys.

Unlike the MacBook Pro, I didn’t experience any issues with repeating keys unless I had the repeat delay set low and the repeat rate set high.


Trackpad

The trackpad is a bit small given the available space on the frame. Precision is poor unless you’re making long swipes across the trackpad. Small adjustments are difficult to make, so I’d recommend setting the sensitivity low. I expect iterative updates to the trackpad firmware will improve the readings.

Build quality of the case

The PBP comes in a hard metal frame. It’s cool to the touch when the device is off and has a beautiful matte black colour. I have no concerns throwing this device in my backpack or upon a desk as the frame seems very capable of protecting the RockChip innards.

If I had a choice of this frame and a durable plastic one, I’d choose the plastic one for an even lighter laptop.


Performance

The big question about this laptop: the performance. I found that I could comfortably watch 720p videos on YouTube and 480p streams on Twitch with live chat visible. The machine also seemed capable of playing two streams side-by-side with occasional, but manageable, stuttering on each.

By default, Firefox’s performance settings use a content process limit of 8, which I think is a bit high considering there are only two big cores. Switching between 2 and 8 doesn’t seem to change the performance. I use a mixture of uBlock Origin and uMatrix to reduce the number of scripts running in each tab – again, this only seems to affect the initial page load speed; after the webapp is running I didn’t notice a performance difference.

I haven’t tried running any games on this device and don’t plan to – that’s what my Nintendo Switch is for!

Finally, I noticed some high pitched whining when the device was under load. Usually opening a web browser causes this whine. It’s pretty annoying and something that I hope can be fixed in due time.


Audio

The audio quality is passable, with little distortion across the volume range. I would say it’s good enough to understand a movie but not good enough to enjoy one. Stick to headphones!


Recommendation

I’d recommend the Pinebook Pro as a great travel laptop if you primarily work in DevOps or live in the command line. It doesn’t grind to a halt under load – it simply slows down, and I think you’ll get used to the reduced speed rather quickly.

I think this laptop’s ideal purpose is to serve as a realistic representation of what 99% of your users experience when they use your webapp. Keep it on your desk as a second laptop for testing performance improvements you’re making.

As a representation of the state of Linux on ARM, you can’t go wrong with this laptop.  Its battery life is a solid 10 to 12 hours.  Suspend isn’t as power efficient as on the MacBook, so you’ll be shutting down this machine when you’re not using it.

For the hackers and tinkerers inside all of us, you’d be surprised at how easily you can get up and running.  Just download an image, flash it to a USB stick or SD card, insert it into the machine, and boot up.  Run through a 5 minute installation and you’ll have a fresh system very quickly.  However, be warned that if you have a more esoteric setup (and I mean that from a Linux on ARM perspective), such as LUKS on LVM or custom peripherals you need to get working — you’ll be spending a few days wrapping your head around compiling a custom kernel, getting the initial ramdisk built with all the required modules, and dealing with the quirks (e.g. the separate bootloader partition before the EFI partition).

The PBP wiki has a very thorough guide on the internals of the machine, and I think with enough time you can enable any scenario on this device.  The forums are teeming with information, and you’ll be entering a space with plenty of subject-matter experts who can give you tips on using the machine.

Buy this machine.

Finalising the return to WordPress from GatsbyJS
20 January 2020

I started publishing a new blog at plkt.io with GatsbyJS back in the second half of 2019. I was rather enamoured by the ability to live completely in the terminal and publish a beautiful website, of course also checked into git. Over time, I found the maintenance burden to be too much due to the plethora of JS packages pulled in by Gatsby. I’d hear of vulnerabilities, and there would be breaking API changes in some of the packages I used.

I committed to migrating back to a simple installation of WordPress in November 2019 and now I’ve pretty much finished the migration.

Note that there are plenty of tools to get WordPress data into Gatsby (not to mention using Gatsby as a front-end for WordPress) but not many for the other way around. Thankfully, the new Block Editor in WordPress pretty much enables cut and paste into the editor, with images and code portions migrated 1:1. If you find any problems with the migration, please leave a comment and let me know.

Now that the content is over to the new site, I’ll be slowly moving over the custom Gatsby theme I created. Once that’s done, I’ll strip the remaining JavaScript from the site – leaving it as an option for people that want to leave comments.

Happy migrations!

Looking forward to the next 10 years!
31 December 2019

Keywords: new year; 2020; new decade; self-host; apache; ansible

Note: Let’s Encrypt has a rather small rate limit that I transgressed when testing the deployment of my plkt.io w/ GatsbyJS container. This means I won’t be able to get a free HTTPS certificate from them until next week. So for the first week of the New Year, this blog will continue to be hosted on GitHub Pages. I’ll remove this note once the migration is complete!

Wow, what a year it has been. If you’re reading this, it means I’ve successfully completed my first New Year’s Resolution of hosting this website on my own web server. It is now live on a VPS at Hetzner complete with a HTTPS certificate signed by Let’s Encrypt. While this isn’t a huge achievement in itself, I’ve done much more than simply copy the static files to /var/www/html.

Short term plans

The journey to self-hosting this blog took longer than anticipated because I spent a lot of time creating a reproducible setup. This means I can recreate the plkt.io deployment on my local machine for testing with Vagrant and have that same configuration be pushed on “production” aka this web server.
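As a sketch of what such a reproducible setup can look like, here is a minimal Vagrantfile; the box name, forwarded port, and playbook path are hypothetical stand-ins, not the actual plkt.io configuration:

```ruby
# Hypothetical minimal Vagrantfile for testing the deployment locally.
Vagrant.configure("2") do |config|
  # Debian base box to mirror the production VPS.
  config.vm.box = "debian/buster64"
  # Expose the guest web server on the host for local testing.
  config.vm.network "forwarded_port", guest: 80, host: 8080
  # Provision with the same Ansible playbook used in production.
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "ansible/site.yml"
  end
end
```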

To achieve this, I hand-crafted a series of incremental Dockerfiles that build the container serving this page: starting from Debian, adding in Apache, then configuring certbot and building the website from JS. I learned a lot about setting up infrastructure via code and making single-application containers work well. There’s still quite a bit left to do, but for now I consider the first phase of my 2020 plan complete!

In Phase II, I’ll be moving from a simple docker run setup to something more glamorous /ahem/ I mean modern. While the site by itself is rather simple, I do plan to expose more services here, with the next in line being a self-hosted git repo at git.plkt.io. This page will serve as the authoritative git repo behind the site, with a mirror of course available on GitHub. Kubernetes, to replace my manual invocations (and incantations :)), will be brought in incrementally and as needed over the course of a few months as I standardise how I roll out services on plkt.io. At the moment, I don’t plan to have multiple VMs running, so I’ll likely run each Kubernetes node in a Linux container (LXD).

Longer term plans

In Phase III, running concurrently with Phase II, I’ll be migrating from this static website built with GatsbyJS to one served with WordPress. While some might think this an irrational move, I have a lot of trust in WordPress and believe it’s the less maintenance-heavy option. I’ll be migrating the entirety of my blog posts over and likely breaking some URLs in the process. While it is considered a faux pas to break links, I consider it a necessary evil as I ship this blog upon a platform that I’m sure will be around for the next ten years.

If you did bookmark a page and it no longer works, try the search functionality on Google here or via WordPress if this site has already been migrated.

Phase IV is where this site really starts to become notable and useful. I’ll be adding two major updates that add a bit of interactivity to the site.

First, I’ll be publishing a blog roll of my favourite blogs via an RSS feed and also putting the snippets live on my site. This doesn’t necessarily mean I approve of everything written; rather, I see it as a directory of like-minded thinkers so people browsing this site can continue to find good content.

Second, I’m going to spin up some data funnels where I can start recording events that happen and present them on the site. Examples include: (a) Git activity by people I follow; (b) self-signed tarballs of things that I’m using in production so people have multiple sources of trust for packages; (c) and perhaps even some stock market analysis and trends.

Overall, I think the additions will do well to improve the usefulness of my website as a hub reflecting what I’m working on and what I’m capable of. In addition, it’s one small step closer to making content discovery easier in the face of search engine dominance and apathy. More to come in this space!

Phase V is still under wraps. As my knowledge around moving workloads to the cloud, containerising applications, and building infrastructure with code improves, I can envision myself starting a cloud consulting business for my local region. Nothing is finalised yet, but it’s something to shoot for in 2020.

Some Docker and Vagrant tips
11 December 2019

While I’m slowly migrating my blog over to WordPress, I’ve been experimenting with build environments on my Mac and Linux server using a combination of Docker, Vagrant, and Ansible. Here are some of the tips I’ve compiled to improve my effectiveness using these tools.


Docker

Building and running the current context:

docker run --rm `docker build -q .`

Cleaning up all stopped containers and unused images:

docker system prune -a

At some point you’ll find that you want to combine multiple builds into a single file. Luckily, Docker now supports multi-stage builds; see here.
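As a sketch of the multi-stage idea, here is a hypothetical two-stage Dockerfile; the image names and paths are illustrative, not the ones used for this site:

```dockerfile
# Stage 1: build the static site in a full-featured image.
FROM node:lts AS build
WORKDIR /site
COPY . .
RUN npm ci && npm run build

# Stage 2: copy only the built artefacts into a slim web server image.
# Everything from the build stage that isn't copied here is discarded.
FROM httpd:alpine
COPY --from=build /site/public /usr/local/apache2/htdocs/
```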


Vagrant

Vagrant can work together with Ansible if you add the playbook information to your Vagrantfile, like so:

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "../ansible/site.yml"
  end

The login information will automatically be copied over so Ansible can run on the virtual machine. You can also view the config by running vagrant ssh-config in the directory of the virtual machine.


Ansible

When using Ansible, it’s best to split up the tasks into roles so you can reuse them later. A typical directory layout looks like:

./site.yml [playbook]: starts the entire environment.
./hosts [YAML or INI file]: contains the list of hosts.  I prefer to use the YAML format here.
./roles [directory]
./roles/common [directory]: common tasks to run on each server
./roles/common/handlers [directory]: handlers specific to common but can be used anywhere
./roles/common/tasks [directory]: list of tasks to be executed by this role
./roles/common/files [directory]: any files that need to be deployed
./roles/common/templates [directory]: a collection of templates that can be deployed with the common role
./roles/common/vars [directory]: variables (that aren't in the defaults) for the common role
./roles/common/defaults [directory]: default values for the variables for the common role deployment
./roles/common/meta [directory]: configuration files related to managing ansible
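To make the layout concrete, here is a hypothetical roles/common/tasks/main.yml; the package, template, and handler names are illustrative:

```yaml
# roles/common/tasks/main.yml — tasks executed by the common role.
- name: Install Apache
  apt:
    name: apache2
    state: present

- name: Deploy the site configuration
  template:
    src: site.conf.j2        # looked up in roles/common/templates/
    dest: /etc/apache2/sites-available/site.conf
  notify: restart apache     # handler defined in roles/common/handlers/main.yml
```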

I would spend a bit of time understanding static and dynamic importing (more information here) as I think the added flexibility has the potential to cause difficult-to-debug errors in deployments.

Roles are then included just like any other task.


- hosts: webservers
  tasks:
    - include_role:
        name: foo_app_instance

Returning to WordPress
30 November 2019

I’ve been using Gatsby for the better part of this year to host my blog. I get a nice static blog that can become interactive relatively easily. Unfortunately, I think the JS ecosystem is moving a bit too quickly for me to maintain with all my other commitments. So I’m going to swap that maintenance headache for another 🙂 and move to WordPress.

Over the coming months, I’m going to be moving my blog to WordPress. As time permits, I’ll be detailing my progress here. My initial conception is a simple stack with a cheap CDN in front of my WordPress instance in a reproducible setup so I can work offline when needed.

I expect to learn a lot about the broader container ecosystem and will be paying close attention to CNCF publications and videos. Expect the page size to go down and the performance to stay around the same. I’ll also be working on my PHP knowledge to add back in interactivity to this site.

On the bright side, I’ll be able to start monitoring blog post performance again without javascript calls each time the page loads which is a boon for privacy and performance.


Two main goals: (1) create a setup I can quickly replicate locally and on different cloud providers; and (2) learn more than I can handle about the Kubernetes “revolution.”

Initially, this will only host my WordPress blog (the future of this blog).

A late goal is some type of notification system for when new builds are pushed.

Orchestration software

I’ve landed on the following stack:

  • Kubernetes, to control spinning up and down of containers. We’ll force k8s to use a single machine.
  • Vagrant, to build and manage the virtual machine environment. It also abstracts the available hypervisors on different operating systems.
  • Ansible, to prep the container which will head into the container registry.
  • minikube for testing locally
  • VMWare Fusion for running virtual machines.
Understanding Mac specifics
29 November 2019

I use a Mac as my main dev box for better or for worse. I also use a mini-PC for deployments I want accessible to all my devices. To orchestrate all these devices, I use a combination of Vagrant, for virtual machine provisioning, Ansible, for writing playbooks to get the ahem required ahem Docker installed, and Docker to handle my containers. I am also building a local container registry which houses custom builds of various programs. Of course, all this is run locally by minikube, the Kubernetes local cluster.

It’s quite a complicated setup. I previously used LXD to manage my containers, but its interaction with the system via snapd was difficult to debug. Therefore, I decided to fully embrace, as Elon puts it, the massive spire in the topological map of technological advancements (from this interview).

Before installing all these programs, I took some time to look at how services and networking are managed in macOS, as I just knew I would run into some configuration problems down the road.


Networking

Most of macOS networking is sufficiently handled in the System Preferences Networking pane. For a quick terminal lookup of the network interfaces, you can use the netstat and ifconfig tools. On modern Linux distros, these tools are supplanted by ss and ip respectively.

System services

launchd manages the daemons, applications, processes, and scripts in macOS. It’s not as powerful as a modern installation of systemd but it just works, so to speak. Agents and daemons are usually stored in the /Library set of directories: ~/Library for local agents, /Library for global agents, and /System/Library for system agents.

These directories contain XML files that specify what they want launchd to do for a particular service. Go ahead and run launchctl list to see the list of loaded services. Like systemctl, you can run launchctl enable <service> to enable a service and launchctl disable <service> to disable it.

When I’m troubleshooting problems it can be informative to list all the non-Apple agents with launchctl list | grep -v 'com.apple'.

launchctl list | grep -v 'com.apple'
PID     Status  Label
355     0       com.wacom.DataStoreMgr
1774    0       com.microsoft.edgemac.Canary.18720
780     0       com.openssh.ssh-agent
-       0       com.wireguard.macos.login-item-helper
-       0       com.microsoft.update.agent
352     0       com.wacom.wacomtablet
397     0       com.wireguard.macos.18228
-       0       com.valvesoftware.steamclean
598     0       com.vmware.fusion.15568
372     0       com.manytricks.Moom.13396
3066    0       com.microsoft.VSCodeInsiders.18956
781     0       org.mozilla.firefox.15760
-       0       com.oracle.java.Java-Updater
1248    0       org.mozilla.thunderbird.15376
3087    0       com.microsoft.VSCodeInsiders.ShipIt
1320    0       com.microsoft.onenote.mac.18712

Let’s say I want to see the status of VSCode — I can use the gui service specifier.

launchctl print gui/501/com.microsoft.VSCodeInsiders.ShipIt
com.microsoft.VSCodeInsiders.ShipIt = {
	active count = 1
	path = (submitted by Electron.3066)
	state = running

	program = /Users/user1/Applications/Visual Studio Code - Insiders.app/Contents/Frameworks/Squirrel.framework/Resources/ShipIt
	arguments = {
		/Users/user1/Applications/Visual Studio Code - Insiders.app/Contents/Frameworks/Squirrel.framework/Resources/ShipIt

	stdout path = /Users/user1/Library/Caches/com.microsoft.VSCodeInsiders.ShipIt/ShipIt_stdout.log
	stderr path = /Users/user1/Library/Caches/com.microsoft.VSCodeInsiders.ShipIt/ShipIt_stderr.log
	inherited environment = {
		Apple_PubSub_Socket_Render => /private/tmp/com.apple.launchd.ZqVikU4Yim/Render
		SSH_AUTH_SOCK => /private/tmp/com.apple.launchd.lx5o1GBYry/Listeners

	default environment = {
		PATH => /usr/bin:/bin:/usr/sbin:/sbin

	environment = {
		XPC_SERVICE_NAME => com.microsoft.VSCodeInsiders.ShipIt

	domain = com.apple.xpc.launchd.user.domain.501.100009.Aqua
	asid = 100009
	minimum runtime = 2
	exit timeout = 5
	nice = -1
	runs = 1
	successive crashes = 0
	pid = 3087
	immediate reason = semaphore
	forks = 0
	execs = 1
	initialized = 1
	trampolined = 1
	started suspended = 0
	proxy started suspended = 0
	last exit code = (never exited)

	semaphores = {
		successful exit => 0

	event triggers = {

	endpoints = {
		"com.microsoft.VSCodeInsiders.ShipIt" = {
			port = 0xb6d9f
			active = 0
			managed = 1
			reset = 0
			hide = 0

	dynamic endpoints = {

	pid-local endpoints = {

	instance-specific endpoints = {

	event channels = {

	sockets = {

	spawn type = daemon
	spawn role = (null)
	jetsam priority = 3
	jetsam memory limit (active) = (unlimited)
	jetsam memory limit (inactive) = (unlimited)
	jetsamproperties category = daemon
	submitted job. ignore execute allowed
	jetsam thread limit = 32
	cpumon = default

	properties = {
		partial import = 0
		launchd bundle = 0
		xpc bundle = 0
		keepalive = 0
		runatload = 0
		low priority i/o = 0
		low priority background i/o = 0
		legacy timer behavior = 0
		exception handler = 0
		multiple instances = 0
		supports transactions = 0
		supports pressured exit = 0
		supports idle hysteresis = 0
		enter kdp before kill = 0
		wait for debugger = 0
		app = 0
		system app = 0
		creates session = 0
		inetd-compatible = 0
		inetd listener = 0
		abandon process group = 0
		one-shot = 0
		event monitor = 0
		penalty box = 0
		pended non-demand spawn = 0
		role account = 0
		launch only once = 0
		system support = 0
		app-like = 0
		inferred program = 1
		joins gui session = 0
		joins host session = 0
		parameterized sandbox = 0
		resolve program = 0
		abandon coalition = 0
		high bits aslr = 0
		extension = 0
		nano allocator = 0
		no initgroups = 0
		start on fs mount = 0
		endpoints initialized = 1
		disallow all lookups = 0
		system service = 0

However, what about the sneaky Java auto-updater? It’s not running now, but surely it’s scheduled to phone home at some point in the next few minutes ;). How do I check its run frequency? We can look at the property list file to find more information. Querying launchctl seems to change between releases, so what works on Mojave might not work on Catalina.

Java’s property list file is installed globally in /Library/LaunchAgents.

 $ cat /Library/LaunchAgents/com.oracle.java.Java-Updater.plist 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
        <string>/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Resources/Java Updater.app/Contents/MacOS/Java Updater</string>
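The listing above is truncated. A complete LaunchAgent plist that runs weekly would typically pair that program string with a StartCalendarInterval key, along these lines — a hypothetical reconstruction, not the actual Oracle file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>com.oracle.java.Java-Updater</string>
	<key>Program</key>
	<string>/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Resources/Java Updater.app/Contents/MacOS/Java Updater</string>
	<key>StartCalendarInterval</key>
	<dict>
		<!-- Run once a week: Sundays (Weekday 0) at 01:00. -->
		<key>Weekday</key>
		<integer>0</integer>
		<key>Hour</key>
		<integer>1</integer>
	</dict>
</dict>
</plist>
```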

It’s scheduled to run weekly. We can disable the auto-updater with launchctl remove <LaunchDaemon> — in this case launchctl remove com.oracle.java.Java-Updater. Now the java auto-updater no longer appears in the list of LaunchAgents.

Working without Internet
26 November 2019

This is a work in progress article. You can contribute to its completion via GitHub (see commit ID in the footer.)

I’ve been travelling around the UK over the past week without Internet, and I had a lot of time to reflect upon this working style: GTD with no Internet access. If you have a computer with a reasonable amount of space, or some sort of mini computer like the Raspberry Pi connected to a large external hard drive, you can follow this guide to set up a travel-optimised working environment.


  • Download what you need ahead of time without prejudice.
  • Build a portable and reproducible development environment. Even better if pushing a Git repository once you have Internet access is all you need to do to push to production.
  • Empower yourself to do offline analysis of logs and data with Jupyter and local databases.



To get started, you’ll need a computer that can run multiple virtualised environments or containers in parallel. Any modern CPU that can run at around 1.8 to 2.2 GHz under sustained load, has at least four cores, and supports virtualisation extensions such as VT-x should be more than capable. In addition, you’ll need about 12 to 16 GB of RAM. Having more might be helpful, but we’ll focus on keeping that battery running cool and for a long time.

Another alternative is a Raspberry Pi Zero in this sweet case. Make sure you use a well-endowed power supply and an SD card with built-in wear levelling like the Sandisk Extreme.


You should have around 512GB or more of storage you can take with you. For most people this will be an external hard drive but some laptops are now shipping with multi-terabyte configurations. These storage destinations will house your data dumps, virtual machines, and repository mirrors.

Operating System

You can use Windows with WSL2 + Docker/LXC; Mac OS with a Raspberry Pi or Docker (shudders); or Linux with Linux containers or Docker. On Mac OS, I’d recommend you shell out for a decent virtualisation UX like VMWare Fusion. I don’t have much experience with VirtualBox on Mac.


You’ll be downloading a lot of things, so spend some time thinking about your data management. You can set up a separate partition or volume to store this data and use a filesystem that optimises for fast reads.


After all that work, you may want to curl up with a book. Whether you’re using an e-book reader or printed copies, you can find a lot of items to read on Project Gutenberg.


Most of these services are served over HTTP so you’ll need to have a web server handy. Anything basic will do – be it Apache, Nginx, or even node.js. For more advanced installations like MediaWiki, you’ll need PHP support so for convenience and ease of maintenance it’ll be best to use Apache + PHP.


Offline maps are useful for trip planning and routing. For most cases, it’ll be enough to just download some form of Open Street Maps on your phone. In this case, we’re going to set up OSM for trip planning on your device.


Quick setup

To quickly get set up without worrying about database dumps, just grab the latest Kiwix distribution for your operating system. You get the added benefit of enabling different Wikimedia projects such as Wiktionary.

Database dumps

Begin by downloading Wikipedia. It’s easier than you might think. You can grab a multistream archive from here. Pick the language that you think has the articles you’ll read most frequently. Multistream enables you to access specific articles without having to decompress the entire archive. If space isn’t an issue, then grab the normal archive; I imagine the multistream archive requires more CPU to process.

If you work in a rapidly emerging field like IT, then I’d recommend you download a recent database dump. If you prefer well-written and vetted articles, then you can get the download released by the Wikimedia Foundation every six months.

This page contains BitTorrent links for English Wikipedia. If you can’t or prefer not to use torrents, then you can look at the list of mirrors here and grab a compressed XML dump.

For multistream, make sure you download both the xml.bz2 archive and the index file. Also grab the SHA1 checksums so you can check for errors. Finally, to avoid getting banned, only download one file over a single connection at a time; otherwise you may be forced to use an out-of-date mirror.
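Checking the archive against the published checksums is a one-liner. A sketch with a stand-in file (the real dump filenames will differ):

```shell
# Stand-in for the downloaded archive (the real file is tens of GB).
printf 'example dump data\n' > enwiki-multistream.xml.bz2
# The SHA1SUMS file from the dump server pairs each filename with its hash.
sha1sum enwiki-multistream.xml.bz2 > SHA1SUMS
# -c re-hashes each listed file and reports OK or FAILED.
sha1sum -c SHA1SUMS
```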

Media database dump

In addition to the database dump, you’ll need the

MediaWiki front-end

To view the articles you’ll need a front-end – MediaWiki is the natural choice and you’ll need a LAMP stack to serve it. We’ll set that up later.

Debian mirror

If you use Linux, you’ll probably need a package mirror available offline should you decide to install some software. Downloading the entire Debian amd64 package mirror takes about 402GB for the compiled packages and 91GB for the source. You’ll get more than a 4x speed-up using pre-compiled packages, so it’s worth downloading the entire repository over the course of a few days. To augment your installation, it likely makes sense to grab the Debian wiki as well — something we’ll address in the Searx section.
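One common tool for this is apt-mirror. A sketch of its /etc/apt/mirror.list, assuming a Debian stable target and the default storage path (adjust the release name and components to taste):

```
set base_path /var/spool/apt-mirror
set nthreads  2

deb http://deb.debian.org/debian stable main contrib non-free
deb-src http://deb.debian.org/debian stable main

clean http://deb.debian.org/debian
```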


If you know you’re going to be gone for a while, it makes sense to set up some background tasks that search for relevant keywords and fetch pages from sites such as Stack Overflow for later perusal.

Jupyter notebooks

Think about your web habits. How often do you turn to the web to complete a quick calculation? Maybe you type your query directly into Google, or maybe you head over to Wolfram Alpha to solve some equations. Jupyter can help here. It is an open source web application that lets you create interactive notebooks of code and equations. Think lab reports, but interactive.

I’ve found it invaluable for generating charts, writing perfect-looking equations, and just getting work done.

In this section, we’ll set up a base Jupyter instance. If you search the web you’ll find many others.



I’m sure you’ve gone on GitHub and clicked “download raw file”. While this is completely fine for one-off, never-use-again repositories, if you find yourself repeatedly visiting a repository, go ahead and wean yourself off the web interface and clone the repository locally.
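A small self-contained demonstration of the clone-once, update-later workflow — the local `upstream` directory stands in for a remote GitHub URL:

```shell
# Create a stand-in "remote" repository with one commit.
git init -q upstream
git -C upstream -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
# Clone it once for offline use...
git clone -q upstream local-copy
# ...and refresh it with a single command when you're back online.
git -C local-copy pull -q --ff-only
git -C local-copy log --oneline
```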


How do you take notes? I personally have used a mixture of OneNote, TiddlyWiki, and other solutions in the past. I’m spoiled by syncing and have found it a boon to my productivity. However, modern versions of some of these tools, like OneNote, work best with a constant web connection. A good alternative is to set up an offline MediaWiki instance.

Switching to zsh from bash
22 November 2019

In preparation for my migration to Catalina this weekend, I put together a quick list of changes to keep in mind when using Zsh over Bash. Keep in mind that a modern build of Bash has most of these features, but I’ll always jump at the opportunity to learn something new.

Here’s my humble original bashrc file (GitHub).

# Read our aliases file.
source $HOME/.aliases

# Read our local bashrc.
source $HOME/.bashrc_local

# Set environmental variables.

# Turn off system bell if we're not in an SSH session.
if [ -z ${SSH_TTY+x} ]; then
  if [ -z ${DISPLAY+x} ]; then
    if command -v setterm > /dev/null; then
      setterm --blength 0
    fi
  else
    xset -b
  fi
fi

# Create the prompt.
PS1="${EMOJI} \[${BLUE}\]\$\[${OFF}\] "

This file also turns off the system bell – very useful for new machines or virtual machines. .bashrc_local isn’t checked into version control. It contains the emoji for the specific machine. In the case of Apple it’s just: ‘EMOJI=’.

Based on stepping through a lot of zsh configs (see footer), here are my changes.

Migrating existing bash config to zsh

To prepare myself for the migration I spent an hour or so reading man zshmisc. First I copied my bashrc into a new file named ~/.zshrc. Then, because the colour escape sequences weren’t working, I updated the BLUE and OFF lines to be compatible.

To test colours, I made heavy use of print -P '<escape sequence>' and wrapping the colours in %{...%}.

  • BLUE="%F{blue}"
  • OFF="%f"
  • PROMPT="\$"
  • PS1="${EMOJI} %{${BLUE}%}${PROMPT}${OFF} "

As my original config was very simple, this was enough.

Taking advantage of ZSH functionality

Now zsh has a lot of built-in functionality that I was reluctant to take advantage of for compatibility reasons — however, I spend a lot of time in the terminal, so I found these few changes a good compromise between a supercharged terminal and a minimalist configuration.

Start by reading man zshbuiltins.


Configure options by running setopt OPTIONNAME, and unset them with setopt noOPTIONNAME. See http://zsh.sourceforge.net/Doc/Release/Options.html#Options


AUTO_CD

Automatically move into a directory if only its name is typed at the prompt.

$ pwd
/home/local
$ bin
$ pwd
/home/local/bin


APPEND_HISTORY

All zsh sessions append their command history to a single file instead of replacing it. Note that the history file isn’t updated until that zsh instance exits.


EXTENDED_HISTORY

Also save each command’s timestamp and duration in the history file. Extremely useful for timing commands.


HIST_NO_STORE

Don’t store invocations of the builtin history in the history file.
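In .zshrc form, the history options discussed above amount to (option names per man zshoptions):

```shell
# History-related options: append, timestamp, and skip `history` itself.
setopt APPEND_HISTORY     # all sessions append to one history file on exit
setopt EXTENDED_HISTORY   # record each command's timestamp and duration
setopt HIST_NO_STORE      # don't record invocations of the history builtin
```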


Completion system

I opted to start using the autocompletion system for smarter tab completion.

$ ls -<TAB>              
-1  -- single column output
-A  -- list all except . and ..
-B  -- print octal escapes for control characters
-C  -- list entries in columns sorted vertically
[.. snip ..]


zsh includes a lot of built-in modules to further change the behaviour of the shell. I didn’t opt to enable them in my config.

Final configuration

Here’s my final zsh config with all the above taken into account. I imagine 3 months down the road this file will have even more changes. You can always check the latest in xocite/dotrc/.zshrc @ GitHub.

# Read our aliases file.
source $HOME/.aliases

# Read our local zshrc.
source $HOME/.zshrc_local

# Set environmental variables.

# Turn off system bell if we're not in an SSH session.
if [ -z ${SSH_TTY+x} ]; then
  if [ -z ${DISPLAY+x} ]; then
    if command -v setterm > /dev/null; then
      setterm --blength 0
    fi
  else
    xset -b
  fi
fi

# Set options
setopt AUTO_CD

# Load functions
autoload -U compinit && compinit

# Create the prompt.
PS1="${EMOJI} %{${BLUE}%}${PROMPT}${OFF} "

Further reading

  • zsh FAQ link
  • Peter Karbowski’s zsh config link
  • zsh manual link
Built with WordPress and Vim
© 2008 to 2020 Antony Jepson