A short guide to setting up Matrix on Linux
22 November 2020

I’ve been really fascinated by Matrix lately – it’s a set of APIs that makes it easy to run decentralised chat rooms, and even to bridge across disparate services like WhatsApp, Discord, Telegram, and Mattermost.  I wanted to set it up and see how it performs on a VM running Docker.  Here’s how I got it running in a few simple steps.

Provisioning the VM

First, I provisioned the virtual machine that would run the Synapse container.

Download Debian, verify signature, and set up a KVM in headless mode with port forwarding.

cd ~
mkdir iso hdd
wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-10.6.0-amd64-netinst.iso -O iso/debian-10.6.0-amd64-netinst.iso
wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA1SUMS -O iso/SHA1SUMS
wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA1SUMS.sign -O iso/SHA1SUMS.sign
gpg iso/SHA1SUMS.sign
gpg --keyserver keyring.debian.org --recv-keys DF9B9C49EAA9298432589D76DA87E80D6294BE9B
gpg --verify iso/SHA1SUMS.sign iso/SHA1SUMS
(cd iso && sha1sum -c --ignore-missing SHA1SUMS)
qemu-img create -f qcow2 -o nocow=on hdd/debian-docker-root.qcow2 64G
qemu-system-x86_64 -m 8G -enable-kvm -cpu host -sandbox on -smp 2 -name 'docker-host' -hda $HOME/hdd/debian-docker-root.qcow2 -cdrom $HOME/iso/debian-10.6.0-amd64-netinst.iso -netdev user,id=net0 -device e1000,netdev=net0

Run through the installation process.  I set 8GB of RAM so the installer automatically creates a reasonable swap partition.  I usually prefer to go through the process manually and then snapshot the rootfs.  I didn’t see any advantage in using OVMF UEFI firmware for this demo.

I set my hostname to ‘dkr’ and the domain name to redshift7.  I disabled the root account by not entering a password.  For partitioning, I used the entire disk with all files in a single partition with an ext4 filesystem, plus a 4G swap partition.

Next, I created a snapshot of the root filesystem so I could have a base image for the virtual machine.

qemu-img create -f qcow2 -o nocow=on -b debian-docker-root.qcow2 -F qcow2 hdd/debian-docker-root-s1.qcow2

I launched the VM again, this time with the new derivative image.  I also exposed a port for SSH to work on the external interface so I could install docker using an Ansible playbook.

qemu-system-x86_64 -m 8G -enable-kvm -cpu host -sandbox on -smp 2 -name 'docker-host' -hda $HOME/hdd/debian-docker-root-s1.qcow2 -netdev user,id=net0,hostfwd=tcp::20022-:22 -device e1000,netdev=net0

Next, I installed Docker using the lines below.

sudo apt update
sudo apt install git apt-transport-https ca-certificates wget software-properties-common gnupg2 curl python-apt
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian buster stable"
sudo apt update
sudo apt install docker-ce
sudo gpasswd -a local docker # replace 'local' with your unprivileged username
sudo systemctl start docker
sudo systemctl enable docker
sudo systemctl start containerd
sudo systemctl enable containerd
sudo wget https://github.com/docker/compose/releases/download/1.27.4/docker-compose-Linux-x86_64 -O /usr/local/bin/docker-compose
sha256sum /usr/local/bin/docker-compose # check for 04216d65ce0cd3c27223eab035abfeb20a8bef20259398e3b9d9aa8de633286d
sudo chmod a+rx /usr/local/bin/docker-compose
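
A couple of quick checks confirm everything is wired up (note the docker group change only takes effect after logging back in):

sudo docker run --rm hello-world
docker-compose version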

Next I followed the excellent Synapse guide for getting the basic server configured.  I chose population3 as the server name and SQLite for storage.  I also changed the default system version of Python to Python 3.

sudo apt install virtualenv build-essential python3-dev libffi-dev python3-pip python3-setuptools sqlite3 libssl-dev libjpeg-dev libxslt1-dev
mkdir ~/synapse
virtualenv -p python3 ~/synapse/env
source ~/synapse/env/bin/activate
pip install --upgrade pip
pip install --upgrade setuptools
pip install matrix-synapse # this command shows some crazy graphs
cd ~/synapse
python -m synapse.app.homeserver --server-name localhost --config-path homeserver.yaml --generate-config --report-stats=no
synctl start

Next, I edited the produced homeserver.yaml file to make it a bit easier to debug the initial setup.

server_name: "localhost:8008"

As I’m running this in a virtual machine, I opened the QEMU monitor and added an additional port forward. You can check existing port forwards in the monitor with "info network" and "info usernet", and add new ones with "hostfwd_add tcp::8008-:8008".
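
For example, a monitor session might look like this (a sketch; on the graphical console the monitor is usually reachable with Ctrl-Alt-2):

(qemu) info usernet
(qemu) hostfwd_add tcp::8008-:8008
(qemu) info usernet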

Next, you should be able to access your instance via http://localhost:8008.
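
To quickly verify that Synapse is up, you can query the unauthenticated versions endpoint with something like:

curl http://localhost:8008/_matrix/client/versions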

Pending sections

The pervasiveness of microabrasions
16 November 2020

**DRAFT – Last Updated 2020-11-16**

The human body has multiple mechanisms to stay adaptable in the ever-changing present.  At the cellular level, there’s an evolving protection against the constant marching battery of micro-organisms.  This protection keeps you alive.  At the macro level, there’s your own personal drive to survive.  These two halves come together to form a cohesive whole – you.

However, these adaptations only work to keep you alive, in an acceptable homeostasis for which the bar is rather low.  Fed regularly?  Check.  Have a shelter?  Check.  Got a job?  Check.  Feel fulfilled?  …  There’s a class of constant onslaughts that we’re not well-adapted for – or you could say we’ve adapted to them inadequately – maladaptations.

These minor antagonists can’t usually be used to propel you into a better place; rather, they’re constantly sanding away your drive to become fulfilled.  Daily unpleasantries you dismiss or accept; an over-extended caffeine habit; a day-ending nightcap; a small lack of regular activity; less sleep than usual; or a chair that isn’t a good fit.  The truth is, most of our day-to-day activity is structured around the same interactions with the world around us in order to nurse a homeostatic environment.

Due to our maladaptations, we either deny (recognise and reject) or accept (merge with our definition of the world) the things that slowly erode the internal concept of fulfillment – day after day, the concept of a satisfactory life reduces to the idea of a simple, monotonous, manual life of labour where the rewards are self-evident (no introspection or reflection required) and real.

While the idea of disconnecting from it all may sound pleasant, it would only be a travesty of the modern life that we’re capable of.  We’re Thinkers, constantly creating abstractions out of language out of language out of language.  Modern computer programming is all about collaboratively editing a shared mental model or map of how a computer works at a specific level of abstraction.  There’s no emotion here even though it’s methodical and expressive.  But at the end of the day, every computer engineer knows that a compiler (also an abstraction) will take the inputs and optimise them away for better execution in the more accurate computer model.

For modern language – the maxims of “if it bleeds, it reads” and “emotion deserves promotion” seem to be interwoven into all common watering holes.  This modern way of writing seems to cut through carefully built abstractions which help to promote a shared understanding of the world and antagonise people into a reactive state.  There’s no value here: no deeper meaning, no verbalisation that can define (label) the angst.

Connecting a physical NIC to a Qemu Linux Guest
12 November 2020

Enabling IOMMU to directly connect a physical card to a guest used to be a painful and error-prone task. I remember in the past having to play with Access Control Services to get anything to work. Now most Intel and AMD devices support it – at least partially.

I recently started building my virtual homelab and needed to have a network accelerated guest to handle my VM traffic. This guest would be running on the host along with the other VMs but would have direct access to the second network card in my PC. This way – virtual traffic is routed out of one access card and the host’s traffic is routed out of a different card. Physical separation – at least in theory. In another post I’ll discuss the VLAN I configured at the physical router’s end to enable physical separation between the two network cards.

Before continuing, I recommend you read the errata for your CPU (see mine: Intel Xeon E3-1200) to see if there are any known (and most likely unfixable) issues that will impact the use of physical devices inside virtual guests. I checked my IOMMU status using lspci -vv and looked for the ACSCap line under my PCI Express Root.
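
A rough way to surface those capability lines (a sketch – the grep just pulls the ACS sections out of the full listing):

sudo lspci -vv | grep -iA2 'Access Control Services'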

Using this handy guide on InstallGentoo I ran the following to list my IOMMU groups which I would then be isolating and handing off to the VM.

$ for iommu_group in $(find /sys/kernel/iommu_groups/ -maxdepth 1 -mindepth 1 -type d); do
      echo "IOMMU group $(basename "$iommu_group")"
      for device in $(ls -1 "$iommu_group"/devices/); do
          echo -n $'\t'; lspci -nns "$device"
      done
  done

Here my network card was located under IOMMU group 18 (and my graphics card under IOMMU group 1 but that’s for another day :)).

There were a few options for the next step – either write a small systemd service that runs before the network is up, or add an entry to my modprobe config in /etc to bind the vfio-pci stub driver to the device.  I opted for the latter, using the device ID from the above scriptlet.

IOMMU group 18
05:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [XXXX:YYYY] (rev ZZ)
# echo "options vfio-pci ids=XXXX:YYYY" | tee -a /etc/modprobe.d/vfio.conf

You can either modprobe vfio-pci to have this happen (you should see the kernel driver in use be vfio-pci when using lspci -v) or unbind using /sys.

First list the device directory in /sys/bus/pci/devices/<domain:bus:device:fun>, checking for the driver directory.

# ls /sys/bus/pci/devices/0000\:05\:00.0/ 
[...] driver [...]

Then issue an unbind command to the driver.

# echo 0000:05:00.0 > /sys/bus/pci/devices/0000\:05\:00.0/driver/unbind

Now your Ethernet controller is ready to be handed to the VM.  Simply add -device vfio-pci,host=05:00.0 to your QEMU command line.
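
Putting it together, the guest invocation might look something like this – a sketch where the memory, CPU, and disk arguments are placeholders for your own setup:

qemu-system-x86_64 -enable-kvm -m 4G -cpu host -smp 2 \
    -hda router.qcow2 \
    -device vfio-pci,host=05:00.0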


Dynasty by Wedgwood – glass piece in House of Cards
11 October 2020

A bit of an unusual post for me – I was rather fascinated by the sets in House of Cards and really wanted to find which wine glass the Underwoods used at their dining table.  After a lot of searching, it seems it’s the Dynasty pattern by Wedgwood – an antique piece that appears to be available on Replacements.com.

 

Installing Gentoo in 2020
27 September 2020

I had my once a year conundrum recently – install Gentoo or Ubuntu. This year I chose Gentoo and was immediately awash with nostalgia from seeing this screen.

The last time I ran Gentoo seriously was back in 2008 and now to see this in all its glory really brings me back.

In memory of those good times, I wanted to document the installation process on a modern 2020 PC.  Overall, I would say not much has changed except the availability of systemd as an init option.

Initial setup

I began by downloading the Gentoo minimal install CD. For AMD64 it’s available here.  As with all Gentoo installations, it’s wise to have the handbook nearby for support; the AMD64 edition is available here.

I burned the CD image to a USB stick using Etcher by Balena.  It’s not my favourite tool but it’s reliable despite the ads.  dd is another alternative.

Next, I rebooted into the USB stick via the UEFI boot menu and chose the default Grub menu entry.  If you have a wired network configured then in most cases the network will already be working, configured via DHCP.

Configuring disks

The hardest part about configuring a fresh installation of Linux is deciding on the storage configuration.  The usual deciding factors are tolerance for risk and the available storage devices.  In my case, I have an SSD solely dedicated to Linux and a pair of HDDs for longer-term storage.

livecd ~ # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 390.1M 1 loop /mnt/livecd
sda 8:0 0 1.8T 0 disk 
└─md156 9:126 0 3.8T 0 raid0 
├─md156p1 259:0 0 128M 0 part 
└─md156p2 259:1 0 3.8T 0 part 
sdb 8:16 0 1.8T 0 disk 
└─md156 9:126 0 3.8T 0 raid0 
├─md156p1 259:0 0 128M 0 part 
└─md156p2 259:1 0 3.8T 0 part 
sdc 8:32 0 460.8G 0 disk 
├─sdc1 8:33 0 512M 0 part 
├─sdc2 8:34 0 244M 0 part 
└─sdc3 8:35 0 460G 0 part 
sde 8:64 1 14.6G 0 disk 
├─sde1 8:65 1 427M 0 part /mnt/cdrom
└─sde2 8:66 1 6.4M 0 part 
sr0 11:0 1 1024M 0 rom

Here you can see I have two 2TB drives already in RAID from an existing Linux installation, the USB stick mounted at /mnt/cdrom and the rootfs for the livecd mounted at /mnt/livecd.

For the filesystems, I usually select btrfs for the SSD and btrfs in RAID1 for the HDDs, which gives me the best balance of recoverability, reliability, and performance for my use cases.  However, because I’m curious to try some of the latest technology, this time round I’m going btrfs across all my drives.

Here’s the storage configuration:

  • SSD: btrfs on LUKS with GPT
    1. EFI partition, 512M
    2. boot partition, 512M
    3. swap partition, 32G
    4. root partition, remaining space
  • HDD: btrfs on LUKS (hdd1 & hdd2) with GPT
    • EFI partition, 512M, empty
    • root partition, remaining disk

Configuring the SSD

First, before partitioning, I usually erase the SSD.  I have more than normal faith in ATA secure erase and NVMe secure erase and commonly use them before partitioning drives.  hdparm -I /dev/<drive> quickly tells me what my options are.   If I’m told the drive is frozen, a quick hotplug (while the computer is on) will unfreeze it so the secure erase command can be sent.  I also had to make sure that the BIOS had the drives connected using AHCI.

Next I set up the partition table.  MBR with DOS disklabel is deprecated (and has been for a while) so I’ll use GPT.  I want the disk to be bootable in any case so I usually use a /boot partition that’s unencrypted, along with EFI and rootfs partitions.  This setup can be changed later to a completely encrypted drive with boot USB.
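
The partitions can be created interactively with fdisk, or with something like the following sgdisk sketch (type codes: ef00 = EFI system, 8300 = Linux filesystem, 8200 = swap; adjust sizes and device to your own disk):

sgdisk -n 1:0:+512M -t 1:ef00 -c 1:efi /dev/sdc
sgdisk -n 2:0:+244M -t 2:8300 -c 2:boot /dev/sdc
sgdisk -n 3:0:+450G -t 3:8300 -c 3:root /dev/sdc
sgdisk -n 4:0:0 -t 4:8200 -c 4:swap /dev/sdc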

After the partition table was created, it looked something like this:

livecd ~ # fdisk -l /dev/sdc
Disk /dev/sdc: WW GiB, XX bytes, YY sectors
Disk model: WDC WXXXXXXXXX
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: ZZ-ZZ-ZZ-ZZ-ZZ

Device Start End Sectors Size Type
/dev/sdc1 2048 1050623 1048576 512M EFI System
/dev/sdc2 1050624 1550335 499712 244M Linux filesystem
/dev/sdc3 X Y Z 450G Linux filesystem
/dev/sdc4 X Y Z 32G Linux swap

mkfs.fat -F32 /dev/sdc1 (for the EFI system partition)

mkfs.ext2 /dev/sdc2 (for the boot partition)

Next I’ll turn the last partition into a LUKS container by using the cryptsetup command.  Read more about LUKS here.

First, the performance of the disk is evaluated using cryptsetup benchmark.  From the results and my risk profile, I can choose the configuration that makes the most sense for me.

cryptsetup --key-size 512 --hash whirlpool --iter-time 5000 --use-random --cipher aes-xts-plain64 --pbkdf-memory=4194304 --pbkdf=argon2id luksFormat /dev/sdc3
cryptsetup luksOpen /dev/sdc3 xtsroot

Next, I’ll set up btrfs on the encrypted root filesystem.

mkfs.btrfs -L broot /dev/mapper/xtsroot

I created a btrfs block device with the label broot.  Next, I mounted it with light compression turned on (I held off on enabling disk quotas for now).  I made sure to have the rootfs and storage directories beneath the top-level directory for ease of snapshotting.

mkdir /toplevel
mount -o compress=lzo /dev/mapper/xtsroot /toplevel
# btrfs quota enable /toplevel # we will not be enabling this yet.
btrfs subvolume create /toplevel/rootfs
btrfs subvolume create /toplevel/rootfs/gentoo
btrfs subvolume create /toplevel/savestate
btrfs subvolume create /toplevel/storage
btrfs subvolume create /toplevel/storage/home
btrfs subvolume create /toplevel/storage/home/user
mount -o subvol=/rootfs/gentoo,compress=lzo /dev/mapper/xtsroot /mnt/gentoo

Finally, I turned on swap using the partition I created earlier.

mkswap /dev/sdc4
swapon /dev/sdc4

Installing Gentoo

Before continuing I made sure the date was correct with date.  In most cases, it will already be set correctly.  Next, I downloaded the multilib tarball from Gentoo’s website.

wget https://bouncer.gentoo.org/fetch/root/all/releases/amd64/autobuilds/20200920T214503Z/stage3-amd64-systemd-20200920T214503Z.tar.xz
wget https://bouncer.gentoo.org/fetch/root/all/releases/amd64/autobuilds/20200920T214503Z/stage3-amd64-systemd-20200920T214503Z.tar.xz.CONTENTS.gz
wget https://bouncer.gentoo.org/fetch/root/all/releases/amd64/autobuilds/20200920T214503Z/stage3-amd64-systemd-20200920T214503Z.tar.xz.DIGESTS
wget https://bouncer.gentoo.org/fetch/root/all/releases/amd64/autobuilds/20200920T214503Z/stage3-amd64-systemd-20200920T214503Z.tar.xz.DIGESTS.asc
tar xpvf stage3-amd64-systemd-20200920T214503Z.tar.xz --xattrs-include='*.*' --numeric-owner
nano -w /mnt/gentoo/etc/portage/make.conf

For my compilation options, I chose the following: MAKEOPTS="-j2", EMERGE_DEFAULT_OPTS="--jobs=3", and COMMON_FLAGS="-Ofast -flto -pipe -march=native -funroll-loops".
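
Expressed as a make.conf excerpt, that looks roughly like this (the CFLAGS/CXXFLAGS assignments are the stock stage3 defaults, shown for context):

# /mnt/gentoo/etc/portage/make.conf (excerpt)
COMMON_FLAGS="-Ofast -flto -pipe -march=native -funroll-loops"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
MAKEOPTS="-j2"
EMERGE_DEFAULT_OPTS="--jobs=3"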

Next, I continued following the guide by selecting mirrors and the ebuild repository.  Finally, I chrooted into the new directory as per the Gentoo handbook.

For my profile, I opted to use default/linux/amd64/17.1/systemd. Next I set my default Python interpreter to python 3.7 by adding the lines below to /etc/portage/package.use

*/* PYTHON_TARGETS: python3_7
*/* PYTHON_SINGLE_TARGET: -* python3_7

For systemd, I added some specific USE flags (+cryptsetup +homed +pkcs11 +policykit) and for Python I added sqlite as a dependency.

After this I issued "emerge --ask --verbose --update --deep --newuse @world".  I commonly check the default USE flags for the profile before going further – they’re stored in /var/db/repos/gentoo/profiles/* and there’s more information here.

Furthermore, when I need to check the current Portage configuration it’s just an "emerge --info" away.  The Gentoo wiki page also serves as a helpful reference.

For the kernel, I followed the handbook and made sure I had the following functionality:

  • CONFIG_X86_CHECK_BIOS_CORRUPTION=y
  • CONFIG_WATCHDOG=y

I also added the following to my bootflags kernel command line:

  • resume=/dev/root

Next, for the initramfs I opted to use genkernel and I used the following configuration.

genkernel --lvm --mdadm --bcache --btrfs --e2fsprogs --dmraid --bootloader=grub2 --luks --busybox --install --kernel-config=/usr/src/linux/.config initramfs

For configuring systemd and grub2 you’ll need to add the following to /etc/default/grub

GRUB_CMDLINE_LINUX="init=/lib/systemd/systemd crypt_root=<device> dobtrfs"

Remember to specify the UUID of the real_root in GRUB_DEVICE and to uncomment GRUB_TERMINAL=console if you want to see errors during bootup.
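
A sketch of the relevant /etc/default/grub entries (the UUIDs are placeholders for your own devices):

GRUB_DEVICE=UUID=<uuid-of-the-root-filesystem>
GRUB_CMDLINE_LINUX="init=/lib/systemd/systemd crypt_root=UUID=<uuid-of-/dev/sdc3> dobtrfs"
GRUB_TERMINAL=console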

I usually compile two kernels – one custom-configured by me and the other by genkernel in case I get stuck.  Also, in this day and age there’s no reason to configure any framebuffers, so I only set the EFI framebuffer and simple framebuffer kernel options.

genkernel --kernel-append-localversion=-custom all
grub-mkconfig -o /boot/grub/grub.cfg

Once finished, I rebooted, selected the new entry in Grub, entered my password for my encrypted root drive, and immediately started emerging Firefox.

Welcome home!

General knowledge and concepts for high-level discussions
22 August 2020

I’ve moved further additions to this page to my /knowledge page.

When discussing complicated topics it can be helpful to have a unified pool of knowledge.  Below is a list of things I try to keep in my conceptual model of the world so I can have fruitful discussions with my peers.

Mathematics

  • Probability distributions and cumulative distribution functions

Sciences

  • Critical exponents near phase transitions: the universal behaviour of physical quantities near continuous phase transitions.
  • Prosaic AI when we reach AGI
    • The idea that we’ll build an AI that doesn’t reveal anything new about humankind or the world.  Similar to the idea of transformative AI.
  • Reinforcement learning
  • Local hidden variable theory
  • Chaos and non-chaos
    • Disorder-free localisation

Notes from setting up a disconnected Debian Testing server

I recently set up a new at-home server that isn’t connected to the Internet. It’s only available on the local network and has no connection to the outside world. It’s part of my longer-term initiative to have a disconnected household.

Here are some notes I took while setting this up.

Why?

I have servers scattered across various providers and felt a general anxiety about security. While I do feel confident using online storage solutions like AWS S3 Glacier class + GPG encryption, I feel that instance-based compute services would likely cause me a headache in the future. Storing everything locally would be faster and reduce costs as well.

Updates

After a fresh installation of Debian Testing, I briefly plugged in the Ethernet cable, set my sources.list to point to ftp.uk.debian.org, and downloaded the latest packages and security updates including vim, sudo, and screen.
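
The entry itself is a single line along these lines (components to taste):

deb http://ftp.uk.debian.org/debian testing main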

For future updates, I download the weekly testing DVD images from Debian’s servers and transfer them over via SSH. I’m still working on optimising this.
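
One approach I’m experimenting with is loop-mounting the transferred image and registering it with apt-cdrom – a sketch, with a placeholder filename:

sudo mount -o loop debian-testing-amd64-DVD-1.iso /media/cdrom
sudo apt-cdrom -m -d /media/cdrom add
sudo apt update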

Each weekly testing DVD is archived in storage.

Screen suspend

As this is a laptop, the screen didn’t blank automatically when it wasn’t in use. I appended consoleblank=60 to my GRUB_CMDLINE_LINUX_DEFAULT entry.
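
For reference, the resulting entry looks something like this (the quiet flag is just the Debian default), followed by regenerating the grub configuration:

GRUB_CMDLINE_LINUX_DEFAULT="quiet consoleblank=60"
sudo update-grub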

Containers

I successfully migrated my containers from Docker to qemu- and systemd-container-based deployments. I’ll detail more about this in the future. For each deployment I check that it deploys without problems on another physical machine.

  • GitLab
  • OwnCloud
  • MediaWiki

Rotating expired Nitrokey subkeys used for password store
5 July 2020

I’m currently managing my saved passwords with a mixture of pass and Nitrokey. One of my sub-keys expired so I couldn’t update my passwords. Here’s how I generated new keys (rotated them) with a new expiration date.

You’ll need access to your master key. Most tutorials online have you generate the key locally, nerf it, and upload it to the Nitrokey. In this case, you’ll need to find the original primary signing key before moving forward.

Once you have it in hand, extract the contents to a temporary directory and let’s begin. Don’t forget to set the directory permissions appropriately (chmod 700 ./). We’ll be using gpgh to refer to gpg pointed at this new directory.

$ alias gpgh="gpg --homedir $(pwd)"

$ gpgh --import user@domain.tld.gpg-private-keys
< enter password >

Trust the keys.
$ gpgh --edit-key user@domain.tld
gpg> trust

< select 5 for maximum trust >
< select y to confirm >
< exit to confirm >

Modify expiry date of primary key.
$ gpgh --expert --edit-key user@domain.tld
gpg> expire

< select and confirm a new timeframe >

Generate new subkeys.
gpg> list
sec brainpoolP384r1/DEADBEEFDEADBEE1
created: 2019-XX-XX expires: 2021-XX-XX usage: SC
trust: ultimate validity: ultimate
ssb brainpoolP384r1/DEADBEEFDEADBEEA
created: 2019-XX-XX expired: 2020-XX-XX usage: E
ssb brainpoolP384r1/DEADBEEFDEADBEEB
created: 2019-XX-XX expired: 2020-XX-XX usage: S
ssb brainpoolP384r1/DEADBEEFDEADBEEC
created: 2019-XX-XX expired: 2020-XX-XX usage: A
[ultimate] (1). User Name

Generate new subkeys
gpg> addkey
< select 12 to replace subkey BEEA >
< select 7 for brainpool p-384 >
< select 6m for six months >
< select y then y to confirm >
gpg> addkey
< select 10 to replace subkey BEEB >
< select 7 for brainpool p-384 >
< select 6m for six months >
< select y then y to confirm >
gpg> addkey
< select 11 to replace subkey BEEC >
< select A to toggle on the authenticate capability >
< select S to toggle off the sign capability >
< select Q to finish >
< select 7 for brainpool p-384 >
< select 6m for six months >
< select y then y to confirm >

Remove the expired subkeys
gpg> key 1
gpg> key 2
gpg> key 3
gpg> delkey

Export private and public keys to prepare for backup.
$ gpgh --armor --export-secret-keys user@domain.tld > user@domain.tld-private-keys-2020-07-05
$ gpgh --armor --export user@domain.tld > user@domain.tld-public-keys-2020-07-05

Generate a new revocation certificate
$ gpgh --gen-revoke user@domain.tld > user@domain.tld.gpg-revocation-certificate-2020-07-05

Encrypt the private keys, public keys, and revocation certificate in a symmetrically encrypted tarball and send to offsite.
$ tar cf ./user@domain.tld-keys-2020-07-05.tar user@domain.tld-*-2020-07-05
$ gpgh --symmetric --cipher-algo aes256 user@domain.tld-keys-2020-07-05.tar
$ rm user@domain.tld-keys-2020-07-05.tar
$ sendoffsite user@domain.tld-keys-2020-07-05.tar.gpg
$ sendoffsite user@domain.tld-public-keys-2020-07-05

Import new subkeys into Nitrokey, replacing existing subkeys
< plug in Nitrokey>

$ gpgh --expert --edit-key user@domain.tld
gpg> key 1
gpg> keytocard

< select 2 for encryption key >
< enter master key password >
< enter admin pin for nitrokey >
gpg> key 1
gpg> key 2
gpg> keytocard

< select 1 for signature key >
< enter master key password >
gpg> key 2
gpg> key 3
gpg> keytocard

< select 3 for authentication key >
< enter master key password >
gpg> save

(Note down the encryption key from the “list” output so you can re-initialise pass later.)

Kill the running GPG agents that might interfere with password caching.
$ gpgconf --kill gpg-agent
$ GNUPGHOME=$(pwd) gpgconf --kill gpg-agent

Confirm those sneaky buggers are gone.
$ ps aux | grep gpg

Migrate your pass store to the new set of keys. You’ll need to do this with both the old and new set of keys accessible so we’ll run this from our temporary directory with the expired sub-keys.

First cache the password of the private key.
$ echo "test message string" | gpgh --encrypt --armor --recipient user@domain.tld -o encrypted.txt
$ gpgh --decrypt --armor encrypted.txt

Confirm you can decrypt an existing pass key.
$ gpgh --decrypt ~/.password-store/some/key/user@domain.tld.gpg

Backup pass directory
$ cp -R ~/.password-store ~/.password-store_bak

Next migrate the passwords, using the encrypted subkey we listed above.
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass init DEADBEEFDEADBEEFD

Create and delete a fake password to confirm it’s working.
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass generate fake/password
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass edit fake/password
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass rm fake/password

Finally, update your local GPG configuration by importing the new public keys. Notice we’re using the normal gpg. You should see 3 new subkeys imported.
$ gpg --import user@domain.tld-public-keys-2020-07-05

Now you can remove the temporary directory you made after confirming you’ve backed up the encrypted backup and also published the public keys somewhere accessible.
$ rm -r $(pwd)
$ cd

More articles I’ve written on this topic:
* Using GPG to master your identity (Part 1)
* Configuring and integrating Nitrokey into your workflow

Alarms as timers
13 May 2020

07:59.

It was an unusually overcast morning and the birds seemed to be reluctantly chirping, upset at the sun apparently missing its promised sunrise time.  While I was running, my alarm went off.  I realised that I was way ahead of my usual self – I had prepped breakfast, almost finished my run, and solved a couple of long-standing issues at work; all while watching the clouds mope across the sky.

This made me realise – having the alarm wake you up at a certain time is all wrong.  Rather, it should be viewed as a timer by which you should be ready to attack your day.  Let’s be honest with ourselves: who jolts out of bed with their hair already combed, coffee half drunk, and laptop furiously churning through the results streaming in from that advanced SQL query they wrote 10 minutes ago?

Next time, I’ll try to have finished my run and already showered and dressed before the alarm goes off.

 

My principles of product management
4 March 2020

[Draft post]

Principles of product management

I’ve been in product management for a while and over the course of the past 8 years I’ve learned a lot about the difficulties that aren’t apparent in the popular posts promoting product management as a career choice.

Namely,
– that the ability to connect with other humans is under-rated…
– that the development team’s well-being is just as important as your own well-being…
– and that most customers give you bad and non-actionable feedback…

So from these realisations and my experience, here are my 5 principles of product management.

1) Focus on the customer problem, not the solution.
Too often I see my peers getting enthralled by some solution that seems to address every customer need. My first question to these budding PMs is: have you defined the customer problem? The second is: how did you define it? More often than not, the answers are: Yes, and Gut Feel.

While I do champion gut feel for making some difficult decisions, I don’t think it should be relied upon for critical ones. Rather, focus on customer data and customer feedback when defining the problem, and only then move on to the solution.

2) A/B testing is validation not truth.
Often I find that junior Product Managers want to focus on testing, testing, testing. From experience, the more important portion is crafting a meaningful hypothesis to test rather than getting a metric to be greater than another.

More often than not, if an experiment wins by a wide margin – something is not right. Give credit to the organisation you joined – most of the low-hanging fruit has already been eaten. If you have a huge win, it’s highly probable that it’s a fluke: you measured wrong, your hypothesis doesn’t actually address the customer problem, or it was a seasonal occurrence. I encourage you to take a deep look at the data across multiple facets and see if that huge win (which, by the way, gets you plenty of kudos) is actually going to improve your product.

3) Failure is not optional.
As a product manager, you should be constantly questioning your own opinions and deductions about how the world works. Your experience, however great (or limited), doesn’t necessarily match how your customers behave. The only way to craft a winning path is to avoid the obstacles along the way – and that means making mistakes, fast, and changing direction quickly.

4) You are the blocker
I’ve found that often the PM is the blocker to progress. Why? Because they don’t communicate their vision well enough. Tell the devs exactly what you want – and what you don’t. Work with them toward a great compromise. Coordinate with design on your thoughts – and don’t rely on them to eventually iterate into what you want. Check your pulse with user research and quickly incorporate the feedback. Communicate your plans to other product managers so they can adjust as required.

5) Micro pivots are as effective as major movement
Often you’ll instrument something that shows one of your earlier convictions was wrong, or even dead wrong. It’s OK to make a small change that fixes that without changing your main product. Not everything needs to be a huge change with tens of other teams involved. Make a judgement call, commit to the changes you think are necessary to improve the product, and be prepared to turn off that feature flag if it doesn’t work – or keep it if it does.

These are just a few of the principles that guide me when developing a product. Disagree? Agree? Please reach out to me on Twitter @plktio or by email (see my contact info) and let me know what you think. I’ll publish the best comments on my blog next week!

Built with Wordpress and Vim
© 2008 to 2020 Antony Jepson