Rotating expired Nitrokey subkeys used for password store
5 July 2020

I’m currently managing my saved passwords with a mixture of pass and a Nitrokey. One of my subkeys expired, so I couldn’t update my passwords. Here’s how I generated new keys (rotated them) with a new expiration date.

You’ll need access to your master key. Most tutorials online have you generate the key locally, nerf it, and upload only the subkeys into the Nitrokey, so in this case you’ll need to find the original primary signing key before moving forward.

Once you have it in hand, extract the contents to a temporary directory and let’s begin. Don’t forget to set the directory permissions appropriately (chmod 700 ./). We’ll use a gpgh alias to point gpg at this temporary directory.

$ alias gpgh="gpg --homedir $(pwd)"

$ gpgh --import user@domain.tld.gpg-private-keys
< enter password >
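Optionally, confirm the import worked before continuing.
$ gpgh --list-secret-keys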

Trust the keys.
$ gpgh --edit-key user@domain.tld
gpg> trust

< select 5 for maximum trust >
< select y to confirm >
< exit to confirm >

Modify expiry date of primary key.
$ gpgh --expert --edit-key user@domain.tld
gpg> expire

< select and confirm a new timeframe >

List the current keys to see which subkeys have expired.
gpg> list
sec  brainpoolP384r1/DEADBEEFDEADBEE1
     created: 2019-XX-XX  expires: 2021-XX-XX  usage: SC
     trust: ultimate      validity: ultimate
ssb  brainpoolP384r1/DEADBEEFDEADBEEA
     created: 2019-XX-XX  expired: 2020-XX-XX  usage: E
ssb  brainpoolP384r1/DEADBEEFDEADBEEB
     created: 2019-XX-XX  expired: 2020-XX-XX  usage: S
ssb  brainpoolP384r1/DEADBEEFDEADBEEC
     created: 2019-XX-XX  expired: 2020-XX-XX  usage: A
[ultimate] (1). User Name

Generate new subkeys. (In the --expert addkey menu, 12 is ECC encrypt only, 10 is ECC sign only, and 11 is ECC with capabilities you choose yourself.)
gpg> addkey
< select 12 to replace encryption subkey BEEA >
< select 7 for brainpool p-384 >
< select 6m for six months >
< select y then y to confirm >
gpg> addkey
< select 10 to replace signing subkey BEEB >
< select 7 for brainpool p-384 >
< select 6m for six months >
< select y then y to confirm >
gpg> addkey
< select 11 to replace authentication subkey BEEC >
< select S to toggle off the sign capability >
< select A to toggle on the authenticate capability >
< select Q to finish >
< select 7 for brainpool p-384 >
< select 6m for six months >
< select y then y to confirm >
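Before deleting anything, gpg> list should now show the three expired subkeys followed by the three new ones.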

Remove the expired subkeys. (key N toggles selection of the Nth subkey, so the three commands below select all three expired subkeys before delkey removes them.)
gpg> key 1
gpg> key 2
gpg> key 3
gpg> delkey

Export private and public keys to prepare for backup.
$ gpgh --armor --export-secret-keys user@domain.tld > user@domain.tld-private-keys-2020-07-05
$ gpgh --armor --export user@domain.tld > user@domain.tld-public-keys-2020-07-05

Generate a new revocation certificate.
$ gpgh --gen-revoke user@domain.tld > user@domain.tld.gpg-revocation-certificate-2020-07-05

Encrypt the private keys, public keys, and revocation certificate into a symmetrically encrypted tarball and send it offsite.
$ tar cf ./user@domain.tld-keys-2020-07-05.tar user@domain.tld*-2020-07-05
$ gpgh --symmetric --cipher-algo aes256 user@domain.tld-keys-2020-07-05.tar
$ rm user@domain.tld-keys-2020-07-05.tar
$ sendoffsite user@domain.tld-keys-2020-07-05.tar.gpg
$ sendoffsite user@domain.tld-public-keys-2020-07-05

Import the new subkeys into the Nitrokey, replacing the existing ones. (Again, key N toggles selection, which is why each subkey is deselected before the next is selected.)
< plug in Nitrokey >

$ gpgh --expert --edit-key user@domain.tld
gpg> key 1
gpg> keytocard

< select 2 for encryption key >
< enter master key password >
< enter admin pin for nitrokey >
gpg> key 1
gpg> key 2
gpg> keytocard

< select 1 for signature key >
< enter master key password >
gpg> key 2
gpg> key 3
gpg> keytocard

< select 3 for authentication key >
< enter master key password >
gpg> save
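You can confirm the subkeys now live on the card: --card-status shows them, and in key listings they appear as ssb> (the > marks an on-card stub).
$ gpgh --card-status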

(Note down the encryption key from the “list” output so you can re-initialise pass later.)
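If you didn’t catch it, the long key IDs are visible at any time; the encryption subkey is the one marked usage: E.
$ gpgh --list-keys --keyid-format long user@domain.tld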

Kill the running GPG agents that might interfere with password caching.
$ gpgconf --kill gpg-agent
$ GNUPGHOME=$(pwd) gpgconf --kill gpg-agent

Confirm those sneaky buggers are gone.
$ ps aux | grep gpg

Migrate your pass store to the new set of keys. You’ll need to do this with both the old and new set of keys accessible so we’ll run this from our temporary directory with the expired sub-keys.

First, cache the passphrase of the private key.
$ echo "test message string" | gpgh --encrypt --armor --recipient user@domain.tld -o encrypted.txt
$ gpgh --decrypt --armor encrypted.txt

Confirm you can decrypt an existing pass key.
$ gpgh --decrypt ~/.password-store/some/key/user@domain.tld.gpg

Backup pass directory
$ cp -R ~/.password-store ~/.password-store_bak

Next, migrate the passwords, using the ID of the new encryption subkey you noted above.
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass init DEADBEEFDEADBEED

Create and delete a fake password to confirm it’s working.
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass generate fake/password
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass edit fake/password
$ PASSWORD_STORE_GPG_OPTS="--homedir $(pwd)" pass rm fake/password

Finally, update your local GPG configuration by importing the new public keys. Notice we’re using the normal gpg. You should see 3 new subkeys imported.
$ gpg --import user@domain.tld-public-keys-2020-07-05

Now you can remove the temporary directory you made after confirming you’ve backed up the encrypted backup and also published the public keys somewhere accessible.
$ rm -r $(pwd)
$ cd

More articles I’ve written on this topic:
* Using GPG to master your identity (Part 1)
* Configuring and integrating Nitrokey into your workflow

Wireguard + other VPNs managed by NetworkManager
13 February 2020

I’ve been using a mixture of Wireguard and OpenVPN devices in NetworkManager recently.  As mentioned in my prior post, I mark the Wireguard interface as unmanaged and bring up the interface manually with a shell script.  Most *.ovpn files have redirect-gateway def1, which sends all traffic, except the local LAN traffic, over the VPN.  This means that separate LANs you’ve created outside of the VPN will be inaccessible.

There are a couple of ways to resolve this depending on how you have the VPN connection configured.  In my case, I have a *.ovpn file that I import into NetworkManager and the routes are included in the file.  It’s common for OpenVPN files to contain a “redirect-gateway def1” clause, which causes all network traffic originating from the client to pass through the OpenVPN server.  (Side note: def1 uses 0.0.0.0/1 and 128.0.0.0/1, which together cover the whole address space and take precedence over the default 0.0.0.0/0 route without replacing it.)

To resolve this, I added a new line into my *.ovpn file to reference the existing Wireguard LAN I created.

route WIREGUARD_SUBNET SUBNET_MASK net_gateway

You can also resolve this by adding a new “via” route to the kernel routing table.
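For example, with placeholders in the same convention as above (run as root; your subnet and gateway will differ):

# ip route add WIREGUARD_SUBNET/PREFIX via LAN_GATEWAY_IP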

Note — if you followed my guide exactly in the prior post then this route will already exist and no change will be needed.

plkt.io
7 June 2019

My plkt.io website is live and built with GatsbyJS and AWS Amplify.  I’m really excited about the possibilities here.  Head on over to read about my plans: plkt.io.

Using Amplify to launch your first Gatsby app
29 May 2019

I’m building the future of this website on plkt.io. I’m using React, Gatsby, and Amplify to build a single page application that’ll be performant, accessible, and modular.
This tutorial will show you how I got started from only a web development background of HTML/CSS/Javascript.

Resources and definitions

React: JavaScript library for building user interfaces using declarative, reusable components. Created by Facebook.
Gatsby: JavaScript framework for building static web apps. Created by thousands of community members.
Amplify: PaaS that coordinates between backend and frontend across various platforms. Created by Amazon.

Installing software

Get started by installing Node.js followed by Gatsby and Amplify. I’m using Gentoo as my local dev environment so I’ll use Portage to install Node.js, which bundles the npm package manager. If you don’t have git installed you will need that too.
# emerge --ask --verbose net-libs/nodejs dev-vcs/git
# npm install --global gatsby-cli @aws-amplify/cli

Create sample app

Once installed, we’ll use Gatsby to serve a basic page and then enable remote building and deployment using Amplify.
~ $ gatsby new hello-world
~ $ cd hello-world
~/hello-world $ gatsby develop

Now we have a new hello world single page application viewable at localhost:8000. Open the provided address and make sure you can access the web page. If you’re using Visual Studio Code remote development, then forward the port to your local machine.
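If you’re working on a remote machine over plain SSH instead, a local port forward achieves the same thing (devhost is a placeholder for your dev machine):

$ ssh -L 8000:localhost:8000 devhost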

Deploy to Amazon S3 with Amplify

Next we can configure Amplify as a publishing path. You can use the defaults for most options. Some of the configuration will require you to open the AWS Web Console to create IAM users.

~/hello-world $ amplify configure
~/hello-world $ amplify init

Now we can publish the sample app to an S3 bucket. Run the following, select “DEV (S3 only with HTTP)”, and then publish.
~/hello-world $ amplify add hosting
~/hello-world $ amplify publish

After about 2 – 3 minutes you should see a link to the S3 bucket with your web app deployed.

And that’s it: you’ve now used the modern JavaScript, APIs, and Markup (JAM) stack to deploy a web app. I highly recommend you install VS Code and walk yourself through the tutorials available on the Gatsby and Amplify homepages.
Gatsby tutorial: https://www.gatsbyjs.org/tutorial/part-one/
Amplify tutorial: https://aws-amplify.github.io/docs/js/react

Immediate broken pipe when connecting via SSH on Gentoo under VMware Fusion

Recently, when cloning repositories from GitHub, I was facing immediate broken pipe issues.

bnt@gentoo ~ $ ssh git@github.com
packet_write_wait: Connection to 140.82.118.4 port 22: Broken pipe

The fix is to change the IP Quality of Service setting to “throughput”.

# ~/.ssh/config
Host *
  IPQoS=throughput
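You can also test the setting on a single connection before editing the config:

$ ssh -o IPQoS=throughput git@github.com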

This is a known issue and is being tracked by a bug on the open-vm-tools GitHub repository [link]. I’m running VMware Fusion 11.0.3 and open-vm-tools 10.3.10 on Gentoo with kernel 4.19.27 and ssh version 7.9_p1-r4.

Debugging ebuild failure on Gentoo for lxml
22 April 2019

While setting up a fresh installation of Gentoo to deploy to the cloud for remote dev work, I kept running into compilation errors with lxml.

The referenced snippet of code shown was the standard compilation line: "${@}" || die "${die_args[@]}", which means run the command with its associated arguments, or quit and show the die arguments.

I went into the build directory to debug further.

# cd /var/tmp/portage/dev-python/lxml-4.3.3/lxml-4.3.3-python2_7/work/lxml-4.3.3-python2_7
# i686-pc-linux-gnu-gcc -O2 -march=native -pipe -fno-strict-aliasing -fPIC -DCYTHON_CLINE_IN_TRACEBACK=0 -I/usr/include/libxml2 -Isrc -Isrc/lxml/includes -I/usr/include/python2.7 -c src/lxml/etree.c -o /var/tmp/portage/dev-python/lxml-4.3.3/work/lxml-4.3.3-python2_7/build/temp.linux-x86_64-2.7/src/lxml/etree.o -w

Which compiled without problems.

I emerged screen and then ran emerge --resume and managed to capture the error.

> Unknown pseudo-op .str

I found a similar thread on the Gentoo forums and created a bigger swapfile as I was running out of memory.
# fallocate /swapfile -l 1GiB
# chmod 600 !$
# mkswap !$
# swapon !$
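Confirm the new swap space is active.
# swapon --show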

This solved the compilation issue.

Git up and going
17 April 2019

Here’s a quick primer I wrote for myself to get git configured on a new computer. All of the options alter your global settings.

Set yourself as the global author

$ git config --global user.name "Antony Jepson"
$ git config --global user.email email@domain.tld

Set your default editor

$ git config --global core.editor vim

Set some abbreviations

$ git config --global alias.st status
$ git config --global alias.ci commit
$ git config --global alias.cim "commit -m"
$ git config --global alias.br branch
$ git config --global alias.pu push
$ git config --global alias.pl pull

Moving a user cron job to a systemd timer
23 March 2019

I have been using cron for well over a decade to schedule tasks to run when I’m not logged in. However, maintaining logs for these jobs has been difficult and, after using systemd for a while, I think I should switch to using a timer.

First, take a look at your current cron jobs.

$ crontab -l
0 * * * * offlineimap

This syncs my email every hour on the hour. For an equivalent systemd implementation we need to write a timer and a service. The timer will call the service at the scheduled time.

# $HOME/.config/systemd/user/offlineimap.service
[Unit]
Description=Local offlineimap service

[Service]
ExecStart=/usr/bin/offlineimap

[Install]
WantedBy=default.target

# $HOME/.config/systemd/user/offlineimap.service.d/01-env.conf
# There is no shell expansion here.
# Don't surround the values with quotes.

[Service]
Environment=XDG_CONFIG_HOME=/home/local/.host/config
Environment=XDG_DATA_HOME=/home/local/.host/data
Environment=XDG_RUNTIME_DIR=/home/local/.host/runtime
Environment=XDG_CACHE_HOME=/home/local/.host/cache

# $HOME/.config/systemd/user/offlineimap.timer
[Unit]
Description=Sync mail using IMAP every hour

[Timer]
OnCalendar=hourly

[Install]
WantedBy=timers.target

Now we can load these new units with “systemctl --user daemon-reload” and enable and start the timer with “systemctl --user enable --now offlineimap.timer”.
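For reference, the full sequence, ending with a check that the timer is scheduled:

$ systemctl --user daemon-reload
$ systemctl --user enable --now offlineimap.timer
$ systemctl --user list-timers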

Further reading: UNIX and Linux System Administration Handbook (5th Edition) [Amazon]

Selectively applying changes with Git Stash
21 January 2019

So you just spent 2 hours fixing a bug and made a bunch of other changes along the way. You took notes on your change and have a good mental model of what you did but you’re too lazy to walk through the changes and commit them individually.

This has happened to me a few times and resulted in some awry-looking commits with dangling changes.

Turns out git has your back.

First, create a temporary branch that you’ll use to continue the tree of commits.

$ git branch staging-temp

Next, interactively apply the changes

$ git add -i

You can read the documentation for more tips on interactive mode: https://git-scm.com/book/en/v2/Git-Tools-Interactive-Staging.

Sometimes, I prefer to do this the other way around. I instead begin by stashing all the irrelevant changes and then iteratively merging them back into HEAD.

$ git stash save --patch
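After committing the focused change, bring the stashed changes back; pop applies and drops the stash entry (use apply instead to keep it around).

$ git stash pop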

Don’t forget to merge the temporary branch back into main.

Quickly setting up a local Tiny Tiny RSS instance with Docker
3 January 2019

I’ve been manufacturing scenarios to test Docker lately and one which came to mind is setting up a local RSS reader instance. It took me a while to parse all the necessary config but the result is very satisfying. You’ll need Docker and Docker Compose to run the below.

The below creates two Docker containers: one for the database, and one for Apache + PHP upon which Tiny Tiny RSS runs.

This also assumes that you already have something running on ports :80 and :443, so we’ll forward to a different port.

Create the volume for the Postgres container.

$ mkdir -p ~/docker/rss
$ cd ~/docker/rss
$ mkdir database_vol

Create the Dockerfile for the web frontend.

# Dockerfile-web
FROM php:7.2-apache
WORKDIR /var/www/html
RUN apt-get update && apt-get install -y libpq-dev git libpng-dev libfreetype6-dev libjpeg62-turbo-dev libxml2-dev
RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include --with-jpeg-dir=/usr/include
RUN docker-php-ext-configure xml --with-libxml-dir=/usr/include
RUN docker-php-ext-install pdo_pgsql mbstring json gd xml opcache pgsql intl
RUN docker-php-ext-enable opcache
RUN git clone https://tt-rss.org/git/tt-rss.git ./
RUN chown -R www-data:www-data /var/www/html
RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"

Now create the docker compose file to bundle the containers together.

# docker-compose.yml
version: "2"
services:
  database:
    image: postgres
    volumes:
      - /var/lib/postgresql
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
    networks:
      - net-backend

  web:
    build:
      context: ./
      dockerfile: Dockerfile-web
    networks:
      - net-backend
    volumes:
      - /var/www/html
    ports:
      - "8080:80"
    depends_on:
      - database

networks:
  net-backend:
    driver: bridge

Now we can start the Docker containers and access it at http://<docker host>:8080/install

$ docker-compose -f docker-compose.yml up -d web
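Before opening the installer, you can confirm both containers came up.

$ docker-compose ps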

In the configuration page, enter the following for the database information:
Database type: PostgreSQL
Username: docker
Password: docker
Database name: docker
Host name: database
Port: 5432

Then copy the provided config into the /var/www/html/config.php file.

You’re all set. Want to get started quickly? Clone my git repository that has the files already created.

$ git clone https://github.com/xocite/docker-ex-ttrss