Mounting a local directory in a qemu/kvm virtual machine
28 January 2021

I usually run my virtual machines with my existing kernel, which means any modules I need have to be mounted inside the virtual machine. I normally use a 9p mount for this, but in some scenarios it's easier to use the virtual FAT (vfat) drive type in KVM/QEMU.

With the 9p mount, you can use:

qemu-system-x86_64 -fsdev local,path=/directory,security_model=mapped-xattr,id=9p,readonly=on -device virtio-9p-pci,fsdev=9p,mount_tag=9p

And mount it inside the guest with

mount -t 9p -o trans=virtio 9p /guest/directory

You can also use the virtual FAT filesystem.

qemu-system-x86_64 -drive file=fat:ro:/directory,id=vfat,format=raw,if=none -device usb-storage,drive=vfat
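
Inside the guest, the emulated USB stick shows up as an ordinary block device. A minimal sketch of mounting it, assuming the guest names it /dev/sda1 (the device name will vary with your guest's configuration):

mount /dev/sda1 /guest/directory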

Flashing Mobian to the eMMC from within the PinePhone
17 January 2021

Getting Mobian onto the PinePhone is manageable, but the given instructions didn't work on my device. I had to apply the following patch to get the Python installer to flash successfully.

Log into the PinePhone over SSH (or use the phone itself, you savage :P).

Download the installer from https://salsa.debian.org/Mobian-team/mobian-installer/-/blob/master/mobian_installer.py

Make the following changes before executing:

  • Remove the references to libparted and py and reference the system libraries instead with import parted and import py.
  • On line 307, replace logfn = os.path.join(os.environ["USER_PWD"], "mobian_installer.log") with logfn = os.path.join("/home/mobian", "mobian_installer.log")

Searching for and executing only specific file types by extension
16 January 2021

I recently needed to copy a bunch of videos over a slow network to a slow hard disk and didn't want to copy over extraneous files. I used a combination of rsync and find to make sure that only the files I needed were copied and played.

Sending files over
$ rsync -rv ./ --filter "+ */" --filter "+ *.mp4" --filter "+ *.mkv" --filter "- *" user@host:./

Executing (playing) the files
$ find . -type f \( -name "*.mkv" -o -name "*.mp4" \) -print0 | sort -Rz | xargs -0 -n1 omxplayer -r -o hdmi

A quick breakdown of the commands:

The -print0 and -0 options are GNU extensions that tell find and xargs to separate entries with a null character ('\0') rather than whitespace, so filenames containing spaces are handled correctly. sort -R randomises the output and -z makes it read null-terminated input. xargs -n1 passes the arguments one at a time to the following command.
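
A quick way to sanity-check the pipeline before handing it to the player is to substitute echo for omxplayer:

$ find . -type f \( -name "*.mkv" -o -name "*.mp4" \) -print0 | sort -Rz | xargs -0 -n1 echo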

Scheduling a shutdown with systemd

Occasionally, you'll make a change in Linux that might be a bit precarious – you commit the change with some hesitation, anticipating a problem or uncertainty on the next reboot. In these cases, it can be desirable to schedule a shutdown that fires if nothing is done within a certain time period.

I do this frequently when I'm testing changes to a Raspberry Pi that doesn't have an off button. It reduces the likelihood that I need to turn the device off by removing power (a shut-off method that can corrupt the SD card).

If systemd is available, you can create a new timer that executes after a set amount of time.

We’ll create a systemd unit that executes after the multi-user target has completed.

Create a small script to trigger the automatic poweroff.

/usr/local/bin/auto-poweroff.sh
#!/bin/bash
/usr/bin/sleep 60
/sbin/poweroff

/etc/systemd/system/auto-poweroff.service
[Unit]
Description=Automatically power off after a period of time
IgnoreOnIsolate=yes
After=ssh.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/auto-poweroff.sh

[Install]
WantedBy=multi-user.target

Finally, create the symlink in the multi-user.target.wants directory.

cd /etc/systemd/system/multi-user.target.wants/
ln -s ../auto-poweroff.service
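
Remember to make the script executable, and if the system comes up healthy you can cancel the pending poweroff by stopping the service before the sleep expires (stopping a oneshot service kills its running ExecStart):

chmod +x /usr/local/bin/auto-poweroff.sh
systemctl daemon-reload
systemctl stop auto-poweroff.service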

Enabling SSH + WiFi on Raspberry Pi OS
15 January 2021

Setting up SSH on Raspberry Pi OS is simple – create an empty file named ssh in the root of the boot partition and edit /etc/wpa_supplicant/wpa_supplicant.conf on the root partition to include the wireless network information.

# touch /mnt/raspberry_boot/ssh
# NETWORK_NAME=somewirelessssid
# NETWORK_PASSWORD=somewirelesspassword
# echo -e "network={\n  ssid=\"$NETWORK_NAME\"\n  psk=\"$NETWORK_PASSWORD\"\n  key_mgmt=WPA-PSK\n}" | tee -a /mnt/raspberry_root/etc/wpa_supplicant/wpa_supplicant.conf
# cat /mnt/raspberry_root/etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
  ssid="somewirelessssid"
  psk="somewirelesspassword"
  key_mgmt=WPA-PSK
}
# umount /mnt/raspberry*

Once the Raspberry Pi boots up, you can access it over the data USB cable if you’ve already configured that and also over the wireless network, presuming it can automatically be assigned an IP address.

Remember to enable ssh to run on each boot with systemctl enable ssh or via raspi-config.
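
Once it joins the wireless network, you can usually reach it over mDNS. This assumes the default hostname and user, so adjust if you've changed them:

ssh pi@raspberrypi.local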

Manually setting up host-only networking with a QEMU guest

By default, QEMU makes it very easy to connect a virtual machine to the Internet using the user-mode network with -netdev user. However, I'm using a custom configuration that connects my virtual machines to a pfSense instance, so I needed to add an extra bridge for host-to-guest communication.

You can create a local bridge and tap pair on the host by using the iproute2 set of tools.

# ip link add dev bridge00 type bridge
# ip tuntap add tap00 mode tap user $USER group kvm
# ip link set tap00 master bridge00
# ip link set dev bridge00 up
# ip link set dev tap00 up
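
You can then attach the guest to the tap when launching QEMU. A minimal sketch, where the ellipsis stands for the rest of your usual options, the netdev id is arbitrary, the interface name matches the tap created above, and script=no,downscript=no stops QEMU from running its own ifup/ifdown scripts:

qemu-system-x86_64 ... -netdev tap,id=hostnet,ifname=tap00,script=no,downscript=no -device virtio-net-pci,netdev=hostnet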

Now, for most use cases there's no need to set up a fancy DHCP server on the host to serve a single client, so I configure a simple static address and add the matching configuration in the guest's equivalent of rc.local.

(host) # ip addr add 192.168.123.1/24 broadcast 192.168.123.255 dev bridge00

Inside the guest you can give yourself a static IP and communicate with the host like so.

(guest) # ip addr add 192.168.123.2/24 dev ens1

Finally, bring the interface up in the guest.

# ip link set dev ens1 up

Now you should be able to SSH into or ping the guest over this private network.

[Note] Playing videos on a Raspberry Pi Zero W

Just a quick note – to play videos on a Raspberry Pi – use omxplayer instead of mplayer for maximum performance.

Quickly instantiating an Arch Linux systemd-nspawn container from Gentoo
6 December 2020

I recently needed to install the beets media organiser on Gentoo and found it needed a lot of packages to be unmasked. Rather than install it directly from pip, I opted to install it inside a small Arch Linux container. I consider this a trial run before I move my Firefox installation from the VM I'm authoring this post in to a container on the host.

Note that this method of working means you’ll have duplicate packages on your host system but, as they say, space is cheap, right?

Step 1: Acquiring the packages

Go ahead and download the bootstrap image from one of your favourite mirrors. The file name as of writing is ‘archlinux-bootstrap-2020.12.01-x86_64.tar.gz’. Note that I use the --xattrs command-line option for wget so I can remember where I got the file from. You can view the extended attributes by using `getfattr -d $FILENAME`.

$ wget --xattrs https://$HOST/archlinux/iso/2020.12.01/archlinux-bootstrap-2020.12.01-x86_64.tar.gz

Step 2: Creating the container directories

I always find this a bit of a chicken-and-egg problem – how do you name the directory in a way that reflects what you're going to use it for? I've generally stuck with using the date of initial creation as the directory name.

$ mkdir $HOME/containers/archlinux/2020-12-06
$ cd !$

Step 3: Extracting the bootstrap tarball into the container directory

Note that you need to run this as root.

# tar xzpf archlinux-bootstrap-2020.12.01-x86_64.tar.gz --strip-components=1

Step 4: Selecting a nearby mirror

Edit the mirrorlist file and uncomment the appropriate mirror.

# vim etc/pacman.d/mirrorlist

Step 5: Launching the container

This is where the magic of systemd-nspawn comes in. You can simply run the following (as root unfortunately)…

# systemd-nspawn -D ~user/containers/archlinux/2020-12-06
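
Inside the container you'll typically need to initialise pacman's keyring before installing anything. A rough sketch of the remaining steps (standard Arch bootstrap commands rather than anything from the original notes), ending with the beets install this was all for:

pacman-key --init
pacman-key --populate archlinux
pacman -Syu beets
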
A short guide to setting up Matrix on Linux
22 November 2020

I’ve been really fascinated by Matrix lately – it’s a set of APIs that makes it super easy to have decentralised chat rooms – even across disparate services like WhatsApp, Discord, Telegram, and Mattermost.  I wanted to set it up and see the performance on a VM running Docker.  Here’s how I configured the connection in a few simple steps.

Provisioning the VM

First I provisioned the Virtual Machine that would run the Synapse container.

Download Debian, verify signature, and set up a KVM in headless mode with port forwarding.

cd ~
mkdir iso hdd
wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-10.6.0-amd64-netinst.iso -O iso/debian-10.6.0-amd64-netinst.iso
wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA1SUMS -O iso/SHA1SUMS
wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA1SUMS.sign -O iso/SHA1SUMS.sign
gpg iso/SHA1SUMS.sign
gpg --keyserver keyring.debian.org --recv-keys DF9B9C49EAA9298432589D76DA87E80D6294BE9B
gpg --verify iso/SHA1SUMS.sign iso/SHA1SUMS
sha1sum iso/debian-10.6.0-amd64-netinst.iso
qemu-img create -f qcow2 -o nocow=on hdd/debian-docker-root.qcow2 64G
qemu-system-x86_64 -m 8G -enable-kvm -cpu host -sandbox on -smp 2 -name 'docker-host' -hda $HOME/hdd/debian-docker-root.qcow2 -cdrom $HOME/iso/debian-10.6.0-amd64-netinst.iso -netdev user,id=net0 -device e1000,netdev=net0

Run through the installation process.  I set 8GB of RAM so the installer automatically creates a reasonable swap partition.  I usually prefer to go through the process manually and then snapshot the rootfs.  I didn’t see any advantage of using OVMF UEFI firmware for this demo.

I set my hostname to ‘dkr’ and domain name to redshift7.  I disabled root by not entering a password.  For partitioning, I used the entire disk with all files in a single partition with an ext4 filesystem.  I created a 4G swap partition.

Next, I created a snapshot of the root filesystem so I could have a base image for the virtual machine.

qemu-img create -f qcow2 -o nocow=on -b debian-docker-root.qcow2 -F qcow2 hdd/debian-docker-root-s1.qcow2
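
You can confirm the new overlay is backed by the base image with qemu-img:

qemu-img info --backing-chain hdd/debian-docker-root-s1.qcow2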

I launched the VM again, this time with the new derivative image.  I also exposed a port for SSH to work on the external interface so I could install docker using an Ansible playbook.

qemu-system-x86_64 -m 8G -enable-kvm -cpu host -sandbox on -smp 2 -name 'docker-host' -hda $HOME/hdd/debian-docker-root-s1.qcow2 -netdev user,id=net0,hostfwd=tcp::20022-:22 -device e1000,netdev=net0

Next, I installed Docker using the lines below.

sudo apt update
sudo apt install git apt-transport-https ca-certificates wget software-properties-common gnupg2 curl python-apt
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian buster stable"
sudo apt update
sudo apt install docker-ce
sudo gpasswd -a local docker
sudo systemctl start docker
sudo systemctl enable docker
sudo systemctl start containerd
sudo systemctl enable containerd
sudo wget https://github.com/docker/compose/releases/download/1.27.4/docker-compose-Linux-x86_64 -O /usr/local/bin/docker-compose
sha256sum /usr/local/bin/docker-compose # check for 04216d65ce0cd3c27223eab035abfeb20a8bef20259398e3b9d9aa8de633286d
sudo chmod a+rx /usr/local/bin/docker-compose
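
A quick sanity check that the binary is installed and runnable:

docker-compose --version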

Next, I followed the excellent Synapse guide for getting the basic server configured.  I chose population3 as the server name and SQLite for storage.  I also changed the default system version of Python to Python 3.

sudo apt install virtualenv build-essential python3-dev libffi-dev python3-pip python3-setuptools sqlite3 libssl-dev virtualenv libjpeg-dev libxslt1-dev
mkdir ~/synapse
virtualenv -p python3 ~/synapse/env
source ~/synapse/env/bin/activate
pip install --upgrade pip
pip install --upgrade setuptools
pip install matrix-synapse # this command shows some crazy graphs
cd ~/synapse
python -m synapse.app.homeserver --server-name localhost --config-path homeserver.yaml --generate-config --report-stats=no
synctl start
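
A quick way to confirm the homeserver is answering is to query the client API's versions endpoint:

curl http://localhost:8008/_matrix/client/versions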

Next, I edited the produced homeserver.yaml file to make it a bit easier to debug the initial setup.

server_name: "localhost:8008"

As I'm running this in a virtual machine, I opened the QEMU monitor and added an additional port redirection. You can check existing port forwards in the monitor by using "info network" and "info usernet", and add one by using "hostfwd_add tcp::8008-:8008".
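
For example, at the (qemu) monitor prompt:

(qemu) info usernet
(qemu) hostfwd_add tcp::8008-:8008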

Next, you should be able to access your instance at http://localhost:8008.

Pending sections

  • Isolating the instance from the federation.
  • Running my own identity server.
  • Passing through /dev/urandom so initialisation uses the host's randomness.
The pervasiveness of microabrasions
16 November 2020

**DRAFT – Last Updated 2020-11-16**

The human body has multiple mechanisms to stay adaptable in the ever-changing present.  At the cellular level, there’s an evolving protection against the constant marching battery of micro-organisms.  This protection keeps you alive.  At the macro level, there’s your own personal drive to survive.  These two halves come together to form a cohesive whole – you.

However, these adaptations only work to keep you alive, in an acceptable homeostasis for which the bar is rather low.  Fed regularly?  Check.  Have a shelter?  Check.  Got a job?  Check.  Feel fulfilled?  …  There's a class of constant onslaughts that we're not well-adapted for – or you could say we've adapted to them inadequately – maladaptations.

These minor antagonists can't usually be used to propel you into a better place; rather, they're constantly sanding away your drive to become fulfilled.  Daily unpleasantries you dismiss or accept; an over-extended caffeine habit; a day-ending nightcap; a small lack of regular activity; less sleep than usual; or a chair that isn't a good fit.  The truth is, most of our day-to-day activity is structured around the same interactions with the world around us in order to nurse a homeostatic environment.

Due to our maladaptations, we either deny (recognise and reject) or accept (merge with our definition of the world) the things that slowly erode the internal concept of fulfillment – day after day, the concept of a satisfactory life reduces to the idea of a simple, monotonous, manual life of labour where the rewards are self-evident (no introspection or reflection required) and real.

While the idea of disconnecting from it all may sound pleasant, it would only be a travesty of the modern life that we're capable of.  We're Thinkers, constantly creating abstractions out of language out of language out of language.  Modern computer programming is all about collaboratively editing a shared mental model or map of how a computer works at a specific level of abstraction.  There's no emotion here even though it's methodical and expressive.  But at the end of the day, every computer engineer knows that a compiler (also an abstraction) will take the inputs and optimise them away for better execution in the more accurate computer model.

For modern language – the maxims of “if it bleeds, it reads” and “emotion deserves promotion” seem to be interwoven into all common watering holes.  This modern way of writing seems to cut through carefully built abstractions which help to promote a shared understanding of the world and antagonise people into a reactive state.  There’s no value here: no deeper meaning, no verbalisation that can define (label) the angst.
