I’m a big fan of btrfs as the filesystem of choice not only for backups but also for the root filesystem. However, as I move around with my backups I sometimes want to restore them to different machines, and I always hesitate when faced with the prospect of formatting, partitioning, encrypting, and mounting a new drive. I wanted to document how I do it so I can have some consistency as I move from hard drive to hard drive, SSD to SSD.
In general, when I purchase hard drives I buy two of the same kind. I don’t usually like to go smaller than 2TB and recently I’ve been harbouring a certain liking for 4TB hard drives when I can get my hands on them. I’m a big fan of Western Digital and also of Seagate, especially at these larger sizes. You can view the WD Red series prices here (UK).
BackBlaze has a great website where they list the failure rates of their hard drives (link) so you can start here to prime your instincts.
This will take a while. Just like onboarding new RAM with memtest86, I’ll test my hard drives for bad blocks with, funnily enough, the badblocks tool. Arch Linux has a great wiki article about it here. Here are my default steps for new drives.
Run the smartctl self-tests.
smartctl -t short /dev/sda
smartctl -t conveyance /dev/sda # Used to check that the drive wasn’t damaged in transportation
And view the results. Note that smartctl -A prints the SMART attribute table; the self-test log itself lives under -l selftest.
smartctl -l selftest /dev/sda
smartctl -A /dev/sda
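The attribute I watch most closely is Reallocated_Sector_Ct. As a convenience, a small awk helper (the name smart_raw is my own, not part of smartmontools) can pull a single attribute’s raw value out of the smartctl -A table:

```shell
# Sketch: extract one attribute's raw value from `smartctl -A` output.
# In that table, column 2 is the attribute name and the last column is the raw value.
smart_raw() {
  awk -v attr="$1" '$2 == attr { print $NF }'
}

# Usage (assumes /dev/sda):
#   smartctl -A /dev/sda | smart_raw Reallocated_Sector_Ct
```

Anything non-zero there on a brand-new drive is a good reason to start the returns process.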
Next, run the badblocks command. You can speed this up by figuring out the block size of your hard drive.
hdparm -I /dev/sda | grep Sector # Get the block size, typically 512 or 4096
badblocks -b 4096 -ws -o badblocks-dev-sda.txt /dev/sda
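Since badblocks -ws destroys everything on the device, I like to put a small guard in front of it that refuses to touch anything currently mounted. This wrapper is my own convention, not part of badblocks:

```shell
# Sketch: refuse to run a destructive command against a mounted device.
# Note: this only checks the whole-device path, not its individual partitions.
destructive_ok() {
  dev="$1"
  if grep -q "^$dev " /proc/mounts; then
    echo "refusing: $dev is mounted" >&2
    return 1
  fi
  return 0
}

# Usage:
#   destructive_ok /dev/sda && badblocks -b 4096 -ws -o badblocks-dev-sda.txt /dev/sda
```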
From this mailing list post (link), it seems that modern drives will automatically notice a bad block and re-allocate to another location. So, before cursing the wind if you get a bad block, it’s probably valuable to run it again and see if the error heals itself.
At the expense of write speed, you can use hdparm to turn on the Write-Read-Verify support. This should also result in a lower likelihood of being bitten by writing corrupted data.
Finally, we can use the wipefs tool to make sure there are no filesystem signatures remaining.
wipefs -a /dev/sda
As shown in the encrypting section, if you really want to wipe the drive you can open the drive in raw dm-crypt mode and wipe it.
cryptsetup open --type plain -d /dev/urandom /dev/sda sda_luks
Usually, there’s no need to do anything fancy here. If the drive is a root drive, I add an EFI partition, a swap partition, a root partition, and leave a small amount of space at the end of the hard drive unpartitioned. The encryption step will be on the root partition (typically /dev/mapper/sda_luks3). If it’s a data drive, I typically encrypt the entire drive and then make a single partition to use for the data.
mkpart "EFI system partition" fat32 1MiB 301MiB
set 1 esp on
mkpart "Root partition" btrfs 301MiB -40960MiB
mkpart "Swap space" linux-swap -40959MiB -8192MiB
Depending on whether the drive is a root drive or a data drive, the order of this step and the partitioning step can be swapped – especially if you need an EFI partition.
I’ve been using LUKS (Linux Unified Key Setup) for over a decade and I’ve found it rock-solid for hard drive encryption. My settings of choice have evolved over the years and today I use a very modern set of options. Cryptsetup, the administration interface to dm-crypt and LUKS, has a lot of options that let you really (shoot yourself in the foot) customise the configuration. In general, for data drives I use keyfiles instead of passphrases and store an encrypted copy of the keyfile in a different building (virtual or otherwise) than the one in which the drive lives. For root drives, I use a passphrase and store a copy of the LUKS header in a different building.
More recently, the dm-integrity (GitLab; Kernel; presentation) component enables even more assurance that your data is not silently becoming corrupted and I’ve started to enable that as well. It’s noted as experimental so I use a very minor configuration that hasn’t (touch wood) caused problems to date.
One quick thing to note before diving into encryption: it’s important to establish your threat model. For me, I’m simply trying to avoid any hiccups that could happen in the event that a drive is stolen.
First, I encrypt the drive using luksFormat. I add a few custom flags for my particular risk tolerance of encryption strength. If speed is a concern, you can first run cryptsetup benchmark and see which algorithms are most performant for your setup. On my machine, argon2id was the strongest against brute-force attempts on the key, and AES-XTS was the most performant for reading and writing encrypted data.
Before doing the below, if you’re really paranoid, as per the cryptsetup FAQ (GitLab) you can overwrite the hard drive with random data before getting started.
cryptsetup open --type plain -d /dev/urandom /dev/sda sda_luks
dd if=/dev/zero of=/dev/mapper/sda_luks oflag=direct status=progress bs=1M
cryptsetup close sda_luks
cryptsetup --key-size 512 --hash whirlpool --iter-time 5000 --use-random --cipher aes-xts-plain64 --pbkdf-memory=4194304 --pbkdf=argon2id --integrity hmac-sha256 luksFormat --type luks2 /dev/sda
If it’s a data drive, I’ll generate a key-file and use that.
dd if=/dev/random of=./sda-keyfile bs=512 count=1 # count=1: without it, dd would read from /dev/random forever
cryptsetup --key-size 512 --key-file ./sda-keyfile --hash whirlpool --iter-time 5000 --use-random --cipher aes-xts-plain64 --pbkdf-memory=4194304 --pbkdf=argon2id --integrity hmac-sha256 luksFormat --type luks2 /dev/sda
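A couple of keyfile habits have saved me grief over the years: lock down the file’s permissions and record a checksum so the offsite copy can be verified later. The paths and the checksum file are my own convention, and while I use /dev/random on the real machine, /dev/urandom below is equivalent on modern kernels:

```shell
# Sketch: generate a keyfile, lock down permissions, and record a checksum
# so the offsite (encrypted) copy can be verified against it later.
dd if=/dev/urandom of=./sda-keyfile bs=512 count=1  # count=1 caps it at exactly 512 bytes
chmod 0400 ./sda-keyfile                            # owner read-only
sha256sum ./sda-keyfile > ./sda-keyfile.sha256
```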
Next, I immediately back up the header and store it somewhere safe.
cryptsetup luksHeaderBackup --header-backup-file /root/sda_header.luksHeader /dev/sda
Now I can open the device and begin partitioning and setting up the filesystem.
cryptsetup open --type luks2 /dev/sda sda_luks
Creating the filesystem
The final stage of this abstraction sandwich is to configure btrfs. This is my favourite part: 1) because we’re almost done; and 2) because it’s the abstraction level I deal with the most.
A unique component of btrfs is that it presents a subvolume hierarchy on top of the filesystem you see at the command line. I usually don’t build the filesystem structure with the root filesystem at the top of the hierarchy; instead I create a tree like so:
/toplevel # the parent of all subvolumes
/toplevel/savestate # this is where snapshots are stored
/toplevel/rootfs # where the rootfs will go
/toplevel/storage # where data is stored, in the case of a data drive
One thing I’ve learned from using btrfs is that if you don’t configure quotas, you will rarely know how much space you’ll save by deleting a given snapshot. So I usually enable that option when configuring the initial structure.
mkfs.btrfs --csum xxhash /dev/mapper/sda_luks
mount -o autodefrag,compress=lzo /dev/mapper/sda_luks /toplevel
btrfs quota enable /toplevel
btrfs subvolume create /toplevel/rootfs
btrfs subvolume create /toplevel/rootfs/gentoo
btrfs subvolume create /toplevel/savestate
btrfs subvolume create /toplevel/storage
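With the subvolumes created, the last piece is mounting them at boot. On a root drive my /etc/fstab ends up looking roughly like this (the /.snapshots mount point for savestate is my convention; adjust the subvolume names to your tree):

```
# /etc/fstab (sketch)
/dev/mapper/sda_luks  /            btrfs  subvol=rootfs/gentoo,compress=lzo,autodefrag  0 0
/dev/mapper/sda_luks  /.snapshots  btrfs  subvol=savestate,compress=lzo                 0 0
```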
And there you have it, a very simple but practical way of managing the longevity of your system with a systematic approach to formatting, partitioning, encrypting, and creating filesystems on your hard drives.