There’s something insanely refreshing about a brand-new Linux install. I recently retired my Fortnite gaming rig and turned it into a Docker/CUDA server for running AI/ML workloads on my two NVIDIA GPUs. While Fortnite will certainly be missed, there’s much more utility in having a bespoke local Linux server.
Here are the typical things I do upon setting up a fresh bare-metal headless machine.
Schedule a daily wake up
It’s easy to take for granted that you have physical access to the server. But what happens if you shut down the computer by accident or there’s a power outage? I immediately make sure RTC wake-up is enabled in the BIOS and schedule a daily wake-up if the option is available.
I also schedule a daily wake-up in the OS for added convenience. Unfortunately, if you actually wanted to keep the server off and disconnected, your best bet would be to mangle the boot config until you have access again.
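On the OS side, this can be done with rtcwake from util-linux; a minimal sketch, assuming you want the machine back at 07:00 (the time and the choice to run it from root's crontab are illustrative):

```shell
# Arm the RTC alarm for 07:00 tomorrow without suspending the machine now.
# Run this daily (e.g. from root's crontab) so the alarm is always set.
rtcwake -m no -t "$(date -d 'tomorrow 07:00' +%s)"
```

The `-m no` mode only programs the alarm, so the server keeps running; the alarm simply guarantees it powers back on if it was ever off at that time.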
Set up the backup storage
All my local servers have an additional drive, either SSD or HDD, for backups. My preference is to have an ext4 mirrored RAID mounted at either /data or /srv. If I set up a *BSD instance, then I’ll opt for ZFS if I can.
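A mirrored ext4 array on Linux can be sketched with mdadm; the device names /dev/sdb and /dev/sdc are placeholders for whatever your two backup drives enumerate as:

```shell
# Build a two-disk RAID 1 array (device names are illustrative).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Format and mount it at /srv.
mkfs.ext4 /dev/md0
mkdir -p /srv
mount /dev/md0 /srv
# Persist the array and the mount across reboots.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
echo '/dev/md0 /srv ext4 defaults 0 2' >> /etc/fstab
```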
Backups are not yet as automated as I’d like, as I’m still fine-tuning the manual process before automating it. On Linux, I do a double backup. First I create an LVM logical volume snapshot of the root partition. I then take an image of the ext4 filesystem using e2image and place it on the additional drive. Here’s a sample invocation:
# lvs
  LV   VG Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root vg -wi-ao----
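The snapshot-and-image steps described above might look like the following; the 5G snapshot size is illustrative and only needs to absorb writes made to the origin while the backup runs:

```shell
# Create a snapshot of the root LV (name matches the commands below).
lvcreate --snapshot --size 5G --name root-snapshot-20190104 /dev/vg/root
# Dump the ext4 filesystem, including file data (-a), to a raw image (-r)
# on the backup drive.
e2image -r -a /dev/vg/root-snapshot-20190104 /srv/root-snapshot-20190104.img
```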
It's possible to combine the imaging and compression steps like so, by having mksquashfs run e2image as a pseudo-file, although this seems error-prone unless automated in a script.
# mktemp -d
# mksquashfs /tmp/tmp.FM0xNQ0v4r /srv/root-snapshot-20190104.squashfs -p "root-snapshot-20190104.img f 444 root root e2image -r -a /dev/vg/root-snapshot-20190104 -"
For the second backup, I'll just tar up the filesystem, ignoring /dev and mounted filesystems.
# mktemp -d
# mount /dev/vg/root-snapshot-20190104 /tmp/tmp.1cu7jszuVl
# echo "'root-snapshot-20190104.img': snapshot of system after initial install of Ubuntu 18.04 with OpenSSH server enabled by default" >> /srv/backup-history.log
# tar cf /srv/root-snapshot-20190104.tar --one-file-system /tmp/tmp.1cu7jszuVl
# umount /tmp/tmp.1cu7jszuVl
And finally I'll remove the snapshot and continue as if nothing happened :).
# lvremove /dev/vg/root-snapshot-20190104
Set the hostname
IP addresses are rather annoying to remember on a DHCP home network. They change frequently and are generally not what you want to use for a custom network. There are two solutions to this: either a) pin static DHCP leases to each machine's MAC address on the router; or b) use multicast DNS to reference machines by hostname. I always opt for the latter. On macOS, this works out of the box. On Ubuntu, you need to install Avahi. Normally this automatically configures /etc/nsswitch.conf, but in the event that's not done you'll need to change the hosts line yourself.
$ sudo apt install libnss-mdns
hosts: files mdns4_minimal [NOTFOUND=return] dns
Now you can ping your server at hostname.local.
Lock down the installation
Usually the only optional component I add is the OpenSSH server. I generate keys on my local computer and transfer them over using ssh-copy-id. Then I disable root login and password authentication.
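Concretely, that amounts to copying the key over and flipping two options in sshd_config; the username below is a placeholder:

```shell
# From the local machine: install your public key on the server.
ssh-copy-id user@hostname.local

# On the server, set these two lines in /etc/ssh/sshd_config:
#   PermitRootLogin no
#   PasswordAuthentication no
# then reload the daemon (the Ubuntu service is named "ssh"):
sudo systemctl reload ssh
```

Keep an existing session open while you reload, so you can back out if key-based login isn't working yet.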
If you don't feel like you're being plugged into the Matrix every time you open the terminal, you're doing something wrong. The terminal should feel like the de facto interface to your servers -- at least until something better comes along.