Jiff Slater
28 Jul 2021

My favourite tweaks for Firefox
8 July 2021

I’ve been using Firefox for the better part of 15 years and along the way I’ve collected a few particular preferences for how I use the application. I’ve grouped them below and tried where possible to make them applicable to the latest available versions.


userChrome.css

In userChrome.css you can configure modifications to the stylesheet used to render the Firefox UI. The file is stored in a directory called chrome inside your profile directory.

Before you create the file, go ahead and enable support for the file in about:config by setting the toolkit.legacyUserProfileCustomizations.stylesheets boolean preference to true.

In this file, I’ve configured the following:

@namespace url("http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"); /* only needed once */

/* Pulled from https://support.mozilla.org/de/questions/1239330 and sets the minimum width of pinned tabs */
.tabbrowser-tab[pinned] { width: 45px !important; }
.tab-label-container[pinned] { visibility: hidden !important; }

/* Pulled from https://github.com/piroor/treestyletab/wiki/Code-snippets-for-custom-style-rules#for-userchromecss and hides the top tab bar */
#tabbrowser-tabs {
  visibility: collapse !important;
}

/* Pulled from above website. This removes the large sidebar header specifically for Tree Style Tabs */
#sidebar-box[sidebarcommand="treestyletab_piro_sakura_ne_jp-sidebar-action"] #sidebar-header {
  display: none;
}

/* Make that tab listing a bit smaller */
#sidebar {
  min-width: 100px !important;
}

Compact density

I took this one straight from userChrome.org so you can follow along there. In a nutshell, you can edit the browser.uidensity setting in about:config and set it to one of the following options:

  • 0 - normal density (the default)
  • 1 - compact density
  • 2 - touch density

Along with the above (while I still can) I’ve turned off the new Proton UI by setting browser.proton.enabled to false. I imagine at some point this won’t work, so I’ve also configured the backup option, which reduces the spacing between menu items, by setting browser.proton.contextmenus.enabled to false.

Further settings in about:config.

  • browser.tabs.closeTabByDblclick - true - close tabs by double-clicking on them
  • browser.tabs.insertAfterCurrent - true - always open new tabs to the right of the active tab
  • browser.tabs.tabMinWidth - 94 - set the minimum tab width to 94 pixels
  • widget.content.allow-gtk-dark-theme - true - enable custom GTK dark themes

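
If you’d rather not click through about:config for each of these, the same preferences can be written to a user.js file in the profile directory, which Firefox re-applies at every launch. A minimal sketch (the profile path here is an example; find your real one under about:profiles):

```shell
# Example profile path -- substitute your own from about:profiles.
profile="$HOME/.mozilla/firefox/example.default-release"
mkdir -p "$profile"

# user.js is read at startup and re-applies these prefs on every launch.
cat >> "$profile/user.js" <<'EOF'
user_pref("toolkit.legacyUserProfileCustomizations.stylesheets", true);
user_pref("browser.tabs.closeTabByDblclick", true);
user_pref("browser.tabs.insertAfterCurrent", true);
user_pref("browser.tabs.tabMinWidth", 94);
user_pref("widget.content.allow-gtk-dark-theme", true);
EOF
```

Restart Firefox afterwards for the prefs to take effect.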
Setting up Kanboard for local project management
13 May 2021

I’ve used Trello quite frequently in the past to manage my work items but wanted to move to a self-hosted version that didn’t mine my every action or offer to integrate with various other services and companies that distract me from the work. I came across Kanboard, which is an open-source, self-hosted kanban application. The installation instructions recommend Docker, but I opted to use the stock server software that comes with Debian.

Here’s how I did it.

Installing and configuring Apache, Postgresql, and PHP

First, we’ll configure the LAPP stack.

We’ll install the services and related PHP extensions.
# apt install apache2 php7.3 php-pgsql libapache2-mod-php
# apt install php-common php-xml php-gd php-mbstring php-json php-common php-zip php-fpm

Check that Apache2 started successfully. You should be able to view the default page at http://localhost.
# systemctl status apache2

Next, we’ll enable PHP-FPM to have a separate user for each web site. FPM is the process manager for PHP.

Before we get started we’ll need to create a separate, isolated user for FPM.
# adduser --no-create-home --uid 5000 --disabled-password fpm-kanban

Start by creating a new pool.
# cd /etc/php/7.3/fpm/
# vim pool.d/kanban.conf
; Kanban pool
user = fpm-kanban
group = fpm-kanban
listen.owner = www-data
listen.group = www-data
listen = /run/php/php7.3-fpm-kanban.sock
pm.max_children = 100
pm = ondemand
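
Before going further you can sanity-check the pool file; this assumes Debian’s stock php7.3-fpm layout:

```shell
# Parse all FPM configuration, including pool.d/kanban.conf.
# Exits non-zero and names the offending line on errors.
php-fpm7.3 -t
```

After the restart further down, the socket /run/php/php7.3-fpm-kanban.sock should appear, owned by www-data.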

Enable proxy and FPM support in Apache. Restart Apache afterwards and make sure FPM is running.
# a2enconf php7.3-fpm
# a2enmod proxy
# a2enmod proxy_fcgi
# systemctl restart apache2
# systemctl restart php7.3-fpm

Edit the php.ini configuration and turn off allow_url_fopen to reduce the attack surface of Kanboard.
# vim /etc/php/7.3/fpm/php.ini
allow_url_fopen = Off

Extracting and starting Kanboard

Configure the permissions for /var/www.
# chown -R root:www-data /var/www

Make a directory for kanboard. Apache will access the directory with www-data permissions but will send PHP connections to PHP-FPM which will run the scripts under the fpm-kanban user.
# mkdir /var/www/kanboard
# chown -R www-data:www-data /var/www/kanboard

In the Apache configuration directory create a new site.
# cd /etc/apache2
# a2dissite 000-default.conf
# touch ./sites-available/001-kanboard.conf
# vim !$

<VirtualHost *:80>
    ServerName kanban.topology.aves
    ServerAdmin webmaster@localhost
    ErrorLog ${APACHE_LOG_DIR}/kanboard_error.log
    CustomLog ${APACHE_LOG_DIR}/kanboard_access.log combined
    DocumentRoot /var/www/kanboard

    <Directory /var/www/kanboard>
        Options -Indexes
        AllowOverride All
        Require all granted
    </Directory>

    <FilesMatch \.php$>
        SetHandler "proxy:unix:/run/php/php7.3-fpm-kanban.sock|fcgi://localhost"
    </FilesMatch>
</VirtualHost>

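Before enabling the site, it’s worth letting Apache parse the new file:

```shell
# Expect "Syntax OK"; errors point at the offending line in the vhost file.
apachectl configtest
```
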

Then we’ll test that the new site is working.

# cp /var/www/html/index.html /var/www/kanboard/
# chown www-data: /var/www/kanboard/index.html
# a2ensite 001-kanboard.conf
# systemctl reload apache2

Add the relevant information to your DNS resolver. I’m using Unbound.  This will redirect all subdomains under hostname.tld to hostname.tld.  Take note of the IP address used.
local-zone: "hostname.tld" redirect
local-data: "hostname.tld 86400 IN A"

Visit the site and make sure it’s presented correctly at http://kanboard.hostname.tld

Next, we’ll test that PHP is working as intended by creating a sample PHP file.

# echo '<?php phpinfo(); ?>' > /var/www/kanboard/index.php

Navigate again to http://kanboard.hostname.tld and you should see the PHP debug information.

Next we’ll create a database in Postgresql to store the data. First make sure the database is online, then create the database and user. \du confirms the role exists and \l lists the databases.
# systemctl status postgresql
# su -l postgres
$ psql
postgres=# create database kanboard;
postgres=# create user kanban with login password 'kpassword';
postgres=# \du
postgres=# grant all privileges on database kanboard to kanban;
postgres=# \l
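
To confirm the grants took, you can connect as the new role over TCP (using the kpassword password chosen above):

```shell
# \conninfo prints the user, database, and host/socket the session landed on.
PGPASSWORD=kpassword psql -h localhost -U kanban -d kanboard -c '\conninfo'
```

If this is rejected, check that pg_hba.conf allows password logins on localhost.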

Next, we’ll extract Kanboard and set up the database.

Download the latest release and extract it into the document root.
# cd /var/www/kanboard
# wget https://github.com/kanboard/kanboard/archive/refs/tags/v1.2.19.tar.gz
# tar xzf v1.2.19.tar.gz --strip-components=1
# chown -R www-data: ./

Finally, configure database access for Kanboard in the config.php file.
# mv /var/www/kanboard/config_default.php /var/www/kanboard/config.php
# vim /var/www/kanboard/config.php

define('DB_DRIVER', 'postgres');

// Postgres parameters
define('DB_USERNAME', 'kanban');
define('DB_PASSWORD', 'kpassword');
define('DB_HOSTNAME', 'localhost');
define('DB_NAME', 'kanboard');

Now, navigate to http://kanban.hostname.tld and log in with the default credentials of admin/admin.

I’ll write another post at a later date explaining how I use Kanboard to manage my day-to-days! Enjoy!

Wireguard + other VPNs managed by NetworkManager
13 February 2020

I’ve been using a mixture of Wireguard and OpenVPN devices in NetworkManager recently. As mentioned in my prior post, I mark the Wireguard interface as unmanaged and bring up the interface manually with a shell script. Most *.ovpn files have redirect-gateway def1, which sends all traffic, except the local LAN traffic, over the VPN. This means that separate LANs you’ve created outside of the VPN will be inaccessible.

There are a couple ways to resolve this depending on how you have the VPN connection configured. In my case, I have a *.ovpn file that I import into NetworkManager and the routes are included in the file. It’s common for OpenVPN files to contain a “redirect-gateway def1” clause which causes all network traffic originating from the client to pass through the OpenVPN server. (Side note: def1 adds two routes covering 0.0.0.0/1 and 128.0.0.0/1, which take precedence over the existing default route without replacing it.)

To resolve this, I added a new line into my *.ovpn file to reference the existing Wireguard LAN I created.


You can also resolve this by adding a new “via” route to the kernel routing table.

Note — if you followed my guide exactly in the prior post then this route will already exist and no change will be needed.
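
As an illustration, suppose the Wireguard LAN is 10.100.0.0/24 and the real gateway is 192.168.1.1 on eth0 (both hypothetical addresses, not from my setup). The two approaches would look like:

```shell
# Option 1: a route directive in the .ovpn file. The net_gateway keyword
# tells OpenVPN to send this prefix via the pre-VPN default gateway
# instead of the tunnel:
#   route 10.100.0.0 255.255.255.0 net_gateway

# Option 2: the equivalent one-off "via" route in the kernel routing table.
ip route add 10.100.0.0/24 via 192.168.1.1 dev eth0
```
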

7 June 2019

My plkt.io website is live and built with GatsbyJS and AWS Amplify. I’m really excited about the possibilities here. Head on over to read about my plans: plkt.io.

Using Amplify to launch your first Gatsby app
29 May 2019

I’m building the future of this website on plkt.io. I’m using React, Gatsby, and Amplify to build a single page application that’ll be performant, accessible, and modular.
This tutorial will show you how I got started from only a web development background of HTML/CSS/Javascript.

Resources and definitions

React: JavaScript library for building user interfaces using declarative, reusable components. Created by Facebook.
Gatsby: JavaScript framework for building static web apps. Created by thousands of community members.
Amplify: PaaS that coordinates between backend and frontend across various platforms. Created by Amazon.

Installing software

Get started by installing Node.js followed by Gatsby and Amplify. I’m using Gentoo as my local dev environment so I will use Portage to install Node.js, which provides the npm package manager. If you don’t have git installed you will need that too.
# emerge --ask --verbose net-libs/nodejs dev-vcs/git
# npm install --global gatsby-cli @aws-amplify/cli

Create sample app

Once installed, we’ll use Gatsby to serve a basic page and then enable remote building and deployment using Amplify.
~ $ gatsby new hello-world
~ $ cd hello-world
~/hello-world $ gatsby develop

Now we have a new hello world single page application viewable at localhost:8000. Open the provided address and make sure you can access the web page. If you’re using Visual Studio Code remote development, then forward the port to your local machine.

Deploy to Amazon S3 with Amplify

Next we can configure Amplify as a publishing path. You can use the defaults for most options. Some of the configuration will require you to open the AWS Web Console to create IAM users.

~/hello-world $ amplify configure
~/hello-world $ amplify init

Now we can publish the sample app to an S3 bucket. Run these commands and select “DEV (S3 only with HTTP)” when prompted.
~/hello-world $ amplify add hosting
~/hello-world $ amplify publish

After about 2 – 3 minutes you should see a link to the S3 bucket with your web app deployed.

And that’s it — you now have used the modern JavaScript, APIs, and Markup (JAM) stack to deploy a web app. I highly recommend you install VS Code and walk yourself through the tutorials available on the Gatsby and Amplify homepages.
Gatsby tutorial: https://www.gatsbyjs.org/tutorial/part-one/
Amplify tutorial: https://aws-amplify.github.io/docs/js/react

Immediate broken pipe when connecting via SSH on Gentoo under VMWare Fusion

Recently when cloning repositories through GitHub I was facing immediate broken pipe issues.

bnt@gentoo ~ $ ssh git@github.com
packet_write_wait: Connection to port 22: Broken pipe

The fix is to change the IP Quality of Service setting to “throughput” in your SSH client configuration (~/.ssh/config).

Host *
  IPQoS throughput
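
You can verify the option is being applied with ssh -G, which prints the effective client configuration for a given host:

```shell
# With the Host * stanza in place, this should print "ipqos throughput".
ssh -G github.com | grep -i '^ipqos'
```
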
This is a known issue and being tracked by a bug on the open-vm-tools GitHub repository [link]. I’m running VMWare Fusion 11.0.3 and open-vm-tools 10.3.10 on Gentoo with kernel 4.19.27 and ssh version 7.9_p1-r4.

Quickly setting up a local Tiny Tiny RSS instance with Docker
3 January 2019

I’ve been manufacturing scenarios to test Docker lately and one which came to mind is setting up a local RSS reader instance. It took me a while to parse all the necessary config but the result is very satisfying. You’ll need Docker and Docker Compose to run the below.

The configuration below creates two Docker containers: one for the database, and one for Apache + PHP, on which Tiny Tiny RSS runs.

This also assumes that you already have something running on ports :80 and :443, so we’ll forward the web frontend to a different port.

Create the volume for the Postgres container.

$ mkdir -p ~/docker/rss
$ cd ~/docker/rss
$ mkdir database_vol

Create the Dockerfile for the web frontend.

# Dockerfile-web
FROM php:7.2-apache
WORKDIR /var/www/html
RUN apt-get update && apt-get install -y libpq-dev git libpng-dev libfreetype6-dev libjpeg62-turbo-dev libxml2-dev
RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include --with-jpeg-dir=/usr/include
RUN docker-php-ext-configure xml --with-libxml-dir=/usr/include
RUN docker-php-ext-install pdo_pgsql mbstring json gd xml opcache pgsql intl
RUN docker-php-ext-enable opcache
RUN git clone https://tt-rss.org/git/tt-rss.git ./
RUN chown -R www-data:www-data /var/www/html
RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
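
If you want to check the Dockerfile in isolation before writing the Compose file, you can build it directly (ttrss-web is just an arbitrary local tag):

```shell
# Build the web image from the Dockerfile above, in the current directory.
docker build -f Dockerfile-web -t ttrss-web .
```
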

Now create the docker compose file to bundle the containers together.

# docker-compose.yml
version: "2"
services:
  database:
    image: postgres
    environment:
      - POSTGRES_USER=docker
      - POSTGRES_PASSWORD=docker
    volumes:
      - ./database_vol:/var/lib/postgresql
    networks:
      - net-backend

  web:
    build:
      context: ./
      dockerfile: Dockerfile-web
    networks:
      - net-backend
    volumes:
      - /var/www/html
    ports:
      - "8080:80"
    depends_on:
      - database

networks:
  net-backend:
    driver: bridge
Now we can start the Docker containers and access it at http://<docker host>:8080/install

$ docker-compose -f docker-compose.yml up -d web

In the configuration page, enter the following for the database information:
Database type: PostgreSQL
Username: docker
Password: docker
Database name: docker
Host name: database
Port: 5432

Then copy the provided config into the /var/www/html/config.php file.

You’re all set. Want to get started quickly? Clone my git repository that has the files already created.

$ git clone https://github.com/xocite/docker-ex-ttrss

Setting up a Docker Pi-hole DNS server for wired and wireless clients
21 November 2018

Pi-Hole is a DNS resolver that prevents the resolution of common ad-hosting networks. I have a server in my household that I wanted to run as a Pi-hole server for both Ethernet and wireless clients. Here’s how I did it. Keep in mind that when changing the network configuration it’s wise to do it incrementally and test each step to avoid making a mistake and not being able to troubleshoot. In addition, Pi-hole was originally designed to be the only thing installed on a Raspberry Pi so to make the configuration less invasive on my existing system, I’ll be using the official Docker container. For a much simpler installation, go ahead and run the curl | bash command on their home page.

Network topology

You’ll need to get a good idea of your current network topology before continuing. In my case, I wanted to let this be opt-in for other clients on the network because I didn’t want to cache other people’s DNS requests. This means I wouldn’t alter the DNS settings on the router.
First, I mapped out my current network topology. This is pretty easy to do if you just trace the cables in the house. Your set up will probably match mine:

  • WAN from your internet provider connects to a DOCSIS modem.
    • This modem provides WiFi (normally 802.11ac) to your IoT devices, mobile phones, and other connected devices.
    • It may also be connected to a wireless repeater to resolve dead spots in the house.
    • It also provides wired Ethernet.
  • This wired Ethernet may be connected to a switch to reduce cables across the home.
  • It may optionally have telephone ports for VoIP.

A simpler home set up might only have wireless clients.

My configuration mirrors the above and my server is connected to the switch mentioned. Next step is to look at the current configuration according to your devices. You’ll need to gather the interface settings for your router and your server.

In my case,

  • Router
    • Connected to: WAN from internet provider
    • IP address:
    • DHCP settings: to 192.168.254, subnet mask:
    • Built in DNS server available on: and
  • Server
    • Connected to: switch, which is connected to modem
    • IP address (Ethernet):
    • IP address (Wireless): not configured
    • DHCP settings: same as router

With this in mind, we want to configure the server to act as a wireless hotspot for the Ethernet connection while also providing DNS for both wireless and wired clients. Fortunately, this is pretty simple to do, once you know which apps and files are needed.
This guide uses Debian 9 and NetworkManager.
First, we’ll configure the wireless access point and make sure clients can connect. Look at your current configuration:

$ nmcli
eno1: connected to Wired connection 1
"Intel Ethernet Connection I217-LM"
ethernet (e1000e), AA:BB:CC:DD:EE:FF, hw, mtu 1500
ip4 default

wlp3s0: disconnected
"Intel Wireless"
wifi (iwlwifi), AA:BB:CC:DD:EE:FF, hw

lo: unmanaged
loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
interface: eno1
Next, create a wireless hotspot, confirm you can connect, and then delete it.
$ sudo nmcli --show-secrets dev wifi hotspot
Hotspot password: xMNUYLGH
Device 'wlp3s0' successfully activated with '95f843c0-18b4-4133-a27f-9d3eb12be8e7'.
[.. connect to the device ..]
$ sudo nmcli connection down uuid 95f843c0-18b4-4133-a27f-9d3eb12be8e7
$ sudo nmcli connection delete uuid 95f843c0-18b4-4133-a27f-9d3eb12be8e7

Now that we’re certain we can create a hotspot we can configure it to our preferences.
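
A sketch of a hotspot with an explicit SSID and passphrase (the names here are examples, not from my setup):

```shell
# Create a WPA-PSK hotspot on the wireless interface with chosen credentials.
nmcli dev wifi hotspot ifname wlp3s0 ssid example-ap password "examplepass1"
```
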

Pi-hole with Docker

Installing Docker is relatively simple. We’ll enable the HTTPS functionality for their repository and then download the Community Edition of Docker.

$ sudo apt install gnupg2 curl ca-certificates apt-transport-https software-properties-common

Install their GPG key. You can verify the fingerprint by comparing the output from the below command with their official documentation [link]. Last time I checked, the fingerprint’s last 8 characters were: 0x0EBFCD88.

$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

Next, enable the stable repository for this release. In my case I’m using Debian Stretch.

$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"

Finally, download Docker.

$ sudo apt update
$ sudo apt install docker-ce

Confirm that it works.

$ sudo docker run hello-world

If this works, add yourself to the Docker group and log out and then log in.

$ sudo usermod -aG docker `whoami`

Now we can launch the Pi-hole Docker container and configure it to act as a DNS server. We’ll use the following configuration settings.

  • Host mode: the container’s network stack is shared with the host. This is necessary when exposing port 53 for DNS, 80 for the web interface, and 443 for SSL-delivered ads.
  • DNS from Cloudflare: 1.1.1.1 and 1.0.0.1
  • Environment variables
    • ServerIP=; the IP of the server on the local network
$ docker pull pihole/pihole
$ mkdir -p ~/local/docker/pihole/pihole/etc/{pihole,dnsmasq.d}
$ docker run \
--name pihole \
-p 80:80 \
-p 53:53/tcp \
-p 53:53/udp \
-p 443:443/tcp \
-p 443:443/udp \
-v ~/local/docker/pihole/pihole/etc/pihole:/etc/pihole \
-v ~/local/docker/pihole/pihole/etc/dnsmasq.d:/etc/dnsmasq.d \
--dns=1.1.1.1 \
--dns=1.0.0.1 \
-e ServerIP= \
-e IPv6=False \
-e DNS1=1.1.1.1 \
-e DNS2=1.0.0.1 \
-e WEBPASSWORD=password \
pihole/pihole

If you get some sort of error such as “Couldn’t bind to :80 because already in use”, correct the error, delete the container, and try again.

$ sudo systemctl stop apache2
$ sudo systemctl disable apache2
$ docker container list -a
$ docker container rm <container>

Now finally, connect to your container by navigating to http://<server_ip> on a different computer.

You can also check that your container has network access by:

$ docker container exec pihole ping www.google.com

Now that the Docker container is up and running, go ahead and change the settings on your wired interface to use the IP address of your server as the DNS address.

For wireless clients, we’ll configure the hotspot again, this time setting the DNS to use our server. Notice that installing Docker has changed our networking configuration.

$ sudo nmcli
docker0: connected to docker0
bridge, 02:42:FB:FA:35:DE, sw, mtu 1500
inet6 fe80::42:fbff:fefa:35de/64

veth9259d68: unmanaged
ethernet (veth), 72:FD:6C:AD:CE:D9, sw, mtu 1500

DNS configuration:
interface: eno1

Now we have two more interfaces: docker0 and veth9259d68. Unfortunately, on my end when I create the hotspot, clients aren’t issued an IP address. Let’s debug NetworkManager and see what routes are being created.

Create the hotspot with nmcli

$ sudo nmcli --show-secrets dev wifi hotspot

Now, we’ll use the lower level networking tools to see what’s happening.

$ ip r
default via dev eno1 proto static metric 100
dev wlp3s0 proto kernel scope link src metric 600
dev eno1 scope link metric 1000
dev docker0 proto kernel scope link src
dev eno1 proto kernel scope link src metric 100

Next, let’s look at the configuration file NetworkManager creates for the hotspot.

$ cat /etc/NetworkManager/system-connections/Hotspot
...
[ipv4]
method=shared
Here, the culprit is the [ipv4] method=shared line. In the nm-setting-ip4-config.c file, we can see the following description for this setting.

* NetworkManager supports 5 values for the #NMSettingIPConfig:method property
* for IPv4. If “auto” is specified then the appropriate automatic method
* (DHCP, PPP, etc) is used for the interface and most other properties can be
* left unset. If “link-local” is specified, then a link-local address in the
* 169.254/16 range will be assigned to the interface. If “manual” is
* specified, static IP addressing is used and at least one IP address must be
* given in the “addresses” property. If “shared” is specified (indicating that
* this connection will provide network access to other computers) then the
* interface is assigned an address in the 10.42.x.1/24 range and a DHCP and
* forwarding DNS server are started, and the interface is NAT-ed to the current
* default network connection. “disabled” means IPv4 will not be used on this
* connection.

So from this description, it seems like the problem is the DHCP and forwarding DNS server aren’t starting correctly. Let’s look at the NetworkManager logs and see if anything is awry. We’ll also stop the Pi-hole container to avoid any other issues.

$ docker stop pihole
$ sudo journalctl -u NetworkManager --since "1 hour ago"

Walking through the logs is quite enlightening. (1) We see that NetworkManager creates IPtables entries for the interface, including to forward DNS and DHCP ports to the local DNSmasq instance. (2) We see that dnsmasq-manager failed to create a listening socket due to the address already in use by the Docker container.

Now – before rushing ahead and trying to fix this, it’s important to restate what we’re trying to accomplish here. Approaching the problem with the mindset of “how do I fix this” is wrong and will lead you down a DuckDuckGo / StackOverflow rabbit hole. In this scenario, we’re trying to issue an IP address to clients on the wlp3s0 interface. In addition, we want these clients to use the server as the DNS server so their DNS requests go through the Pi-hole Docker container.

Modify the default settings for shared IP interfaces.

$ sudo vim /etc/NetworkManager/dnsmasq-shared.d/default.conf
# Disable local DNS server
port=0

# Use Pi-hole for DNS requests
dhcp-option=option:dns-server,<server_ip>
Now try restarting the docker container and the wireless hotspot. Check the log for errors.

$ docker start pihole
$ sudo nmcli --show-secrets dev wifi hotspot
$ sudo journalctl --since "1 minute ago" -u NetworkManager

No errors should be seen. Connect via your wireless device and confirm that new blocked entries are being inserted into the Pi-hole dashboard by going to your server IP address.

So in summary, we set up Pi-hole on Docker in Debian Stretch to block common ad-hosting networks for both wired and wireless clients on our home network. For me, this was a good test scenario to become more familiar with Docker.

Overall, I think that host based ad-blocking won’t be effective much longer as more and more content gets bundled with ads behind content delivery networks. The best practice regarding ads, in my opinion, is to only visit sites with acceptable ad practices. This means no pop-overs/pop-unders or stealing focus as well as not tracking you incessantly across the web. I suspect that ad-blocking has and will continue to move client-side. A simple way to avoid the most nefarious of ads is to use the Mozilla multi-container extension which lets you separate your online life into separate entities.

Running a Bitcoin node
1 January 2018

Setting up a Bitcoin node can be a bit daunting, especially considering the amount of disk space required and that the node needs to be always connected. However, once configured, maintenance can be relatively hands-off. For more information about the minimum requirements please see here.

This tutorial will be split into two stages: one, configuring the server itself to be relatively secure and resilient against basic attacks; and two, configuring the Bitcoin daemon on the server.

Stage one: securing the server

Let’s get the system up to date and then configure the stateful firewall.

# yum upgrade
# yum install vim iptables-services

And we’ll move SSH to a different port so we can reduce the number of login attempts considerably. As this is CentOS, SELinux will need to be informed of the change so that the SSH daemon can bind to the new port.

# vim /etc/ssh/sshd_config
Set Port to 1234 or something non-standard
# semanage port -a -t ssh_port_t -p tcp 1234 
# systemctl reload sshd
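
You can confirm that both SELinux and sshd picked up the change:

```shell
# The ssh port type should now list 1234 alongside 22.
semanage port -l | grep ssh_port_t

# sshd should be listening on the new port.
ss -tlnp | grep sshd
```
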

And log back in using the new port to take a look at the network interfaces.

[user@local] $ ssh root@bitcoin -p 1234

Now let's understand the current network topology.

$ ip addr
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 92:53:fb:96:86:27 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
       valid_lft forever preferred_lft forever
    inet brd scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2400:6180:0:d0::1f6:2001/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::9053:fbff:fe96:8627/64 scope link
       valid_lft forever preferred_lft forever

We can see there are two network interfaces – lo, the loopback interface, and eth0, the Internet facing interface. The loopback (lo) is assigned 127.0.0.1/8 (IPv4) and ::1/128 (IPv6). The Ethernet interface (eth0) has four addresses: the public and private IPv4 addresses, and the global and link-local IPv6 addresses, respectively.

We won’t be needing any networking within a private LAN so we’ll remove the internal addresses from the interface.

# ip addr del dev eth0
# ip addr del fe80::9053:fbff:fe96:8627/64 dev eth0

Next we’ll enable a simple stateful firewall to prevent errant access to the box. Copy this to the root directory and use `iptables-restore < iptables` to load it. Make sure you allow the correct SSH port as you’ll be needing it to log into the box.

 # iptables IPv4 simple config (bitcoin node)
 # v0.0.1
 # use at your own risk
 *filter
 :INPUT DROP [0:0]
 :FORWARD DROP [0:0]
 :OUTPUT ACCEPT [0:0]
 # 1. Basics, loopback communication, ICMP packets, established connections
 -A INPUT -i lo -j ACCEPT
 -A INPUT -p icmp --icmp-type any -j ACCEPT
 -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
 # 2. Ensuring connections made are valid (syn checks, fragments, xmas, and null packets)
 -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
 -A INPUT -f -j DROP
 -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
 -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
 # 3. Connections for various services, including SSH and Bitcoin
 -A INPUT -p tcp -m conntrack --ctstate NEW --dport 5555 -j ACCEPT
 -A INPUT -p tcp -m conntrack --ctstate NEW --dport 8333 -j ACCEPT
 -A INPUT -p tcp -m conntrack --ctstate NEW --dport 18333 -j ACCEPT
 # 4. Log anything else; the default DROP policy above handles the rest
 -A INPUT -j LOG --log-level 7 --log-prefix "iptables dropped: "
 COMMIT

Once loaded, make sure the iptables service starts on every boot.

 # yum install iptables-services
 # systemctl start iptables
 # systemctl enable iptables
 # iptables-restore < iptables
 # iptables -L

You should now see the policies enabled. Let’s do the same for IPv6.

 *filter
 :INPUT DROP [0:0]
 :FORWARD DROP [0:0]
 :OUTPUT ACCEPT [0:0]
 -A INPUT -i lo -j ACCEPT
 -A INPUT -p ipv6-icmp -j ACCEPT
 -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
 -A INPUT -d fe80::/64 -p udp -m udp --dport 546 -m state --state NEW -j ACCEPT
 -A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP
 -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
 -A INPUT -p tcp -m conntrack --ctstate NEW -m tcp --dport 5555 -j ACCEPT
 -A INPUT -p tcp -m conntrack --ctstate NEW -m tcp --dport 8333 -j ACCEPT
 -A INPUT -p tcp -m conntrack --ctstate NEW -m tcp --dport 18333 -j ACCEPT
 -A INPUT -j LOG --log-prefix "ip6tables dropped: " --log-level 7
 -A INPUT -j REJECT --reject-with icmp6-adm-prohibited
 -A FORWARD -j REJECT --reject-with icmp6-adm-prohibited
 COMMIT

Good so far. Let’s save these as the default rules.

# iptables-save > /etc/sysconfig/iptables
# ip6tables-save > /etc/sysconfig/ip6tables

Stage two: configuring the Bitcoin node

Now, let’s get started with configuring the Bitcoin node. Begin by creating a local user account you’ll use to manage the service from now on.

 # adduser user
 # passwd user
 # gpasswd -a user wheel
 # visudo  # check that the wheel group is enabled on CentOS

Login as the user and download and configure Bitcoin.

$ curl -O https://bitcoin.org/bin/bitcoin-core-0.15.1/bitcoin-0.15.1-x86_64-linux-gnu.tar.gz
$ curl -O https://bitcoin.org/laanwj-releases.asc
$ curl -O https://bitcoin.org/bin/bitcoin-core-0.15.1/SHA256SUMS.asc
$ gpg --quiet --with-fingerprint laanwj-releases.asc
$ gpg --import laanwj-releases.asc
$ gpg --verify SHA256SUMS.asc

The blockchain will be stored on an attached 250GB storage drive. We’ll format it, mount it, and configure it for hosting the blockchain. Additionally, we’ll add it to fstab so it is attached at boot.

$ sudo mkfs.ext4 -F /dev/disk/by-id/scsi-01
$ sudo mkdir -p /mnt/xbt-blockchain
$ sudo mount /dev/disk/by-id/scsi-01 /mnt/xbt-blockchain
$ sudo chown user:user /mnt/xbt-blockchain
$ echo '/dev/disk/by-id/scsi-01 /mnt/xbt-blockchain ext4 defaults 0 0' | sudo tee -a /etc/fstab

Next, we’ll configure bitcoin.conf so we can start the daemon on the testnet first.

 $ tar xf bitcoin-0.15.1-x86_64-linux-gnu.tar.gz -C ~/
 $ touch /mnt/xbt-blockchain/bitcoin.conf
 $ vim /mnt/xbt-blockchain/bitcoin.conf

 # bitcoin.conf
 # v0.0.1
 # Use at your own risk
 testnet=1
 # if low on memory
 dbcache=100

Let’s test the configuration.

$ ~/bitcoin-0.15.1/bin/bitcoind -datadir=/mnt/xbt-blockchain &
$ ~/bitcoin-0.15.1/bin/bitcoin-cli -datadir=/mnt/xbt-blockchain
> uptime

Everything should be looking good at this point. Now, let’s enable the daemon to connect to mainnet. Change testnet=1 to testnet=0 in the bitcoin.conf file and restart the daemon.

Congratulations — you’ve configured a full node. It will take a while to sync.
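
Sync progress can be watched with bitcoin-cli, pointed at the same -datadir; verificationprogress climbs towards 1.0 as the node catches up:

```shell
# Reports blocks, headers, and verificationprogress for the running node.
~/bitcoin-0.15.1/bin/bitcoin-cli -datadir=/mnt/xbt-blockchain getblockchaininfo
```
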

Using your Gmail contacts in Mutt
27 June 2009

I really enjoy using Mutt as my email client. However, sometimes I have to log
into my Gmail account to view my contacts. Tired of this, I exported my Gmail
contacts and imported them into abook.

Now I can view my Gmail contacts in Mutt.

Here’s how I did it:

(Sorry, no cut and paste instructions.)

* Export your Gmail contacts in the vcard format

* Download the abook source and patch it with the vcard diff (available on the abook website). NOTE: you can also use the vcard2abook.pl script available in the contrib/ dir in the source.

* Import your contacts by invoking abook with the following options: `--convert --informat vcard --infile INPUT.vcf --outformat abook --outfile ~/.abook/addressbook`

* Configure your abook (see `man abookrc`). abook has sane defaults so your config file can be very minimal:

set www_command=elinks
set add_email_prevent_duplicates=true

* Next, configure mutt to interact with abook. I added the following lines to my muttrc:
set query_command="abook --mutt-query '%s'"
macro index,pager A "<pipe-message>abook --add-email<return>" "add the sender to the address book"

That’s all :). Press A while in Mutt to add a contact and Q to query the address book.
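
You can also exercise the query outside of Mutt to confirm abook is wired up (smith is an example search string):

```shell
# Prints matching addressbook entries in the tab-separated format Mutt expects.
abook --mutt-query smith
```
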

Built with WordPress and Vim
© 2008 to 2021 Jiff Slater