Antony Jepson

Wireguard + other VPNs managed by NetworkManager
13 February 2020

I’ve been using a mixture of Wireguard and OpenVPN devices in NetworkManager recently.  As mentioned in my prior post, I mark the Wireguard interface as unmanaged and bring up the interface manually with a shell script.  Most *.ovpn files have redirect-gateway def1, which sends all traffic except the local LAN traffic over the VPN.  This means that separate LANs you’ve created outside of the VPN will be inaccessible.

There are a couple of ways to resolve this, depending on how you have the VPN connection configured.  In my case, I have a *.ovpn file that I import into NetworkManager, and the routes are included in the file.  It’s common for OpenVPN files to contain a “redirect-gateway def1” clause, which causes all network traffic originating from the client to pass through the OpenVPN server.  (Side note: def1 overrides the default route by adding 0.0.0.0/1 and 128.0.0.0/1 rather than replacing 0.0.0.0/0.)

To resolve this, I added a new line into my *.ovpn file to reference the existing Wireguard LAN I created.

route WIREGUARD_SUBNET SUBNET_MASK net_gateway
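For example, with a hypothetical Wireguard LAN of 10.100.0.0/24, the line would read:

route 10.100.0.0 255.255.255.0 net_gateway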

You can also resolve this by adding a new “via” route to the kernel routing table.
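For instance, assuming a hypothetical Wireguard interface wg0 carrying that same 10.100.0.0/24 subnet, a route like this keeps the LAN reachable outside the tunnel:

$ sudo ip route add 10.100.0.0/24 dev wg0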

Note — if you followed my guide exactly in the prior post then this route will already exist and no change will be needed.

plkt.io
7 June 2019

My plkt.io website is live and built with GatsbyJS and AWS Amplify.  I’m really excited about the possibilities here.  Head on over to read about my plans: plkt.io.

Using Amplify to launch your first Gatsby app
29 May 2019

I’m building the future of this website on plkt.io. I’m using React, Gatsby, and Amplify to build a single-page application that’ll be performant, accessible, and modular.
This tutorial will show you how I got started from a web development background of only HTML/CSS/JavaScript.

Resources and definitions

React: JavaScript library for building user interfaces using declarative, reusable components. Created by Facebook.
Gatsby: JavaScript framework for building static web apps. Created by thousands of community members.
Amplify: PaaS that coordinates between backend and frontend across various platforms. Created by Amazon.

Installing software

Get started by installing Node.js followed by the Gatsby and Amplify CLIs. I’m using Gentoo as my local dev environment, so I will use Portage to install Node.js, which provides the npm package manager. If you don’t have git installed, you will need that too.
# emerge --ask --verbose net-libs/nodejs dev-vcs/git
# npm install --global gatsby-cli @aws-amplify/cli
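If the installs succeeded, both CLIs should report a version:

$ gatsby --version
$ amplify --version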

Create sample app

Once installed, we’ll use Gatsby to serve a basic page and then enable remote building and deployment using Amplify.
~ $ gatsby new hello-world
~ $ cd hello-world
~/hello-world $ gatsby develop

Now we have a new hello-world single-page application viewable at localhost:8000. Open the provided address and make sure you can access the web page. If you’re using Visual Studio Code remote development, then forward the port to your local machine.

Deploy to Amazon S3 with Amplify

Next we can configure Amplify as a publishing path. You can use the defaults for most options. Some of the configuration will require you to open the AWS Web Console to create IAM users.

~/hello-world $ amplify configure
~/hello-world $ amplify init

Now we can publish the sample app to an S3 bucket. Run this command and select “DEV (S3 only with HTTP)” when prompted.
~/hello-world $ amplify add hosting
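If the app isn’t deployed automatically after adding hosting, the publish command builds and pushes it in one step:

~/hello-world $ amplify publish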

After about 2 – 3 minutes you should see a link to the S3 bucket with your web app deployed.

And that’s it — you now have used the modern JavaScript, APIs, and Markup (JAM) stack to deploy a web app. I highly recommend you install VS Code and walk yourself through the tutorials available on the Gatsby and Amplify homepages.
Gatsby tutorial: https://www.gatsbyjs.org/tutorial/part-one/
Amplify tutorial: https://aws-amplify.github.io/docs/js/react

Immediate broken pipe when connecting via SSH on Gentoo under VMWare Fusion

Recently, when cloning repositories from GitHub, I was facing immediate broken-pipe errors.

bnt@gentoo ~ $ ssh git@github.com
packet_write_wait: Connection to 140.82.118.4 port 22: Broken pipe

The fix is to change the IP Quality of Service setting to “throughput”.

~/.ssh/config
Host *
  IPQoS=throughput

This is a known issue and being tracked by a bug on the open-vm-tools GitHub repository [link]. I’m running VMWare Fusion 11.0.3 and open-vm-tools 10.3.10 on Gentoo with kernel 4.19.27 and ssh version 7.9_p1-r4.

Quickly setting up a local Tiny Tiny RSS instance with Docker
3 January 2019

I’ve been manufacturing scenarios to test Docker lately and one which came to mind is setting up a local RSS reader instance. It took me a while to parse all the necessary config but the result is very satisfying. You’ll need Docker and Docker Compose to run the below.

The below creates two Docker containers: one for the database and one for Apache + PHP, upon which Tiny Tiny RSS runs.

This also assumes that you already have something running on ports 80 and 443, so we’ll publish the web frontend on a different port.

Create the volume for the Postgres container.

$ mkdir -p ~/docker/rss
$ cd ~/docker/rss
$ mkdir database_vol

Create the Dockerfile for the web frontend.

# Dockerfile-web
FROM php:7.2-apache
WORKDIR /var/www/html
RUN apt-get update && apt-get install -y libpq-dev git libpng-dev libfreetype6-dev libjpeg62-turbo-dev libxml2-dev
RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include --with-jpeg-dir=/usr/include
RUN docker-php-ext-configure xml --with-libxml-dir=/usr/include
RUN docker-php-ext-install pdo_pgsql mbstring json gd xml opcache pgsql intl
RUN docker-php-ext-enable opcache
RUN git clone https://tt-rss.org/git/tt-rss.git ./
RUN chown -R www-data:www-data /var/www/html
RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"

Now create the docker compose file to bundle the containers together.

# docker-compose.yml
version: "2"
services:
  database:
    image: postgres
    volumes:
      - /var/lib/postgresql
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
    networks:
      - net-backend

  web:
    build:
      context: ./
      dockerfile: Dockerfile-web
    networks:
      - net-backend
    volumes:
      - /var/www/html
    ports:
      - "8080:80"
    depends_on:
      - database

networks:
  net-backend:
    driver: bridge

Now we can start the Docker containers and access the installer at http://<docker host>:8080/install

$ docker-compose -f docker-compose.yml up -d web
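You can confirm both containers came up with:

$ docker-compose ps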

In the configuration page, enter the following for the database information:
Database type: PostgreSQL
Username: docker
Password: docker
Database name: docker
Host name: database
Port: 5432

Then copy the provided config into the /var/www/html/config.php file.

You’re all set. Want to get started quickly? Clone my git repository that has the files already created.

$ git clone https://github.com/xocite/docker-ex-ttrss

Setting up a Docker Pi-hole DNS server for wired and wireless clients
21 November 2018

Pi-hole is a DNS resolver that prevents the resolution of common ad-hosting networks. I have a server in my household that I wanted to run as a Pi-hole server for both Ethernet and wireless clients. Here’s how I did it. Keep in mind that when changing the network configuration, it’s wise to work incrementally and test each step, so a mistake doesn’t leave you unable to troubleshoot. In addition, Pi-hole was originally designed to be the only thing installed on a Raspberry Pi, so to make the configuration less invasive on my existing system, I’ll be using the official Docker container. For a much simpler installation, go ahead and run the curl | bash command on their home page.

Network topology

You’ll need to get a good idea of your current network topology before continuing. In my case, I wanted to let this be opt-in for other clients on the network because I didn’t want to cache other people’s DNS requests. This means I wouldn’t alter the DNS settings on the router.
First, I mapped out my current network topology. This is pretty easy to do if you just trace the cables in the house. Your setup will probably match mine:

  • WAN from your internet provider connects to a DOCSIS modem.
    • This modem provides WiFi (normally 802.11ac) to your IoT devices, mobile phones, and other connected devices.
    • It may also be connected to a wireless repeater to resolve deadspots in the house.
    • It also provides wired Ethernet.
  • This wired Ethernet may be connected to a switch to reduce cables across the home.
  • It may optionally have telephone ports for VoIP.

A simpler home set up might only have wireless clients.

My configuration mirrors the above and my server is connected to the switch mentioned. The next step is to look at the current configuration according to your devices. You’ll need to gather the interface settings for your router and your server.

In my case,

  • Router
    • Connected to: WAN from internet provider
    • IP address: 192.168.0.1
    • DHCP settings: 192.168.0.2 to 192.168.0.254, subnet mask: 255.255.255.0
    • Built-in DNS server available on: 194.168.4.100 and 194.168.8.100
  • Server
    • Connected to: switch, which is connected to modem
    • IP address (Ethernet): 192.168.0.2
    • IP address (Wireless): not configured
    • DHCP settings: same as router

With this in mind, we want to configure the server to act as a wireless hotspot for the Ethernet connection while also providing DNS for both wireless and wired clients. Fortunately, this is pretty simple to do, once you know which apps and files are needed.
This guide uses Debian 9 and NetworkManager.
First, we’ll configure the wireless access point and make sure clients can connect. Look at your current configuration:

$ nmcli
eno1: connected to Wired connection 1
"Intel Ethernet Connection I217-LM"
ethernet (e1000e), AA:BB:CC:DD:EE:FF, hw, mtu 1500
ip4 default
inet4 192.168.0.2/24
route4 169.254.0.0/16

wlp3s0: disconnected
"Intel Wireless"
wifi (iwlwifi), AA:BB:CC:DD:EE:FF, hw

lo: unmanaged
loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
servers: 194.168.4.100 194.168.8.100
interface: eno1

Next, create a wireless hotspot, confirm you can connect, and then delete it.
$ sudo nmcli --show-secrets dev wifi hotspot
Hotspot password: xMNUYLGH
Device 'wlp3s0' successfully activated with '95f843c0-18b4-4133-a27f-9d3eb12be8e7'.
[.. connect to the device ..]
$ sudo nmcli connection down uuid 95f843c0-18b4-4133-a27f-9d3eb12be8e7
$ sudo nmcli connection delete uuid 95f843c0-18b4-4133-a27f-9d3eb12be8e7

Now that we’re certain we can create a hotspot we can configure it to our preferences.
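For example, nmcli accepts the interface, SSID, and password up front (the values here are illustrative):

$ sudo nmcli dev wifi hotspot ifname wlp3s0 ssid Hotspot-luv password "use-a-long-passphrase"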

Pi-hole with Docker

Installing Docker is relatively simple. We’ll enable HTTPS transport for apt so it can reach their repository and then download the Community Edition of Docker.

$ sudo apt install gnupg2 curl ca-certificates apt-transport-https software-properties-common

Install their GPG key. You can verify the fingerprint by comparing the output from the below command with their official documentation [link]. Last time I checked, the fingerprint’s last 8 characters were: 0x0EBFCD88.

$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
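You can then ask apt-key to show the key by its short ID, as Docker’s install guide suggests:

$ sudo apt-key fingerprint 0EBFCD88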

Next, enable the stable repository for this release. In my case I’m using Debian Stretch.

$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"

Finally, download Docker.

$ sudo apt update
$ sudo apt install docker-ce

Confirm that it works.

$ sudo docker run hello-world

If this works, add yourself to the Docker group and log out and then log in.

$ sudo usermod -aG docker `whoami`
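After logging back in, confirm the daemon is reachable without sudo:

$ docker run hello-world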

Now we can launch the Pi-hole Docker container and configure it to act as a DNS server. We’ll use the following configuration settings.

  • Published ports: the container needs port 53 for DNS, 80 for the web interface, and 443 so ads served over SSL can be blackholed; we map each of these to the host.
  • DNS from Cloudflare: 1.1.1.1
  • Environmental variables
  • ServerIP=192.168.0.2; the IP of the server on the local network
$ docker pull pihole/pihole
$ mkdir -p ~/local/docker/pihole/pihole/etc/{pihole,dnsmasq.d}
$ docker run \
--name pihole \
-p 80:80 \
-p 53:53/tcp \
-p 53:53/udp \
-p 443:443/tcp \
-p 443:443/udp \
-v ~/local/docker/pihole/pihole/etc/pihole:/etc/pihole \
-v ~/local/docker/pihole/pihole/etc/dnsmasq.d:/etc/dnsmasq.d \
--dns=127.0.0.1 \
--dns=1.1.1.1 \
-e ServerIP=192.168.0.2 \
-e IPv6=False \
-e DNS1=194.168.4.100 \
-e DNS2=194.168.8.100 \
-e WEBPASSWORD=password \
pihole/pihole:latest

If you get some sort of error such as “Couldn’t bind to :80 because already in use”, correct the error, delete the container, and try again.

$ sudo systemctl stop apache2
$ sudo systemctl disable apache2
$ docker container list -a
$ docker container rm <container>

Now finally, connect to your container by navigating to http://<server_ip> on a different computer.

You can also check that your container has network access by:

$ docker container exec pihole ping www.google.com
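And, from another machine on the LAN, that it answers DNS queries (assuming dig is installed):

$ dig @192.168.0.2 example.com +short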

Now that the Docker container is up and running, go ahead and change the settings on your wired interface to use the IP address of your server as the DNS address.

For wireless clients, we’ll go ahead and configure the hotspot again, this time setting the DNS to use our server. Notice that installing Docker has changed our networking configuration.

$ sudo nmcli
docker0: connected to docker0
bridge, 02:42:FB:FA:35:DE, sw, mtu 1500
inet4 172.17.0.1/16
inet6 fe80::42:fbff:fefa:35de/64

veth9259d68: unmanaged
ethernet (veth), 72:FD:6C:AD:CE:D9, sw, mtu 1500

DNS configuration:
servers: 194.168.4.100 194.168.8.100
interface: eno1

Now we have two more interfaces: docker0 and veth9259d68. Unfortunately, on my end when I create the hotspot, clients aren’t issued an IP address. Let’s debug NetworkManager and see what routes are being created.

Create the hotspot with nmcli

$ sudo nmcli --show-secrets dev wifi hotspot

Now, we’ll use the lower level networking tools to see what’s happening.

$ ip r
default via 192.168.0.1 dev eno1 proto static metric 100
10.42.0.0/24 dev wlp3s0 proto kernel scope link src 10.42.0.1 metric 600
169.254.0.0/16 dev eno1 scope link metric 1000
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.0.0/24 dev eno1 proto kernel scope link src 192.168.0.2 metric 100

Next, let’s look at the configuration file NetworkManager creates for the hotspot.

$ cat /etc/NetworkManager/system-connections/Hotspot
[connection]
id=Hotspot
uuid=2473d7a3-4e0f-40d9-b239-72e52c6fad63
type=wifi
autoconnect=false
permissions=

[wifi]
hidden=true
mac-address=AC:FD:CE:87:84:D0
mac-address-blacklist=
mode=ap
ssid=Hotspot-luv

[wifi-security]
group=ccmp;
key-mgmt=wpa-psk
pairwise=ccmp;
proto=rsn;
psk=ZoKpIEU4

[ipv4]
dns-search=
method=shared

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=ignore

Here, the culprit is the [ipv4] method=shared line. In the nm-setting-ip4-config.c file, we can see the following description for this setting.

* NetworkManager supports 5 values for the #NMSettingIPConfig:method property
* for IPv4. If “auto” is specified then the appropriate automatic method
* (DHCP, PPP, etc) is used for the interface and most other properties can be
* left unset. If “link-local” is specified, then a link-local address in the
* 169.254/16 range will be assigned to the interface. If “manual” is
* specified, static IP addressing is used and at least one IP address must be
* given in the “addresses” property. If “shared” is specified (indicating that
* this connection will provide network access to other computers) then the
* interface is assigned an address in the 10.42.x.1/24 range and a DHCP and
* forwarding DNS server are started, and the interface is NAT-ed to the current
* default network connection. “disabled” means IPv4 will not be used on this
* connection.

So from this description, it seems like the problem is that the DHCP and forwarding DNS servers aren’t starting correctly. Let’s look at the NetworkManager logs and see if anything is awry. We’ll also stop the Pi-hole container to avoid any other issues.

$ docker stop pihole
$ sudo journalctl -u NetworkManager --since "1 hour ago"

Walking through the logs is quite enlightening. (1) We see that NetworkManager creates iptables entries for the interface, including rules to forward DNS and DHCP ports to the local dnsmasq instance. (2) We see that dnsmasq-manager failed to create a listening socket because the address is already in use by the Docker container.

Now – before rushing ahead and trying to fix this, it’s important to restate what we’re trying to accomplish here. Approaching the problem with the mindset of “how do I fix this” is wrong and will lead you down a DuckDuckGo / StackOverflow rabbit hole. In this scenario, we’re trying to issue an IP address to clients on the wlp3s0 interface. In addition, we want these clients to use the server as the DNS server so their DNS requests go through the Pi-hole Docker container.

Modify the default settings for shared IP interfaces.

$ sudo vim /etc/NetworkManager/dnsmasq-shared.d/default.conf
# Disable local DNS server
port=0

# Use Pi-hole for DNS requests
dhcp-option=option:dns-server,192.168.0.2,194.168.4.100

Now try restarting the docker container and the wireless hotspot. Check the log for errors.

$ docker start pihole
$ sudo nmcli --show-secrets dev wifi hotspot
$ sudo journalctl --since "1 minute ago" -u NetworkManager

No errors should be seen. Connect via your wireless device and confirm that new blocked entries appear on the Pi-hole dashboard by going to your server’s IP address.

So in summary, we set up Pi-hole on Docker in Debian Stretch to block common ad-hosting networks for both wired and wireless clients on our home network. For me, this was a good test scenario to become more familiar with Docker.

Overall, I think that host-based ad-blocking won’t be effective much longer as more and more content gets bundled with ads behind content delivery networks. The best practice regarding ads, in my opinion, is to only visit sites with acceptable ad practices. This means no pop-overs/pop-unders, no stealing focus, and no incessant tracking across the web. I suspect that ad-blocking has moved, and will continue to move, client-side. A simple way to avoid the most nefarious of ads is to use the Mozilla multi-container extension, which lets you separate your online life into separate entities.

Sources

https://wireless.wiki.kernel.org/en/users/Documentation/rfkill

https://unix.stackexchange.com/questions/234552/create-wireless-access-point-and-share-internet-connection-with-nmcli

https://docs.docker.com/install/linux/docker-ce/debian/#set-up-the-repository

https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/

https://gitlab.freedesktop.org/NetworkManager/NetworkManager/tags/1.6.2

https://github.com/jwilder/nginx-proxy

Running a Bitcoin node
1 January 2018

Setting up a Bitcoin node can be a bit daunting, especially considering the amount of disk space required and that the node needs to be always connected. However, once configured, maintenance can be relatively hands-off. For more information about the minimum requirements please see here.

This tutorial will be split into two stages. One: configuring the server itself to be relatively secure and resilient against basic attacks and two: configuring the Bitcoin daemon on the server.

Stage one: securing the server

Let’s get the system up to date and then configure the stateful firewall.

# yum upgrade
# yum install vim iptables-services

And we’ll move SSH to a different port so we can reduce the number of login attempts considerably. As this is CentOS, SELinux will need to be informed of the change to allow the SSH daemon to bind to the new port.

# vim /etc/ssh/sshd_config
Set Port to 1234 or something non-standard
# semanage port -a -t ssh_port_t -p tcp 1234 
# systemctl reload sshd
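You can confirm SELinux registered the extra port before disconnecting:

# semanage port -l | grep ssh_port_t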

And log back in using the new port to take a look at the network interfaces.

[user@local] $ ssh root@bitcoin -p 1234
Now let's understand the current network topology.

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 92:53:fb:96:86:27 brd ff:ff:ff:ff:ff:ff
    inet 128.199.93.101/18 brd 128.199.127.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.15.0.5/16 brd 10.15.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2400:6180:0:d0::1f6:2001/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::9053:fbff:fe96:8627/64 scope link
       valid_lft forever preferred_lft forever

We can see there are two network interfaces – lo, the loopback interface, and eth0, the Internet-facing interface. The loopback (lo) is assigned 127.0.0.1/8 (IPv4) and ::1/128 (IPv6). The Ethernet interface (eth0) has four addresses: the first two are the public and private IPv4 addresses, and the last two are the global and link-local IPv6 addresses, respectively.

We won’t be needing any networking within a private LAN, so we’ll remove the internal addresses from the interface.

# ip addr del 10.15.0.5/16 dev eth0
# ip addr del fe80::9053:fbff:fe96:8627/64 dev eth0

Next we’ll enable a simple stateful firewall to prevent errant access to the box. Copy the following to a file named iptables in the root directory and load it with `iptables-restore < iptables`. Make sure you set the correct SSH port, as you’ll need it to log into the box.

 # iptables IPv4 simple config (bitcoin node)
 # v0.0.1
 # use at your own risk
 *filter
 :INPUT DROP [0:0]
 :FORWARD DROP [0:0]
 :OUTPUT ACCEPT [0:0]
 # 1. Basics, loopback communication, ICMP packets, established connections
 -A INPUT -i lo -j ACCEPT
 -A INPUT -p icmp --icmp-type any -j ACCEPT
 -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
 # 2. Ensuring connections made are valid (syn checks, fragments, xmas, and null packets)
 -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
 -A INPUT -f -j DROP
 -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
 -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
 # 3. Services: SSH (5555 here; match the port you set earlier) and Bitcoin
 -A INPUT -p tcp -m conntrack --ctstate NEW --dport 5555 -j ACCEPT
 -A INPUT -p tcp -m conntrack --ctstate NEW --dport 8333 -j ACCEPT
 -A INPUT -p tcp -m conntrack --ctstate NEW --dport 18333 -j ACCEPT
 # 4. Log anything that falls through; the default policies drop the rest
 -A INPUT -j LOG --log-level 7 --log-prefix "iptables dropped: "
 COMMIT

Once loaded, make sure the iptables service starts on every boot.

 # yum install iptables-services
 # systemctl start iptables
 # systemctl enable iptables
 # iptables-restore < iptables
 # iptables -L

You should now see the policies enabled. Let’s do the same for IPv6.

 *filter
 :INPUT DROP [0:0]
 :FORWARD DROP [0:0]
 :OUTPUT ACCEPT [0:0]
 -A INPUT -i lo -j ACCEPT
 -A INPUT -p ipv6-icmp -j ACCEPT
 -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
 -A INPUT -d fe80::/64 -p udp -m udp --dport 546 -m state --state NEW -j ACCEPT
 -A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP
 -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP
 -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
 -A INPUT -p tcp -m conntrack --ctstate NEW -m tcp --dport 5555 -j ACCEPT
 -A INPUT -p tcp -m conntrack --ctstate NEW -m tcp --dport 8333 -j ACCEPT
 -A INPUT -p tcp -m conntrack --ctstate NEW -m tcp --dport 18333 -j ACCEPT
 -A INPUT -j LOG --log-prefix "ip6tables dropped: " --log-level 7
 -A INPUT -j REJECT --reject-with icmp6-adm-prohibited
 -A FORWARD -j REJECT --reject-with icmp6-adm-prohibited
 COMMIT

Good so far. Let’s make these the default rules.

# iptables-save > /etc/sysconfig/iptables
# ip6tables-save > /etc/sysconfig/ip6tables

Stage two: configuring the Bitcoin node

Now, let’s get started with configuring the Bitcoin node. Begin by creating a local user account you’ll use to manage the service from now on.

 # adduser user
 # passwd user
 # gpasswd -a user wheel
 # visudo  # check that the wheel group is enabled on CentOS

Log in as the user, then download and configure Bitcoin.

$ curl -O https://bitcoin.org/bin/bitcoin-core-0.15.1/bitcoin-0.15.1-x86_64-linux-gnu.tar.gz
$ curl -O https://bitcoin.org/laanwj-releases.asc
$ curl -O https://bitcoin.org/bin/bitcoin-core-0.15.1/SHA256SUMS.asc
$ gpg --quiet --with-fingerprint laanwj-releases.asc
$ gpg --import laanwj-releases.asc
$ gpg --verify SHA256SUMS.asc
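The signature check only covers SHA256SUMS.asc itself; to verify the tarball you downloaded, compare its hash against the signed list:

$ grep bitcoin-0.15.1-x86_64-linux-gnu.tar.gz SHA256SUMS.asc | sha256sum -c -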

The blockchain will be stored on an attached 250GB storage drive. We’ll format it, mount it, and configure it for hosting the blockchain. Additionally, we’ll add it to fstab so it is attached at boot.

$ sudo mkfs.ext4 -F /dev/disk/by-id/scsi-01
$ sudo mkdir -p /mnt/xbt-blockchain
$ sudo mount /dev/disk/by-id/scsi-01 /mnt/xbt-blockchain
$ sudo chown user:user /mnt/xbt-blockchain
$ echo '/dev/disk/by-id/scsi-01 /mnt/xbt-blockchain ext4 defaults 0 0' | sudo tee -a /etc/fstab
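Confirm the drive is mounted where we expect:

$ findmnt /mnt/xbt-blockchain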

Next, we’ll configure bitcoin.conf to start the daemon on the testnet first.

 $ tar xf bitcoin-0.15.1-x86_64-linux-gnu.tar.gz -C ~/
 $ touch /mnt/xbt-blockchain/bitcoin.conf
 $ vim /mnt/xbt-blockchain/bitcoin.conf

 # bitcoin.conf
 # v0.0.1
 # Use at your own risk
 listen=1
 server=1
 rpcport=8332
 rpcallowip=127.0.0.1
 listenonion=0
 maxconnections=16
 datadir=/mnt/xbt-blockchain
 testnet=1
 disablewallet=1
 # if low on memory
 dbcache=20
 maxmempool=300

Let’s test the configuration.

$ ~/bitcoin-0.15.1/bin/bitcoind -datadir=/mnt/xbt-blockchain &
$ ~/bitcoin-0.15.1/bin/bitcoin-cli -datadir=/mnt/xbt-blockchain uptime
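To keep an eye on the initial sync, getblockchaininfo reports the block height and verification progress:

$ ~/bitcoin-0.15.1/bin/bitcoin-cli -datadir=/mnt/xbt-blockchain getblockchaininfo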

Everything should be looking good at this point. Now, let’s enable the daemon to connect to mainnet. Change testnet=1 to testnet=0 in the bitcoin.conf file and restart the daemon.

Congratulations — you’ve configured a full node. It will take a while to sync.

Using your Gmail contacts in Mutt
27 June 2009

I really enjoy using Mutt as my email client. However, sometimes I have to log
into my Gmail account to view my contacts. Tired of this, I exported my Gmail
contacts and imported them into abook.

Now I can view my Gmail contacts in Mutt.

Here’s how I did it:

(Sorry, no cut and paste instructions.)

* Export your Gmail contacts in the vcard format

* Download the abook source and patch it with the vcard diff (available on the abook website). NOTE: you can also use the vcard2abook.pl script available in the contrib/ dir in the source.

* Import your contacts by invoking abook with the following options: '--convert --informat vcard --infile INPUT.vcf --outformat abook --outfile ~/.abook/addressbook'

* Configure your abook (see `man abookrc`). abook has sane defaults so your config file can be very minimal:

set www_command=elinks
set add_email_prevent_duplicates=true

* Next, configure mutt to interact with abook. I added the following lines to my muttrc
set query_command="abook --mutt-query '%s'"
macro index,pager A "abook --add-email-query" "add the sender to the address book"

That’s all :). Press A while in Mutt to add a contact and Q to query the address book.

Elinks advanced URI management
1 June 2009

In the process of whipping elinks into a tame beast, I discovered several options that I wasn’t aware of.

URI rewriting: allows me to execute Google searches, dictionary lookups, and imdb queries within the ‘Goto URL’ dialog box. It should be enabled by default but if not, go to the Option manager -> Protocols -> URI rewriting and select either Dumb or Smart Prefixes. Once enabled, you can type ‘g search_term’ in the search box to quickly search Google for search_term. Look at the other prefixes to determine your options.

I wasn’t satisfied with having to use a prefix every time I wanted to search, so I modified the default template (Option manager -> Protocols -> URI rewriting -> Default template) to ‘http://www.google.com/search?q=%s’. This means that any entry in the ‘Goto URL’ dialog box that doesn’t look like a URL, a file, or an existing prefix will result in a Google search with that string!

Sessions: It turns out that elinks is a beast with many abilities; I wasn’t aware that it supported session saving/restoring (Option manager -> User interface -> Sessions).

URI passing: Often, when on the console, I’ll want to share a link with someone but I’ll be restricted by the atrocious lack of copy/paste. To solve the issue while using elinks, I enabled the link-external-command and tab-external-command options to save the highlighted link and current link, respectively, to a file.

To enable it, first go into the keybinding manager, toggle the display, and choose a keybinding for ‘main -> link-external-command’. This will be the keybinding you will press when you want to save the highlighted URL to a file. Likewise, choose a keybinding for ‘main -> tab-external-command.’ This is the keybinding you will press when you want to save the URL of the current tab to a file. After setting those shortcuts, go to ‘Option manager -> Document -> URI passing’ and create an entry with the contents ‘echo -n %c > ~/url’ (modify as you see fit). From now on, when you use (link|tab)-external-command the URL will be saved in ~/url.

For my inspiration, have a look at .

Setting up IMAP on Mutt
18 October 2008

After receiving my HP 2710p, I decided to use IMAP instead of POP for
managing my email.  As you know, Mutt is my primary MUA (mail user agent).

The benefits of this setup are obvious: synchronised email (both on the computer
and on the webserver), space savings, and fewer programs to configure.

In this example, I’ll be setting up an IMAP connection to Gmail from Mutt.

First download the certificate of the organisation that is providing the IMAP
service.  Gmail uses “ThawtePremiumServerCA.crt”.  Make sure you compare your
checksum with the one provided by the Thawte Certificate Authority.  Place this
certificate in “~/.certs”.

Next, configure a profile for use with the IMAP connection.  I have an
engineering account with my university as well as a Gmail account, so I need two
profiles.  The Mutt profile is simple.

---~/.mutt/muttrc_gmail---
# email@gmail.com

source ~/.muttrc
set imap_user="email@gmail.com"
set folder="imaps://imap.gmail.com:993"
set spoolfile = "+INBOX"
set postponed=+"[Gmail]/Drafts"
set smtp_url="smtp://email@smtp.gmail.com:587/"
------

Note: My "~/.muttrc" contains global configuration values.

Next, simply add aliases for each profile that you create.  For example:

---~/.bashrc---
[ ... ]
alias mutt1="mutt -F ~/.mutt/muttrc_gmail"
alias mutt2="mutt -F ~/.mutt/muttrc_college"
------

To use your configuration, issue “mutt1” or “mutt2” at the prompt.

Built with Wordpress and Vim
© 2008 to 2020 Antony Jepson