Jiff Slater

Migrating to a static site
12 July 2021

I’ve long authored this blog in WordPress because I’ve found the interface homely and easy to maintain. However, with the advent of cheap single-board computers like the Raspberry Pi, I wanted to move to a static site served from my own dynamic IP address via CDN, eschewing the need for expensive hosting solutions. The end goal was to move to a self-rolled knowledge-base website where I could store posts in a filesystem hierarchy rather than in a database.

Here’s how I migrated.

Proof of concept post

Generating a test blog post

I didn’t want to write pure HTML, as I felt it led to an unstructured format that couldn’t be parsed later or used for heavy cross-linking, so I explored some options for authoring the files.


First I considered reStructuredText. This is a markup format that was originally created to write Python docs and later, due to its flexibility, became popular for authoring other types of documents. Here’s what a basic *.rst file looks like.

Welcome to my blog
==================

Here's a *few* pointers to get you started.

- My main blog: `plkt.io <https://plkt.io>`_.
- My preferred search engine: `DuckDuckGo <https://ddg.co>`_.

It can be converted into HTML with pandoc.

$ pandoc --tab-stop 2 -f rst -t html sample.rst

<h1 id="welcome-to-my-blog">Welcome to my blog</h1>
<p>Here's a <em>few</em> pointers to get you started.</p>
<ul>
<li>My main blog: <a href="https://plkt.io">plkt.io</a>.</li>
<li>My preferred search engine: <a href="https://ddg.co">DuckDuckGo</a>.</li>
</ul>

A very clear and straightforward way to write blog posts in the terminal, but I felt I wasn’t gaining much over writing directly in the WordPress text editor because there wasn’t any semantic markup.


Next, I looked at DocBook, which is heavily used as an authoring format for books and technical documents. As that is the primary type of content on this blog, it seemed worth exploring. At first, I saw many old examples online of how to author a simple document and was immediately horrified by the references to the Document Type Definition (DTD) that harken from the old HTML+XML days.

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
  "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
<article lang="en">
<title>Sample article</title>
<para>This is a very short article.</para>
</article>

Fortunately, according to the latest DocBook 5.0 standard, this arcane incantation is no longer required, and what we have today is something like the following; notice the xmlns attribute on the article element. As a side note, I found out that the Linux kernel documentation has started migrating away from DocBook to Sphinx + reStructuredText. Read more about this here.

<?xml version="1.0" encoding="utf-8"?>
<article xmlns="http://docbook.org/ns/docbook" version="5.0" xml:lang="en">
<title>Sample article</title>
<para>This is a very short article.</para>
</article>

So, after spending about 30 minutes with the documentation, I managed to rewrite the *.rst example above into the following.

<?xml version="1.0" encoding="utf-8"?>
<article xmlns="http://docbook.org/ns/docbook"
         xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en">
<title>Welcome to my blog</title>
<para>Here's a <emphasis>few</emphasis> pointers to get you started.</para>
<itemizedlist mark='dash'>
<listitem><para>My main blog: <link xlink:href="https://plkt.io">plkt.io</link>.</para></listitem>
<listitem><para>My preferred search engine: <link xlink:href="https://ddg.co">DuckDuckGo</link>.</para></listitem>
</itemizedlist>
</article>

Quite a bit more verbose and a bit of a pain to type in Vim, although omni-completion (C-x C-o) helped a lot in closing tags. I can understand a bit better why reStructuredText, although not as structured as this, reduces the initial hump to get started and so will likely result in more up-to-date documentation.
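While experimenting, I found it handy to sanity-check the XML as I typed. Here's a quick well-formedness check (a sketch leaning on Python's standard library so nothing extra is needed; it checks only XML syntax, not the DocBook schema):

```shell
# Write a throwaway DocBook file and check that it parses as XML.
cat > welcome.xml <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<article xmlns="http://docbook.org/ns/docbook" version="5.0" xml:lang="en">
<title>Welcome to my blog</title>
<para>A short well-formedness test.</para>
</article>
EOF
python3 -c 'import xml.etree.ElementTree as ET; ET.parse("welcome.xml"); print("well-formed")'
```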

Selecting a winner

From the explorations above, I settled on reStructuredText as the markup format. However, before fully embracing it, I needed to land on the structure of the site. I had decided that the content itself would serve as the structure rather than living within the structure. Put another way, one would read my content to determine the taxonomy rather than have the taxonomy define the content. This meant that hyperlinking would be very important (and very manual), so I would need to make a stronger effort to keep related documents connected to each other.


I envisioned my static site eventually converging on something resembling a MediaWiki site.

The structure would be like this:

This seemed like a good first pass for the organisation and something that I could change in the future without too much effort.

Creating the base site

I created the base directory structure and started defining the metadata for the site. As I was using pandoc, this would be split across three files: the template file for the HTML generator, the metadata file to contain variables across the entire site, and the YAML metadata at the top of each *.rst file that contained file specific references.

As Pandoc doesn’t support YAML headers in *.rst files (yet; see here), I’m storing all my YAML metadata in a sidecar *.yaml file for each post. While not ideal, it’s simple and maintainable in the makefile.
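For illustration, a sidecar file for a post might look like this (the field names are only what I'd expect to pass through to the template; treat it as a sketch rather than a fixed schema):

```yaml
# migrating-to-a-static-site.yaml -- sidecar metadata for the matching *.rst
title: Migrating to a static site
date: 12 July 2021
keywords: [wordpress, pandoc, rst]
```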


It took me a couple of hours to walk through the documentation and build this makefile. However, I feel that as this project becomes more complicated, this will pay off. Here’s what I came up with as a basis.

# Makefile for plkt.io

PANDOC := pandoc
FIND := find
PANDOCOPTS := --tab-stop 2 -s --template ./template.txt -f rst -t html -M lang=en -M css=./style.css -B header.html -A footer.html

# Note that $(shell <>) replaces newlines with spaces.

DIR := ./
src := $(shell $(FIND) $(DIR) -name "*.rst") # TODO: Do this using make rules.
targets_html := $(src:.rst=.html)

%.html: %.rst
	@echo "Compiling" $<
	@$(PANDOC) $(PANDOCOPTS) --metadata-file=$(basename $<).yaml $< > $@

all: build

build: $(targets_html)

# Not yet implemented. Supposed to build the site and tar it up for distribution.
dist: clean build
	mkdir -p plkt-$(VERSION)
	for html in $(targets_html); do \
		mv $$html plkt-$(VERSION)/; \
	done
	tar -cf plkt-$(VERSION).tar plkt-$(VERSION)
	gzip plkt-$(VERSION).tar
	rm -rf plkt-$(VERSION)

.PHONY: clean
clean:
	@for html in $(targets_html); do \
		echo "Cleaning" $$html; \
		rm -f $$html; \
	done

Headers and footers

Each file would need a header and footer to maintain some visual consistency across the site. To do this, I referenced the header with the -B flag and the footer with the -A flag.
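As a sketch, the two fragments can be as small as this (the real files carry the site navigation and copyright line; the contents here are placeholders):

```html
<!-- header.html -->
<header><a href="https://plkt.io">plkt.io</a></header>

<!-- footer.html -->
<footer>© 2008 to 2021 Jiff Slater</footer>
```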

Setting up Apache

For the proof of concept, I opted not to use containerisation and instead just moved the *.html and *.css files into the /var/www/html directory. Viewing the website at http://localhost:80 worked admirably.

Migrating the post history

Exporting the posts from WordPress was a bit tricky. I first tried to use Pandoc’s automatic conversion functionality but then realised I’d have to round-trip each post: download the HTML, convert it to reStructuredText, and then convert it back to HTML.

pandoc -f html -t rst https://plkt.io/2019/11/30/returning-to-wordpress/

I then landed upon a better method. I modified the template for my website by removing the header, footer, and post listing, visited each page individually, and saved them using Firefox. This took about 15 minutes (probably less time than automating it). Then I shoved the posts into an “archived posts” category that I would move bit by bit into the reStructuredText format.
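For that gradual move into reStructuredText, the saved pages can be batch-converted with the same pandoc invocation as before, just pointed the other way (a sketch; it assumes the Firefox-saved pages live in ./saved/):

```shell
# Convert every saved WordPress page back into reStructuredText.
mkdir -p saved
for f in saved/*.html; do
  [ -e "$f" ] || continue            # nothing saved yet; skip the literal glob
  pandoc -f html -t rst "$f" -o "${f%.html}.rst"
done
```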


This exercise taught me a lot about data storage formats. What is the right way to store my post history? Should it be a format that separates the presentation from the data, or should each post stand the test of time as its own standalone file? I’m starting to lean towards the latter. I think it’s possible to have the best of both, if scoped properly, by having an “archive” section of your site. So go ahead and export that page and leave it up for eternity. Normal visitors can view your normal site with the latest formatting, but patrons of antiquity can learn more about how the cake is made.

This will be the last post written using WordPress. The next post on this site will be generated using pandoc and Vim :).

The pervasiveness of microabrasions
16 November 2020

**DRAFT – Last Updated 2020-11-16**

The human body has multiple mechanisms to stay adaptable in the ever-changing present.  At the cellular level, there’s an evolving protection against the constant marching battery of micro-organisms.  This protection keeps you alive.  At the macro level, there’s your own personal drive to survive.  These two halves come together to form a cohesive whole – you.

However, these adaptations only work to keep you alive, in an acceptable homeostasis for which the bar is rather low.  Fed regularly?  Check.  Have a shelter?  Check.  Got a job?  Check.  Feel fulfilled?  …  There’s a class of constant onslaughts that we’re not well-adapted for – or you could say we’ve adapted to them inadequately – maladaptations.

These minor antagonists can’t usually be used to propel you into a better place; rather, they’re constantly sanding away your drive to become fulfilled.  Daily unpleasantries you dismiss or accept; an over-extended caffeine habit; a day-ending nightcap; a small lack of regular activity; less sleep than usual; or a chair that isn’t a good fit.  The truth is, most of our day-to-day activity is structured around the same interactions with the world around us in order to nurse a homeostatic environment.

Due to our maladaptations, we either deny (recognise and reject) or accept (merge with our definition of the world) the things that slowly erode the internal concept of fulfillment – day after day, the concept of a satisfactory life reduces to the idea of a simple, monotonous life of manual labour where the rewards are self-evident (no introspection or reflection required) and real.

While the idea of disconnecting from it all may sound pleasant, it would only be a travesty of the modern life that we’re capable of.  We’re Thinkers, constantly creating abstractions out of language out of language out of language.  Modern computer programming is all about collaboratively editing a shared mental model or map of how a computer works at a specific level of abstraction.  There’s no emotion here even though it’s methodical and expressive.  But at the end of the day, every computer engineer knows that a compiler (also an abstraction) will take the inputs and optimise them away for better execution in the more accurate computer model.

For modern language – the maxims of “if it bleeds, it reads” and “emotion deserves promotion” seem to be interwoven into all common watering holes.  This modern way of writing seems to cut through carefully built abstractions which help to promote a shared understanding of the world and antagonise people into a reactive state.  There’s no value here: no deeper meaning, no verbalisation that can define (label) the angst.

Notes from setting up a disconnected Debian Testing server
22 August 2020

I recently set up a new at-home server that isn’t connected to the Internet. It’s only available on the local network and doesn’t have any connection to the outside world. It’s part of my longer term initiative to have a disconnected household.

Here are some notes I took while setting this up.


I have servers scattered across various providers and felt a general anxiety about security. While I do feel confident using online storage solutions like AWS S3 Glacier class + GPG encryption, I feel that instance-based compute services would likely cause me a headache in the future. Storing everything locally would be faster and reduce costs as well.


After a fresh installation of Debian Testing, I briefly plugged in the Ethernet cable, set my sources.list to point to ftp.uk.debian.org, and downloaded the latest packages and security updates including vim, sudo, and screen.

For future updates, I download the weekly testing DVD image from Debian’s servers and transfer them via SSH. I’m still working on optimising this.
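The rough idea for applying those updates offline is to loop-mount the transferred image and point apt at it; something like the following (a sketch, not my final setup):

```
## /etc/apt/sources.list.d/dvd.list, after e.g.
##   mount -o loop,ro /srv/debian-testing-DVD-1.iso /media/dvd
deb [trusted=yes] file:/media/dvd testing main contrib
```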

Each weekly testing DVD is archived in storage.

Screen suspend

As this is a laptop, the screen didn’t suspend automatically when it wasn’t in use. I appended consoleblank=60 to my GRUB_CMDLINE_LINUX_DEFAULT entry.
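For reference, the resulting line in /etc/default/grub looks like this (consoleblank is measured in seconds, so this blanks the console after a minute; run update-grub afterwards, and keep whatever other options were already on the line):

```
## /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet consoleblank=60"
```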


I successfully migrated my containers from Docker to QEMU and systemd-container based deployments. I’ll detail more about this in the future. For each deployment, I check that it deploys without problems on another physical machine.

  • GitLab
  • OwnCloud
  • MediaWiki

Alarms as timers
13 May 2020


It was an unusually overcast morning and the birds seemed to be chirping reluctantly, upset at the sun apparently missing its promised sunrise time.  While I was running, my alarm went off.  I realised that I was way ahead of my usual self – I had prepped breakfast, almost finished my run, and solved a couple of long-standing issues at work; all while watching the clouds mope across the sky.

This made me realise – having the alarm wake you up at a certain time is all wrong.  Rather, it should be viewed as a timer by which you are ready to attack your day.  Let’s be honest with ourselves: who jolts out of bed with their hair already combed, coffee half drunk, and laptop furiously churning through the results streaming in from that advanced SQL query you wrote 10 minutes ago?

Next time, I’ll try to have finished my run and already showered and dressed before the alarm goes off.


My principles of product management
4 March 2020

[Draft post]

I’ve been in product management for a while and over the course of the past 8 years I’ve learned a lot about the difficulties that aren’t apparent in the popular posts promoting product management as a career choice.

– that the ability to connect with other humans is under-rated…
– that the development team’s well-being is just as important as your own well-being…
– and, that most customers give you bad and non-actionable feedback…

So from these realisations and my experience, here are my 5 principles of product management.

1) Focus on the customer problem, not the solution.
Too often I see my peers getting enthralled by some solution that seems to address every customer need. My first question to these budding PMs is: have you defined the customer problem? The second is: how did you define it? More often than not, the answers are “yes” and “gut feel”.

While I do champion gut feel for making some difficult decisions, I don’t think it should be applied to critical ones. Rather, focusing on customer data and customer feedback is the reality when defining the problem and then moving on to the solution.

2) A/B testing is validation not truth.
Often I find that junior product managers want to focus on testing, testing, testing. From experience, the more important part is crafting a meaningful hypothesis to test rather than getting one metric to be greater than another.

More often than not, if an experiment wins by a wide margin, something is not right. Give credit to the organisation you joined: most of the low-hanging fruit has already been eaten. If you have a huge win, it’s highly probable that it’s a fluke. You measured wrong, your hypothesis doesn’t actually address the customer problem, or it was a seasonal occurrence. I encourage you to take a deep look at the data across multiple facets and see if that huge win (which, by the way, gets you plenty of kudos) is actually going to improve your product.

3) Failure is not optional.
As a product manager, you should be constantly questioning your own opinions and deductions about how the world works. Your experience, although great (or limited), doesn’t necessarily match how your customers behave. The only way to craft a winning path is to avoid the obstacles along the way, and that means making mistakes – fast – and changing direction quickly.

4) You are the blocker
I’ve found that often the PM is the blocker to progress. Why? Because they don’t communicate their vision well enough. Tell the devs exactly what you want — and what you don’t. Work with them to a great compromise. Coordinate with design on your thoughts — and don’t rely on them to eventually iterate into what you want. Check your pulse with user research and quickly incorporate their feedback. Communicate with other product managers what you plan to do so they can adjust as required.

5) Micro pivots are as effective as major movements
Often you’ll instrument something that shows one of your earlier convictions was wrong, or even dead wrong. It’s OK to make a small change that fixes that without changing your main product. Not everything needs to be a huge change with tens of other teams involved. Make a judgement call, commit to the changes that you think are necessary to improve the product, and be prepared to turn off that feature flag if it doesn’t work and keep it if it does.

These are just a few of the principles that guide me when developing a product. Disagree? Agree? Please reach out to me on Twitter @plktio or by email (see my contact info) and let me know what you think. I’ll publish the best comments on my blog next week!

Building plkt.io
4 March 2019

I’m planning out my new website and homepage: plkt.io. It’s where all my posts will live on a static site that I’ll design and build myself. I’ll also be integrating services such as GitLab, Discourse, and other interactive open source projects to make my website more inclusive. Furthermore, finances permitting, I’ll be providing a limited amount of bandwidth for sharing custom builds of various open source projects, such as Firefox, to be available for download.

This is a big step for me and the first time I’ve focused on really building a website that truly represents me beyond my writing ability. It’s also the first time I’ve used modern open source web stacks to build a mobile/desktop website.

I also plan to start streaming my coding sessions via Twitch and store recordings on this website. I really think interactive and public coding sessions are only going to become bigger in the future.

Stay tuned for more info on when the first draft will be pushed online.

2019 Goals and Tentatives
9 January 2019

I’m not one to dive into too much detail about my personal, soft development goals but I do have some high level technical-based objectives for this website and myself for the remainder of 2019.

– Build a better landing page for this website that makes it simple to find an article you’re interested in reading. This website contains over ten years of content and right now all people see is the latest post. Arriving at my page from a web search is OK but doesn’t lend itself to much appreciation of the site’s content.
– Gather some basic metrics about the site: time spent on each page, etc., without using Google. I already get the default Apache logs but this is really just for debugging page issues. As of now I don’t get many comments on my content so there’s really no feedback loop.
– Cover some AI/ML topics – explore using CUDA acceleration. I’m fascinated by the better approximations afforded by AI/ML and want to see if I can use this to generate a revenue stream.
– Dive deeper into Docker and K8s. Both are quickly becoming the gold standard for containerisation of applications and I would like to see them continue to grow. In some ways, however, I wonder if overuse of the two leads to poor optimisation and growing long-term costs for an organisation, even when you consider the collaboration benefits.
– Cover basic testing, CI, and CD practices. I have to acknowledge that code and testing now go hand in hand and so would like to become more up to date in this area.
– Optimise the site display on mobile. I don’t have analytics on this but I suspect that most people land here from mobile. The theme I’m using is outdated and doesn’t display well on a low end device.
– Contribute to Firefox – I’m concerned about the homogenisation of browser backends.
– Explore more media @ home solutions like Plex. My usage of YouTube has gone down over the years as I’ve acquired more boxsets and I’d like to watch the higher quality video on my local network instead of through YouTube.
– More posts around system administration. I consider myself an amateur system administrator and can rise to the challenge of most system administration problems. Usually this involves late nights crawling through obtuse documentation and programs not conforming to said documentation but I enjoy the challenge.
– Explore using a remote PC to stream gaming content over a local network. I enjoy my games but don’t like to use an entire PC just for gaming. So, I’d like to pass the graphics card to a virtual machine that can handle gaming on demand.

Let me know if you have any suggestions on the above.

Fresh Ubuntu 18.04 Install
4 January 2019

There’s something insanely refreshing about a brand new Linux install. I recently retired my Fortnite gaming rig and turned it into a Docker/CUDA server for running AI/ML workloads on my two NVIDIA GPUs. While Fortnite will certainly be missed, there’s much more utility in having a bespoke local Linux server.

Here are the typical things I do upon setting up a fresh bare-metal headless machine.

Schedule a daily wake up
It’s easy to take for granted that you have physical access to the server. But what happens if you shut down the computer by accident or there’s a power outage? I immediately make sure RTC wake-up is enabled in the BIOS and schedule a daily wake-up if the option is available.
I also schedule a daily wake-up in the OS for added convenience. Unfortunately, if you actually wanted to keep the server off and disconnected, your best bet would be to mangle the boot config until you have access again.
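One way to do the OS-level daily wake-up is a root cron entry that arms the RTC each night (a sketch using util-linux’s rtcwake; the times are placeholders):

```
## root crontab: at 23:00, arm the RTC to wake the machine at 06:55
0 23 * * * /usr/sbin/rtcwake -m no -t "$(date -d 'tomorrow 06:55' +\%s)"
```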

Set up the backup storage
All my local servers have an additional drive, either SSD or HDD, for backups. My preference is to have an ext4 mirrored RAID mounted at either /data or /srv. If I set up a *BSD instance, then I’ll opt for ZFS if I can.
Backups are not as automated as I’d like currently, as I’m still fine-tuning the manual process prior to automating it. On Linux, I’ll do a double backup. First I create an LVM logical volume snapshot of the root partition. I then take an image of the ext4 filesystem using e2image and place that on the additional drive. Here’s a sample invocation:

# lvs
  LV                     VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root                   vg  -wi-ao---- <22.31g
  swap_1                 vg  -wi-ao---- 976.00m
# lvcreate --size 1G --snapshot --name root-snapshot-20190104 /dev/vg/root
# lvs
  LV                     VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root                   vg  owi-aos--- <22.31g
  root-snapshot-20190104 vg  swi-a-s---   1.00g      root   0.01
  swap_1                 vg  -wi-ao---- 976.00m
# e2image -r -a -p /dev/vg/root-snapshot-20190104 /srv/root-snapshot-20190104.img
# mksquashfs /srv/root-snapshot-20190104.img /srv/root-snapshot-20190104.squashfs -info

It's possible to combine the last two commands like so, although this seems error-prone unless automated in a script.
# mktemp -d
# mksquashfs /tmp/tmp.FM0xNQ0v4r /srv/root-snapshot-20190104.squashfs -p "root-snapshot-20190104.img f 444 root root e2image -r -a /dev/vg/root-snapshot-20190104 -"

For the second backup, I'll just tar up the filesystem, ignoring /dev and mounted filesystems.
# mktemp -d
# mount /dev/vg/root-snapshot-20190104 /tmp/tmp.1cu7jszuVl
# echo "'root-snapshot-20190104.img': snapshot of system after initial install of Ubuntu 18.04 with OpenSSH server enabled by default" >> /srv/backup-history.log
# tar cf /srv/root-snapshot-20190104.tar --one-file-system /tmp/tmp.1cu7jszuVl
# umount /tmp/tmp.1cu7jszuVl

And finally I'll remove the snapshot and continue as if nothing happened :).
# lvremove /dev/vg/root-snapshot-20190104

Set the hostname
IP addresses are rather annoying to remember on a DHCP home network. They change frequently and are generally not what you want to use for a custom network. There are two solutions to this: either a) reference the computers by their MAC address, or b) use multicast DNS to reference them by hostname. I always opt for the latter. On macOS, this works out of the box. On Ubuntu, you need to install Avahi. Normally this automatically configures your nsswitch file, but in the event that's not done, you'll need to change the hosts line.
$ sudo apt install libnss-mdns
## /etc/nsswitch.conf
hosts: files mdns4_minimal [NOTFOUND=return] dns

Now you can ping your server at hostname.local.

Lock down the installation
Usually the only optional component I add is the OpenSSH server. I generate keys on my local computer and transfer them over using ssh-copy-id. Then I disable root login and password authentication.
## /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin no

Get comfy
If you don't feel like you're being plugged into the Matrix every time you open the terminal, you're doing something wrong. The terminal should feel like the de facto interface to your servers -- at least until something better comes along.

Setting up macOS Mojave
23 November 2018

I recently upgraded my 2011 Macbook Air to a 2018 Macbook Pro running macOS Mojave and I must say it runs splendidly. Here are a few things I install upon a new installation to become comfortable.

Graphical apps

  • Firefox is the default browser for me on macOS, despite it always “using significant energy.”
  • Scrivener is used for authoring posts offline, which I then copy and re-format into WordPress.
  • I use Xcode extensively to write some relatively simple C programs. I also use the Command Line Tools that come with the installation, although you don’t need to install them together.
  • Screen Sharing is a handy VNC client that works better than X forwarding, at least with the default installation.
  • Notes is the preferred note taking app that syncs pretty seamlessly across my phone and computer. Usually I put quick unsorted notes here and later export them to OneNote in a finalised form for reference later.
  • Microsoft Office is the de facto application suite that I’ve been using for well over a decade. It’s required for building decks, long form writing, and handling mail that needs to be shared in a business setting. If I keep the output local to me, then I’ll stick with LaTeX.
  • VLC is the default for watching videos or listening to radio streams, such as the excellent Groove Salad on Soma.FM. I’ve tried using the Radio section on iTunes but it just seems to be too buggy compared to a bespoke .pls playlist file.

Terminal apps

I use a variety of console apps – some of which aren’t available on macOS by default. Xocite.com has a pretty good tutorial on getting started with setting up local apps on a Mac.


Screen is my favourite terminal multiplexer. Here is the config I use.

# Interaction
escape ``
screen -t default 1 bash
bindkey "^[[1;5D" prev # ctrl-left
bindkey "^[[1;5C" next # ctrl-right

# Behavior
term screen-256color
startup_message off
vbell off
altscreen on
defscrollback 30000


Vim is an improved version of the Vi editor, according to its documentation. For me, it’s my primary text editor on the console. The configuration below goes on every user account. As you can tell, I like my tabs two characters wide, with spaces. I don’t use any plugins.

filetype on
filetype plugin on
filetype plugin indent on

syntax enable
set ttyfast

set tabstop=2
set shiftwidth=2
set softtabstop=2
set smarttab
set expandtab
set autoindent
set smartindent
set cursorline
set nobackup
set nowritebackup
set nocompatible
set noswapfile
set backspace=indent,eol,start

set secure
set enc=utf-8
set fenc=utf-8
set termencoding=utf-8


I use the excellent LaTeX for authoring my resume and diagramming in my blog posts. I primarily use MacTeX for now but would like to switch to something that doesn’t need root privileges – essentially just a package I can extract in any directory and start using.

Keyboard shortcuts

Yes, believe it or not, knowing your keyboard shortcuts goes a long way on macOS, especially when the laptop is docked. I primarily use the built-in window management keyboard shortcuts.

  • ⌃↑: Mission Control, also bound to top mouse button
  • ⌃↓: Application window, also bound to bottom side mouse button
  • ⌃←: Move left a space
  • ⌃→: Move right a space

Book Review: The Courage To Be Disliked
3 August 2018

This is part of a monthly series where I give a brief summary of books I’ve read. These should serve as a handy reference when my memory of the book fades.

The Courage to be Disliked is a book by two Japanese authors, Ichiro Kishimi and Fumitake Koga, about the philosophy of Alfred Adler. The book is presented as a discussion between a philosopher and a young student. This format works well and makes the book easy to read and follow. Some of the tenets presented inside are certainly difficult to comprehend (e.g. your community consists of the entire universe) but after some reflection they make sense within the narrative of the book.

Here are my key takeaways based on my notes from the reading. Writing these notes doesn’t necessarily mean I agree with everything written. If I would say there was one overarching principle, it would be “live life like you are dancing earnestly.”

  • 3 tenets of Adler’s philosophy: people can change; the world is simple and life is simple too; everyone can be happy
  • Live in the present: focusing on the past causes a deterministic way of thinking
    • Example: trauma doesn’t exist. People act according to goals they have set
  • Choosing your lifestyle is your choice, no one else’s
  • All problems are interpersonal relationship problems
  • Seeing everyone as comrades removes the need for competition and allows you to experience their happiness as your own
  • You can choose your lifestyle again if you’d like. Life lies are convenient axioms that we blame for our lifestyle
  • Changing a lifestyle generates anxiety so we need courage to do so
  • Everything you have done up to this point should have no bearing on how you live moving forward
  • Use what you have; don’t blame it (lack of happiness) on what you don’t
  • Recognition is a fallacy
  • Separation of tasks solves most problems: understand which tasks are yours and which are theirs
  • Happiness is a feeling of community
  • Be happy that people exist; not that they did something
  • Trust is reciprocal and binding; confidence is one-way and liberating
  • The guiding star in life is contribution to others
  • Life in general has no meaning: don’t treat life as a line, treat life as a series of dots – a series of moments

I would recommend this book as a weekend read for anyone who would like an alternative take on happiness. The table of contents is exhaustive and gives you a good profile of the book.

Built with WordPress and Vim
© 2008 to 2021 Jiff Slater