Categories
Deployment DevOps Docker Optimization

Migrating this blog to self-hosted again

Four years ago I migrated this blog to Hostinger from a self-hosted Docker instance. With the 48-month plan ending in 4 days, I’ve gone back to self-hosting once again.

Why?

Mainly cost 💸. The price has more than doubled since the last time I paid for it: it’s the difference between paying $44.16 vs $104.62 (after discounts) for 48 months of hosting. For something that I barely use, and that gets barely any traffic, there’s little to no incentive for me to pay ~$2.18 USD/month for this blog.

                    Current    Renewed
48 months (USD)     $44.16     $104.62
Monthly (USD)       $0.92      $2.18

“Well surely self-hosted can’t be free right?”

You’re right, it isn’t “free” per se, but because I have a home lab server running anyway, I might as well use the spare capacity to host the blog. (again, the home lab is something I should write about, hopefully next week)

It took me about an hour to fully migrate over, and it was a smooth process with only a tiny bit of pain (self-inflicted carelessness).

The home lab is a mini PC with a measly Intel N100 CPU, 16GB of RAM, and a 500GB SSD. I was shocked to find out how many services it can host comfortably; it has completely changed my view on what’s possible with these small machines.

This blog is hosted in Docker as expected. But it’s a Docker container, inside an Ubuntu VM, inside a Proxmox host. The idle stats are pretty decent, consuming about 1GB of RAM.

NAME            CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O      
wordpress       0.01%     355.5MiB / 7.752GiB   4.48%     4.12GB / 288MB    75.8MB / 2.25GB
wordpress-db    0.71%     533.6MiB / 7.752GiB   6.72%     78.8MB / 3.6GB    3.45MB / 1.87GB
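The pair itself is nothing exotic. Recreating something similar looks roughly like this (image tags, passwords, ports and volume paths below are placeholders, not my actual values):

# shared network so the containers can reach each other by name
docker network create wp-net

# database container (placeholder credentials)
docker run -d --name wordpress-db --network wp-net \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -e MYSQL_DATABASE=wordpress \
  -e MYSQL_USER=wordpress \
  -e MYSQL_PASSWORD=changeme \
  -v /srv/wordpress/db:/var/lib/mysql \
  mariadb:10

# WordPress container, pointed at the database above
docker run -d --name wordpress --network wp-net \
  -e WORDPRESS_DB_HOST=wordpress-db \
  -e WORDPRESS_DB_USER=wordpress \
  -e WORDPRESS_DB_PASSWORD=changeme \
  -e WORDPRESS_DB_NAME=wordpress \
  -v /srv/wordpress/html:/var/www/html \
  -p 8080:80 \
  wordpress:latest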

To make this work, I employ the usual caching strategies and pre-load the pages so that they’re already cached on the server and ready to go (there’s a rough manual cache-warming sketch after the short list below).

  • WordPress: Some kind of caching plugin, e.g. WP-Optimize + Jetpack
  • CDN: Cloudflare
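The plugins handle the pre-loading for me, but for reference, warming the cache by hand is as simple as hitting every URL in the sitemap once. A rough sketch, assuming the sitemap lists page URLs directly (the domain is a placeholder):

# fetch every URL listed in the sitemap once so the page cache is warm
curl -s https://example.com/sitemap.xml \
  | grep -oP '<loc>\K[^<]+' \
  | xargs -n1 -P4 curl -s -o /dev/null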

While setting this up, I also found out that there’s Redis object caching for WordPress, but it only really pays off if the site generates a lot of database reads. Based on gut feel, mine doesn’t, so I’m omitting Redis until the day this setup can’t keep up anymore.

All things considered, pretty good performance!

lighthouse report from chrome: 99 performance

Of course, I’ve no idea how this’ll perform under load but given that there’s barely any dynamic content on this site, it’s unlikely that this setup will buckle under any typical loads.

The blog is exposed to the internet with Cloudflare Tunnel, which saves me the typical hassle of securing the connection to my server with origin certificates.

illustration from: https://blog.cloudflare.com/getting-cloudflare-tunnels-to-connect-to-the-cloudflare-network-with-quic
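In practice it’s just one more container: the cloudflared connector dials out to Cloudflare’s edge, so no ports need to be opened on the router. Roughly (the token below is a placeholder for the one Cloudflare generates when you create the tunnel):

# cloudflared makes an outbound connection to Cloudflare's edge;
# the tunnel token comes from the Zero Trust dashboard when the tunnel is created
docker run -d --name cloudflared --restart unless-stopped \
  cloudflare/cloudflared:latest \
  tunnel --no-autoupdate run --token <TUNNEL_TOKEN>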

It’s secure and easy to set up; I’d recommend it to anyone who wants to host public services. There is one major caveat: Cloudflare is able to see all traffic between your origin server and Cloudflare, so you have to trust them. Honestly, it’s kind of inevitable that you place your trust in someone or something, and given their track record of transparency when there’s downtime or when shit hits the fan, they’ve earned mine.

While I was aiming for zero downtime, there unfortunately ended up being about 10 minutes of it.

the importance of uptime monitoring, which I’m planning to self-host in the near future

I had my new site up and running, so it was a simple DNS cutover. Unfortunately, I forgot to take DNS propagation time into account, and clients that still had the old IP ended up not being able to reach the site. To be honest, I still don’t understand why it failed, because they should have kept seeing the old site and seamlessly switched over once the new record kicked in. Let me know in the comments if you have any ideas!
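A note for next time: lowering the record’s TTL well before the cutover, and checking what a few public resolvers actually return, makes it obvious when the new record has propagated. For example (the hostname is a placeholder):

# see which IP (and remaining TTL) different public resolvers hand out for the record
dig +noall +answer blog.example.com @1.1.1.1
dig +noall +answer blog.example.com @8.8.8.8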

Summary 📖

Thanks to the beauty of virtualisation, I’ve saved myself $104.62 USD over 4 years. If this mini PC server lasts anywhere near as long as that, it will have paid for itself plus interest (counting the other services it’s hosting too).

Now, on to figuring out an automated backup solution…

Categories
Deployment DevOps Docker Learning Productivity

Miniflux: self-hosted RSS reader

In an attempt to stay more updated with the things happening online, I recently started following the top stories on Hacker News via the Telegram channel. But I very quickly realized that checking news via Telegram is just not part of my routine.

What about RSS readers? I remember using Google Reader donkey’s years ago, before it was abruptly shut down, and I never got back into RSS readers after that; probably something to do with the trauma of suddenly losing all my feeds without a good alternative.

In my search for something that just “works”, Dickson hooked me up once again with a recommendation that does exactly what I asked for: it works.

TLDR; it’s a very simple and opinionated RSS reader that has a self-hosted option.

Setup

It was so simple that I got the Docker container up and running on my Synology NAS within minutes. Here’s the docker-compose.yml file that I used to get going. Docs on configurable parameters
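It boils down to just two containers: Postgres and Miniflux itself. In docker run terms (credentials, ports and the volume path below are placeholders; the environment variables follow Miniflux’s documentation), the setup looks roughly like this:

docker network create miniflux-net

# Postgres backing store for Miniflux (placeholder credentials)
docker run -d --name miniflux-db --network miniflux-net \
  -e POSTGRES_USER=miniflux \
  -e POSTGRES_PASSWORD=secret \
  -e POSTGRES_DB=miniflux \
  -v /volume1/docker/miniflux/db:/var/lib/postgresql/data \
  postgres:15

# Miniflux itself, with migrations and an initial admin user
docker run -d --name miniflux --network miniflux-net \
  -e DATABASE_URL="postgres://miniflux:secret@miniflux-db/miniflux?sslmode=disable" \
  -e RUN_MIGRATIONS=1 \
  -e CREATE_ADMIN=1 \
  -e ADMIN_USERNAME=admin \
  -e ADMIN_PASSWORD=changeme123 \
  -p 8085:8080 \
  miniflux/miniflux:latest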

Categories
Deployment Productivity Thoughts

Setting up WebDav

In my pursuit of Building a second brain, I hit a blocker fairly early on: Obsidian doesn’t currently have a mobile app. This makes the experience rather disjointed, as I can’t build on it when I’m not sitting at a computer, nor can I refer to my notes when I need them. To remedy this, I searched high and low for an application that has [[url handling]] as a feature, and finally stumbled upon 1Writer as a “good enough” solution while I was finding more uses for my first iPad.

In order to sync my notes across multiple devices, I either need to pay $10/month for Dropbox, pay for some other service, or find a self-hosted option.

(Pretty pissed that Dropbox decided to limit the number of devices that can sync, else I wouldn’t have had to spend so much time on this.)

Choices 💭

Of course, being Asian, I went with the free option of self-hosting. The protocol of choice was WebDAV, because it’s the one 1Writer supports and I’ve had some experience with it in the past.
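For flavour, a Docker-based WebDAV server is only a few lines. Here’s a rough sketch using the bytemark/webdav image (the folder, username, password and port are placeholders):

# simple WebDAV server; 1Writer would then sync against http://<server>:8081/
docker run -d --name webdav --restart unless-stopped \
  -e AUTH_TYPE=Basic \
  -e USERNAME=notes \
  -e PASSWORD=changeme \
  -v /srv/obsidian-vault:/var/lib/dav \
  -p 8081:80 \
  bytemark/webdav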

Categories
Deployment DevOps Learning Weekly

Weekly: Migration

The past week has been extremely exciting and nerve-wracking. My team has finally completed the migration from on-premises to the cloud. It’s the first time I’ve done anything like this, and I’m blessed to have had someone senior lead us through the migration period.

PS: I wrote this but forgot to post it, so it actually happened 2-3 weeks ago.

I’m a part of the MyCareersFutureSG team: our users are the working population of Singapore and we host hundreds of thousands of job postings, so there were definitely some challenges in migrating the data.

It’s the first time I’ve handled such a huge amount of data in a cross-platform migration, and the validation and verification process was really scary, especially when we couldn’t get the two checksums to match. It was also the first time I’ve done base-image upgrade rollovers across multiple Kubernetes clusters. There were multiple occasions where we were scared the cluster would completely crash, but it managed to survive the transition.

Let me sum up the things I’ve learnt over the migration.

  • When faced with large amounts of data, divide and conquer. Split the data into smaller subsets so that you have enough resources to compute.
  • When rolling nodes, having two separate auto scaling groups allows you to test the new image before rolling every single node.
  • If you want to tweak the ASG itself, detach all the nodes first so that you have an “unmanaged” cluster; then no matter what you do to the existing ASG, at least your cluster will stay up.
  • When your database tells you that the checksums don’t match, make sure that when you dump the data it’s in the right collation or encoding format.
  • Point your error pages at a static provider like S3, because if you point them at some live resource, there’s a chance a misconfiguration will show an ugly 503 message. (something that happened briefly for us)
  • Data of less than 100GB is somewhat reasonable to migrate over the internet these days.
  • Running checksum hashes on thousands and thousands of files is quite compute- and memory-intensive, so provision enough resources for it (see the sketch after this list).
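On the checksum point: one way to do that kind of file-level check is to hash everything on both sides and compare a single combined digest. A rough sketch (the path is a placeholder, not our actual layout):

# run on both the old and new hosts, then compare the final hash;
# sorting first keeps the combined digest stable across filesystems
cd /data/uploads && \
  find . -type f -print0 \
  | sort -z \
  | xargs -0 sha256sum \
  | sha256sum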

Overall, the migration actually went quite well and we completed it ahead of time. Of course, the testing afterwards is where we found bugs that had never been found before, because it was the first time in years that so many eyes were on the system at the same time.

The smoothness is also thanks to the team, who carefully planned the steps required to migrate the data over, and who set up streaming backups to the new infrastructure so that half of the data was already in place and we just needed to verify that the streamed data was bit-perfect.

A couple of weeks on, I realize how lucky I am to be blessed with an opportunity like this. I’ve just caught up with my friends, and most of the time their job scopes don’t really let them do something this far outside their usual work. Depending on your stage of life, that could be viewed as a pro or a con. I’m definitely viewing this 4-day migration effort over a public holiday weekend as a positive, because it’s something not everyone gets to experience so early in their career!

Categories
Deployment Optimization Weekly

Weekly: optimize everything!

Well a bunch of things happened this week but I think the general theme is to optimize everything. It’s just something that I do from time to time cause gaining efficiency pleases my soul (like the cost efficiency from switching hosting provider).

Speeding up my zsh shell launch

I’d been feeling like my shell (zsh) launches were getting slower and slower over time, with all the additional plugins and packages installed to make my life better. My workflow revolves a lot around the shell, so the waiting was starting to bother me.

TLDR; I managed to reduce the loading times from 1.xx seconds to 0.2x seconds.

The improvement was consistent across various devices; some actually took more than 2 seconds because of all the helper plugins I was using. On average, launches became about 5 times faster.

You can use this command to benchmark your shell speed.

for i in $(seq 1 10); do /usr/bin/time zsh -i -c exit; done

personal laptop: before optimisations
personal laptop: after optimisations

There was definitely a noticeable speed-up when I open a new tab, and it made my 5-year-old laptop feel way faster than before.
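If you want to see where the time actually goes, zsh also ships with a built-in profiler: load zprof at the top of your .zshrc and call it at the bottom, and the next shell launch prints a per-function breakdown.

# at the very top of ~/.zshrc
zmodload zsh/zprof

# ... plugins, completions, etc. ...

# at the very bottom of ~/.zshrc
zprof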

What did I do?

Categories
Deployment Docker

New server setup for 2019

First post of 2019! It’s time to dive into what I run and how I have my server configured. Well, technically it was configured in 2018, but it took a while to type this out.

There are 3 main services that I want to run; however, I’m running a total of 6 Docker containers on my DigitalOcean server.

It was rather smooth sailing, apart from the disaster that broke out right before I wanted to migrate this WordPress blog. Most of the Docker images that I used came from linuxserver.io; they provide really good, clean images that are used by millions of people. (I’m just too lazy to build my own images)

Categories
Deployment Docker Learning

When this site crashed

My WordPress blog crashed when I tried updating it to the new version 5.0. I swear I could hear a woman screaming in the background when I realized that everything had stopped working. It also turned out to be way more difficult to recover than expected because I was running multisite on it.

Gutenberg simply crashed everything

Basically, the new version of WordPress refused to play nice with the presumably outdated version of Gutenberg I had running on my semi-neglected blog. It crashed everything, including the other private sites I had running on this installation.

Categories
Deployment Learning

Introduction to Terraform

As I’m going to be interviewing for a job that works on this platform, I’ve decided to read up on what Terraform is (as well as Nomad, but that’s for another post). This is actually something I’ve wanted to explore for a while now, but haven’t had a good reason to get into it yet.

Terraform is very similar to AWS’s CloudFormation, which is basically Infrastructure as Code. The main advantage is that Terraform is platform-agnostic, which means I can use the same tool to deploy to multiple, different cloud providers. The catch, however, is that the configuration itself still differs, because the services differ between providers. Terraform provides the platform/tool to manage it all in one place, but it’s way more powerful than just that.

After watching a few talks and going through some documentation, the concept makes a lot of sense, and it might actually be pretty easy for me to deploy my own projects through this. Even though it would be a little pointless to use such a tool to deploy a single instance, hah.
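From what I’ve gathered, the day-to-day workflow is just a handful of commands: write the configuration, then let Terraform compute and apply the diff against what’s already deployed.

# initialise providers/modules in the current directory
terraform init

# show what would change without touching anything
terraform plan

# actually create/update the infrastructure
terraform apply

# tear everything down again when done experimenting
terraform destroy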

I’ve watched two talks about Terraform so far.

I gotta say, my mind is a little overloaded right now. But I am very impressed with how he managed to bring up an entire infrastructure across different platforms and regions in <10 mins. It’s also amazing that by SSH-ing into the bastion host, he had access to all nodes across the different regions and could schedule services to launch on them without switching regions (thanks to the VPN).

I am excited to see what type of projects I could test deploy with this new tool.

Categories
Deployment Learning

Backup for WordPress

Backups are essential to any system, especially for data that cannot easily be downloaded again, like a blog. Even though I should really employ a system-wide backup for my server, I’m still figuring out the most cost-effective and efficient way of making it happen.
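In the meantime, the bare minimum for a WordPress site is the database plus the wp-content directory. A rough, cron-able sketch (paths and credentials are placeholders):

#!/bin/sh
# dump the WordPress database and archive wp-content, stamped by date
STAMP=$(date +%F)
mysqldump -u wp_user -p'changeme' wordpress | gzip > /backups/wordpress-db-$STAMP.sql.gz
tar -czf /backups/wp-content-$STAMP.tar.gz -C /var/www/html wp-content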

Categories
Deployment Learning Security

Cloudflare CDN

In my attempts to get a valid SSL certificate for this site, I ended up cheating a little and making use of Cloudflare to do the securing for me instead.

Getting it set up was pretty straightforward, though I ran into some issues as I wasn’t familiar with Cloudflare’s infrastructure. I managed to set up full SSL encryption as shown in the diagram below.

First, I pointed my DNS NS records to Cloudflare, then generated the keypair on Cloudflare, imported it into my server, and updated the Nginx config file to point to those keys. Everything just automagically became secured with TLS, just like that. I made a few more optimizations to minify JS/CSS/HTML, as well as enforcing HTTPS for all of my sub-domains. Worked like a freaking charm.
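Concretely, the Nginx side is just two directives pointing at wherever the origin certificate and key are saved, plus a reload. A rough sketch (the file paths are placeholders, and the snippet would be include-d from the site’s server block):

# drop the cert paths into a snippet included by the server block,
# then check the config and reload
sudo tee /etc/nginx/snippets/cloudflare-origin.conf >/dev/null <<'EOF'
ssl_certificate     /etc/ssl/cloudflare/origin.pem;
ssl_certificate_key /etc/ssl/cloudflare/origin.key;
EOF
sudo nginx -t && sudo systemctl reload nginx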

SSL was my main concern when I decided to use Cloudflare, but even on the free tier there is basic protection against DDoS attacks, and my content is cached closer to visitors. This provides a nice, noticeable boost in performance; it also provides a good boost in security, helping my tiny server stay available, just in case.

In the midst of working on this, I ended up optimizing the site at the same time; it should feel a lot more responsive now. In the next post, I’ll write about the tweaks I made to make WordPress run a lot faster.