In my pursuit of building a second brain, I hit a blocker fairly early on: Obsidian doesn’t currently have a mobile application. This makes the experience rather disjointed, as I can’t build on my notes when I’m not sitting at a computer, nor refer to them when I need to. To remedy this, I searched high and low for an application that has [[url handling]] as a feature and finally stumbled upon 1Writer as a “good enough” solution while I was finding more uses for my first iPad.
In order to sync my notes across multiple devices, I either need to pay $10/month for Dropbox, pay for some other service, or find a self-hosted option.
(Pretty pissed that Dropbox decided to limit the number of devices that can sync; otherwise I wouldn’t have had to spend so much time on this.)
Of course, being Asian, I went with the free option of self-hosting. The protocol of choice was WebDAV, because it’s the one 1Writer supports and I’ve had some experience with it in the past.
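For anyone curious, a self-hosted WebDAV endpoint can be stood up with something as lightweight as rclone (the directory, user, and port below are placeholders, not necessarily my exact setup):

```shell
# Serve a notes directory over WebDAV with basic auth.
# /srv/notes, the username, and the port are illustrative.
rclone serve webdav /srv/notes \
  --addr :8080 \
  --user obsidian --pass 'change-me'
```

Point 1Writer (or any WebDAV client) at `http://your-server:8080` and it can read and write the same files your desktop vault uses.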
The past week has been extremely exciting and nerve-wracking. My team has finally completed the migration from on-premises to the cloud. It’s the first time I’ve done anything like this, and I’m blessed to have someone senior leading us through the migration period.
ps: I wrote this but forgot to post it, so this actually happened 2–3 weeks ago.
I’m part of the MyCareersFutureSG team: our users are the working population of Singapore, and we host hundreds of thousands of job postings, so there were definitely some challenges in migrating the data.
It’s the first time I’ve handled such a huge amount of data in a cross-platform migration, and the validation and verification process was really scary, especially when we couldn’t get the two checksums to match. It’s also the first time I’ve done a base-image upgrade rollover across multiple Kubernetes clusters. There were multiple occasions where we were scared the cluster would completely crash, but it survived the transition.
Let me sum up the things I’ve learnt over the migration.
When faced with large amounts of data, divide and conquer. Split the data into smaller subsets so that you have enough resources to process each one.
When rolling nodes, having two separate auto-scaling groups allows you to test the new image before rolling every single node.
If you want to tweak the ASG itself, detach all the nodes first so that you have an “unmanaged” cluster; then no matter what you do to the existing ASG, your cluster will at least stay up.
When your database tells you that the checksums don’t match, make sure that when you dump the data it’s in the right collation or encoding format.
Point your error pages at a static provider like S3, because if you point them at a live resource, there’s a chance a misconfiguration will show an ugly 503 message. (Something that happened briefly for us.)
Data under 100 GB is reasonably practical to migrate over the internet these days.
Running checksum hashes on thousands and thousands of files is quite computationally and memory intensive; provision enough resources for it.
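The checksum and collation lessons can be made concrete. A minimal sketch of the kind of manifest comparison this involves (file contents and paths here are illustrative): pin the locale with `LC_ALL=C` so the sorted manifests compare byte-for-byte regardless of which machine produced them.

```shell
# Build sorted checksum manifests of two copies of a dataset and diff them.
# LC_ALL=C pins the sort collation so manifests from different machines
# (with different locales) sort identically.
export LC_ALL=C

src=$(mktemp -d); dst=$(mktemp -d)
echo "job posting 1" > "$src/a.txt"     # stand-in for real migrated data
cp "$src/a.txt" "$dst/a.txt"

(cd "$src" && find . -type f -print0 | xargs -0 sha256sum | sort) > /tmp/src.manifest
(cd "$dst" && find . -type f -print0 | xargs -0 sha256sum | sort) > /tmp/dst.manifest

# Exits non-zero (and prints the differing lines) if any file diverges.
diff /tmp/src.manifest /tmp/dst.manifest && echo "manifests match"
```

For very large trees, this is also where divide and conquer helps: shard the manifest by top-level directory and hash the shards on separate workers, then concatenate and sort the results.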
Overall, the migration went quite well and we completed it ahead of time. Of course, the testing afterwards was where we found bugs we had never seen before, because it was the first time in years that so many eyes were on the system at once.
The smoothness was also thanks to the team, who carefully planned the steps required to migrate the data over, as well as set up streaming backups to the new infrastructure so that half of the data was already in place and we just needed to verify that the streamed data was bit-perfect.
Since it’s been a couple of weeks since this happened, I’ve realized how lucky I am to be blessed with an opportunity like this. I’ve just caught up with my friends, and most of the time their job scopes don’t really allow them to do something that far outside their usual work. Which, depending on your stage of life, could be viewed as a pro or a con. I’m definitely viewing this four-day migration effort over a public holiday weekend as a positive, because it’s something not everyone gets to experience so early in their career!
Well, a bunch of things happened this week, but I think the general theme is to optimize everything. It’s just something I do from time to time, because gaining efficiency pleases my soul (like the cost efficiency from switching hosting providers).
Speeding up my zsh shell launch
My shell (zsh) launches had been feeling slower and slower over time, with all the additional plugins and packages I’d added to make my life better. My workflow revolves a lot around the shell, so the waiting was starting to bother me.
TL;DR: I managed to reduce the loading time from 1.xx seconds to 0.2x seconds.
The improvement was consistent across my various devices; some actually took more than 2 seconds because of all the helper plugins I was using. On average, launches were 5 times faster.
You can use this command to benchmark your shell’s startup time:
for i in $(seq 1 10); do /usr/bin/time zsh -i -c exit; done
There was definitely a noticeable speed-up when opening a new tab, and it made my five-year-old laptop feel way faster than before.
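If you want to find out where your own shell is spending its time, zsh ships a built-in profiler. A sketch of what to temporarily add to `~/.zshrc` (this isn’t my exact config, just the standard technique):

```shell
# Very first line of ~/.zshrc:
zmodload zsh/zprof

# ... all your plugins, completions, and config as usual ...

# Very last line of ~/.zshrc:
zprof   # prints a per-function time breakdown on every launch
```

The slowest entries in the `zprof` output are your candidates for lazy-loading or removal; rerun the timing loop above after each change to confirm the gain.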
First post of 2019! It’s time to dive into what I run on my server and how it’s configured. Well, technically it was configured in 2018, but it took a while to type this out.
There are 3 main services I want to run, though in total I’m running 6 Docker containers on my DigitalOcean server.
It was rather smooth sailing, apart from the disaster that broke out right before I wanted to migrate this WordPress blog. Most of the Docker images I used came from linuxserver.io; they provide really good, clean images used by millions of people. (I’m just too lazy to build my own.)
My WordPress blog crashed when I tried updating it to the new version 5.0. I swear I could hear a woman screaming in the background when I realized that everything had stopped working. It also turned out to be way more difficult to recover than expected, because I was running multi-site on it.
Gutenberg simply crashed everything
Basically, the new version of WordPress refused to play nice with the presumably outdated version of Gutenberg I had running on my semi-neglected blog. It crashed everything, including the other private sites running on the same installation.
As I’m going to be interviewing for a job that works on this platform, I’ve decided to read up on what Terraform is (as well as Nomad, but that’s for another post). This is actually something I’ve wanted to explore for a while now, but I haven’t had a good reason to get into it until now.
Terraform is very similar to AWS’s CloudFormation: both are Infrastructure as Code. The main advantage is that Terraform is platform-agnostic, which means I can use the same tool to deploy to multiple cloud providers. The catch is that the code itself still differs between providers, because the services themselves differ. Terraform provides a single tool to manage it all in one place, and it’s far more powerful than just that.
After watching a few talks and going through some documentation, the concept makes a lot of sense, and it might actually be pretty easy to deploy my own projects through it, even though it would be a little pointless to use such a tool for a single instance, hah.
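As an illustration of how little code a single instance takes (the region, AMI ID, and names below are placeholders, not a real deployment of mine):

```hcl
provider "aws" {
  region = "ap-southeast-1"
}

resource "aws_instance" "blog" {
  ami           = "ami-12345678"  # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "side-project"
  }
}
```

Then `terraform init`, `terraform plan` to preview the changes, and `terraform apply` to create the instance; swapping the provider block is the starting point for targeting a different cloud.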
I gotta say, my mind is a little overloaded right now. But I am very impressed with how he managed to bring up an entire infrastructure across different platforms and regions in under 10 minutes. It’s also amazing that by SSH-ing into the bastion host, he had access to all nodes across the regions and could schedule services to launch on them without switching regions (thanks to the VPN).
I am excited to see what type of projects I could test deploy with this new tool.
Backups are essential to any system, especially for data that cannot easily be downloaded again, like a blog. Even though I should really employ a system-wide backup for my server, I’m still finding the most cost-effective and efficient way of making it happen.
In my attempts to get a valid SSL certificate for this site, I ended up cheating a little and letting Cloudflare do the securing for me instead.
Getting it set up was pretty straightforward, though I ran into some issues as I wasn’t familiar with Cloudflare’s infrastructure. I managed to set up full SSL encryption, as shown in the diagram below.
First, I pointed my DNS NS records to Cloudflare, then generated a keypair on Cloudflare, imported it into my server, and updated the Nginx config file to point at those keys. And everything automagically became secured with TLS, just like that. I made a few more optimizations to minify JS/CSS/HTML, as well as to enforce HTTPS for all of my sub-domains. Worked like a freaking charm.
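The Nginx side boils down to just a few lines. A sketch of the relevant server block (the certificate paths are where I happened to put the Cloudflare origin keypair; yours may differ):

```nginx
server {
    listen 443 ssl;
    server_name lordofgeeks.com;

    # Origin certificate and key generated in the Cloudflare dashboard
    ssl_certificate     /etc/nginx/ssl/cloudflare-origin.pem;
    ssl_certificate_key /etc/nginx/ssl/cloudflare-origin.key;

    # ... the rest of the site config ...
}
```

With Cloudflare’s SSL mode set to “Full”, visitors get Cloudflare’s certificate on the edge, and Cloudflare talks to the origin over this keypair.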
SSL was my main concern when I decided to use Cloudflare, but even on the free tier there is basic protection against DDoS attacks, and my content is cached closer to visitors. This provides a noticeable boost in performance; it also provides a good boost in security, helping my tiny server stay available, just in case.
In the midst of working on this, I ended up optimizing the site at the same time; it should feel a lot more responsive now. In the next post, I’ll write about the tweaks I made to make WordPress run a lot faster.
I was sick of downloading my shows manually; it actually takes up quite a bit of time if you add it up over the years. Before I had my server set up, I was running Deluge with the YaRSS2 plugin, which worked wonderfully well as long as my computer was turned on (kind of a power hog).
So… wow, I finally managed to get it all up and running. The amount of effort was way more than I would’ve liked, but at least it’s done now. There’s a ton of things I’d like to write about, especially the troubleshooting steps I took, so that it’ll be easier to migrate this configuration in the future.
First of all, I tried on my own to get subdomain routing working with jwilder/nginx-proxy, along with MariaDB and the official WordPress image.
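Roughly, the stack I was aiming for looks something like this compose file (a sketch from memory; passwords, names, and the hostname are placeholders):

```yaml
version: "3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: wordpress

  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: change-me
      VIRTUAL_HOST: blog.lordofgeeks.com  # picked up by nginx-proxy for routing
```

nginx-proxy watches the Docker socket and generates routes automatically from each container’s `VIRTUAL_HOST` variable, which is what makes the subdomain routing work.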
I ended up not going with the docker-compose method because I was trying to troubleshoot why I wasn’t able to obtain an SSL certificate from Let’s Encrypt. Bad news: SSL still isn’t working yet, and while debugging I hit the rate limit on the number of certificates I could request per hour/day/week. Hopefully once that’s sorted out, this site will have a proper SSL certificate.
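A tip I learnt the hard way: Let’s Encrypt has a separate staging environment with much more generous rate limits, so you can debug issuance against it first (domain and webroot below are placeholders):

```shell
# Issue against the staging CA while debugging. The resulting
# certificate is untrusted by browsers, but failed or repeated
# attempts don't count against the production rate limits.
certbot certonly --webroot -w /var/www/html \
  -d example.com --staging
```

Once the staging run succeeds, drop `--staging` and request the real certificate once.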
I wanted to have the ability to host multiple WordPress sites, for my own testing/development as well as for my freelance work. Instead of running a separate new WordPress installation every time I need a new site, multi-site allows me to run multiple sites off a single installation and manage them through a centralized zone.
There are two ways of running a multi-site network: sub-domains or sub-directories.
The reasons for choosing sub-directories were pretty simple for me:
There is no need for pretty URLs, e.g. xyz.lordofgeeks.com, for the sites I’m hosting.
Let’s Encrypt doesn’t offer wildcard certificates, where one certificate can cover all sub-domains under *.lordofgeeks.com.
It makes sense for all of the sites to live under blog.lordofgeeks.com/[name-of-site].
For point 2: from 2018 onwards, Let’s Encrypt will offer wildcard certificates. So all my effort in setting this up will be for nought, but it’s still a good learning experience.
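For reference, wildcard certificates require the DNS-01 challenge (proving control of the domain via a DNS TXT record), so the eventual request looks something like this (my domain used purely as an example):

```shell
# Wildcard issuance must use the DNS challenge; --manual prompts
# for the TXT record to add unless a DNS plugin is configured.
certbot certonly --manual --preferred-challenges dns \
  -d 'lordofgeeks.com' -d '*.lordofgeeks.com'
```

With a single wildcard certificate, the sub-domain route becomes just as viable as sub-directories.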
Everything went fine until I added a new site, blog.lordofgeeks.com/dev/, and tried to upload a file that’s >1 megabyte.
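Worth noting: Nginx’s default `client_max_body_size` is 1 MB, which lines up suspiciously well with uploads over a megabyte failing. The usual fix touches both Nginx and PHP (the 64 MB values are illustrative):

```nginx
# In the http, server, or location block of the Nginx config:
client_max_body_size 64m;
```

PHP enforces its own limits separately, so `upload_max_filesize` and `post_max_size` in `php.ini` typically need raising to match, followed by a reload of both services.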