Keyboards: SA Profile Key Caps

As I slowly sink into the rabbit hole that is the world of mechanical keyboards, I finally got my hands on my first set of SA profile keycaps from Domikey [AliExpress].

left: kailh switch testers; right: Domikey ABS Doubleshot keycap set

For those not familiar with keycap profiles, here’s a quick primer. Keycaps come in various shapes and heights; the most common are OEM and Cherry profile, which are what you’ll find on most pre-built mechanical keyboards.

SA profile, as you can see, is much taller and has a retro look and feel to it. I have been curious about it because I think it has a typewriter look, which tickles my fancy.

I bought black and white because I realized that I didn’t have any basic sets like these. Coincidentally, they go really well with the black Drop CTRL keyboard that I’m currently using.

It feels distinctly different

I have tried OEM, Cherry, DSA, and XDA keycaps before, but SA is a whole different beast altogether. The main difference for me is that the gaps between the keys are much wider than I’m used to, and it feels like my fingers will fall into the crevices if I’m not careful with the way I type.

The other difference is the weight. Because SA is so much taller than the other profiles, there is obviously much more material to each cap, which makes them heavier. But because the keys are heavier, typing somehow feels lighter. They also feel sturdier because these are some thiccc bois.

I like it. The keyboard definitely has a nicer thock sound with SA key caps.

What’s next?

I have two GMK keycap sets shipping in Oct and Dec, and I am extremely excited for them.

  • GMK Blue Samurai
  • GMK Mito Laser

My main motivation for getting this set is that I didn’t like how cheap the PBT keycaps that came with the CTRL keyboard feel. I swapped them out with another cheap set that I had, but it only improved the feel ever so slightly.

I have also placed an order for the Keychron K6 (hot-swap version), which should be arriving this week as well. This would be purely for experimenting with different switches, and it could possibly serve as my portable keyboard when I feel like working outside.

Down and down the rabbit hole I go.

DevOps Learning Weekly

Weekly: Microsoft Azure

Took an online introductory course (Udemy) on Microsoft Azure AZ-900 because, lo and behold, my team has chosen the Azure platform for our translation services (will write more about this next time).

As someone who has spent 99.99% of my time working on the AWS platform and Linux systems in general, Azure feels pretty foreign because most of the concepts seem to tie into Windows systems more than anything else.

  • Access control? Active Directory
  • RBAC? Active Directory
  • Networking? Virtual networks
  • Pricing? Subscriptions
  • Compliance? Almost everything under the sun

The main difference I find between AWS and Azure is this: AWS is a loose collection of services that are “grouped” through networking, while Azure is a logical collection of services grouped into “folders” of resources (resource groups).

Productivity Thoughts Weekly

Weekly: Organizing chaotic information

The title sounds grander than this really is. It was one of those work days where I felt like I didn’t get much done. I checked my calendar and there weren’t many meetings, only the one in the morning. It felt like a really busy day, but I couldn’t think of a concrete task that I had accomplished that day.

As I lay in bed, tossing and turning, unable to sleep, I figured out why I couldn’t get my tasks done for the day, and came up with a simple workflow that would solve this.

Why I wasn’t able to work on my tasks

A day in the life of a software/devops engineer is pretty chaotic. You have various pieces of information, each requiring a different context, streaming in from multiple sources throughout the day. For example, I was updating some configuration mapping on Kubernetes for our new SES SMTP relay credentials. Then I got a message asking for clarification about a story I completed yesterday, about a backend API written in Go. Then I had to join a meeting about decoupling our entire platform from an external service that much of our logic is intertwined with.

Learning Thoughts Weekly

Weekly: Building a second brain part 1

This came about because of something I discovered recently about building a second brain. The prospect of it is extremely enticing for me.

The idea is that over time you build a second brain that is like a digital collection of all the knowledge you’ve gained over your lifetime.

As someone working in the digital field, the amount of information I go through on a daily basis is pretty huge. I’ve been taking notes on a million and one things, but I realized that I’ve almost never gone back through my notes and made something out of them. That feels really wasteful: why would I even write them in the first place if I’m not going to use them? What percentage of the things I’ve written can I actually remember with my dumb human brain?

Armed with the motivation to build a digital brain that I can tap into for creating new ideas and products, I embarked on part 1 of the journey.

Finding the right tool

The “original” tool (that I know of) is Roam Research. However, it’s currently a web-only tool, and it’s a paid service at $15/month. This makes it slightly undesirable, as I would prefer something I could potentially migrate/export out of. I also wish there were a free option I could try out, to see if this second brain business is something I really want.

I checked out 8 different note-taking tools to see what works for me and compared across them.

Development Thoughts

Gitlab MR Bot

I recently wrote a simple merge request “bot” for my team. To be honest, it’s more of a glorified reminder, but hey, it works! I’ve done a slightly more technical write-up at

Summary: it’s a bot that collates all the open merge requests on our private and public Gitlab repositories that have the Review Me label but don’t yet have at least 2 reviewer labels, and sends the list to our Slack channel.
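The heart of the bot is just a filter over MR labels. Here’s a hedged sketch of that logic; the “Reviewer:” label convention, the message format, and the webhook call are my illustrative assumptions, not the actual implementation:

```python
# Sketch of the bot's core filtering logic. Label names, the
# "Reviewer:" prefix convention, and the Slack webhook call are
# assumptions for illustration, not the real bot's internals.
import json
import urllib.request

REVIEW_LABEL = "Review Me"
REVIEWER_PREFIX = "Reviewer:"   # e.g. "Reviewer: alice" (assumed convention)
REQUIRED_REVIEWERS = 2

def needs_reviewers(labels):
    """True if an MR asks for review but has fewer than 2 reviewer labels."""
    if REVIEW_LABEL not in labels:
        return False
    reviewer_count = sum(1 for label in labels if label.startswith(REVIEWER_PREFIX))
    return reviewer_count < REQUIRED_REVIEWERS

def format_reminder(mrs):
    """Build a Slack message from (title, url) pairs."""
    lines = [f"• <{url}|{title}>" for title, url in mrs]
    return "MRs waiting for reviewers:\n" + "\n".join(lines)

def post_to_slack(webhook_url, text):
    """Send the reminder text to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

In the real bot, the MR titles and labels come from the Gitlab API and the message goes out on the schedule described below.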

It’s been running since 3rd August, and in this post I just want to note down my observations from the past 20 days.

The Positives

More people are actively taking on the role of reviewing the MRs. Before the bot, our scrum master had to manually collect them and make the team aware that there were MRs that needed people to take ownership of them. For the past 3 weeks, that seems to have improved. Most of the time, when someone asks for reviews on their MR, two people take it up before the next reminder*.

*the reminders are set to run at 11am and 3pm (4 hours apart)

The team seems to be quite receptive to the bot reminding them about the open MRs, and the little easter eggs of encouragement seem to help with team morale from time to time as well.

The Negatives

Hard to quantify, but the time taken for an MR to get approved seems to have shortened, which may indicate that people are more eager to approve and may not review as thoroughly. Case in point: I approved one MR that contained relatively inefficient code, which was pointed out by my colleague when she was working on that section.

People are taking on the role of reviewing, but sometimes an MR slips under their radar and they forget to approve it even when all the issues have been resolved. I guess this is something the bot could help with, but I don’t have a clear idea of what can be done about it yet.


I’m glad that I wrote it. It didn’t take long; just some inspiration and an inconvenience prompted it. I am carefully considering adding metrics for how long an MR stays open and so on, but I feel like this might draw unwanted attention to our work, so it’s something I haven’t explored deeply yet. I think it would be pretty interesting though!

Deployment DevOps Learning Weekly

Weekly: Migration

The past week has been extremely exciting and nerve-wracking. My team has finally completed the migration from on-premise to the cloud. It’s the first time I’ve done anything like this, and I’m blessed to have someone senior to lead us through the migration period.

ps: I wrote this but forgot to post it, so this was actually 2-3 weeks ago

I’m a part of the MyCareersFutureSG team, so our users are the working population of Singapore, and we host hundreds of thousands of job postings, so there are definitely some challenges in migrating the data.

It’s the first time I’ve handled such a huge amount of data when migrating across platforms, and the validation and verification process is really scary, especially when we couldn’t get the two checksums to match. It’s also the first time I’ve done multiple Kubernetes cluster base image upgrade rollovers. There were multiple occasions where we were scared the cluster would completely crash, but it managed to survive the transition.

Let me sum up the things I’ve learnt over the migration.

  • When faced with a large amount of data, divide and conquer. Split the data into smaller subsets so that you have enough resources to compute each piece.
  • When rolling nodes, having two separate auto scaling groups allows you to test the new image before rolling every single node.
  • If you want to tweak the ASG itself, detach all the nodes first so that you have an “unmanaged” cluster; then no matter what you do to the existing ASG, your cluster will still stay up.
  • When your database tells you that the checksums don’t match, make sure that when you dump the data, it’s in the right collation or encoding format.
  • Point your error pages at a static provider like S3, because if you point them at a live resource, there’s a chance a misconfiguration will show an ugly 503 message (something that happened briefly for us).
  • Data under 100GB is somewhat reasonable to migrate over the internet these days.
  • Running checksum hashes on thousands and thousands of files is quite computationally and memory intensive; provision enough resources for it.
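The chunked hashing approach behind the last point can be sketched like this (a minimal illustration of the idea, not our actual migration scripts):

```python
# Minimal sketch: hash a file in fixed-size chunks so memory use stays
# flat no matter how large the file is. Not the actual migration tooling.
import hashlib

def file_sha256(path, chunk_size=1024 * 1024):
    """Compute a SHA-256 digest without loading the whole file into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # iter() with a sentinel reads chunk_size bytes until EOF (b"").
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Run the same function on both sides of the migration and compare digests. For database dumps, a mismatch is often an encoding or collation issue in the dump itself rather than actual data corruption.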

Overall, the migration actually went quite well and we completed it ahead of time. Of course, the testing afterwards is where we found bugs that we had never found before, because it’s the first time in years that so many eyes were on the system at the same time.

The smoothness is also thanks to the team, who carefully planned the steps required to migrate the data over, and set up streaming backups to the new infrastructure so that half of the data was already in place and we just needed to verify that the streamed data was bit-perfect.

Since it’s been a couple of weeks since this happened, I realize that I am lucky to be blessed with an opportunity like this. I’ve just caught up with my friends, and most of the time their job scopes don’t really allow them to do something that far out of scope. Which, depending on your stage of life, could be viewed as a pro or a con. I’m definitely viewing this 4-day migration effort over a public holiday weekend as a positive, because it’s something not everyone gets to experience so early in their career!

Keyboard Weekly

Weekly: Drop CTRL Keyboard

I think I missed two weeks of entries because, well… more discipline is needed when writing. However, it has been a really good two weeks because a lot of my purchases have arrived. One of the most notable is the Drop CTRL keyboard, a TKL keyboard that I’ve had my eyes on ever since it launched but couldn’t justify purchasing back then.

The version I got has a black aluminium case with Halo True switches. This is my first experience with a more “premium” switch that isn’t Cherry MX or Gateron. It is also much heavier than I’m used to, at 60g actuation force.

It felt way heavier than I’d have liked at the start, but I’ve gotten used to it over the couple of weeks I’ve used it, and I’ve really come to like how it feels. The tactile bump is much more pronounced than anything I’ve tried before, and the very high force required to bottom out means that I rarely bottom out the keys, which results in a quieter typing experience overall.

I’ve disassembled the keyboard and lubed every single one of the switches with Krytox 205g0, and also clipped, lubed, and band-aid modded the stabilizers. All in all, it feels amazing, and I never want to go back to using a keyboard that isn’t lubed like this. The unfortunate part was that when I was lubing the stabilizers, I couldn’t get my hands on a thicker grease, which would have helped with the dampening a little more. That has since been rectified.

I’m starting to build up my mechanical keyboard collection as I dive more into this hobby.

  • Krytox 205g0
  • Krytox 105
  • Superlube dielectric grease (PTFE)
  • 20 x Durock T1 switches
  • 10 x Durock Koala switches
  • Switch opener
  • Stem picker (4 prong)

I’ve also bought two custom keycap sets, and am waiting for them to ship in a couple of months.

I’m extremely excited for the GMK Mito Laser keycaps, but I think they would only arrive next year, so I gotta keep my expectations in check.

All of this has made me realize that I really enjoy this hobby, and I think I will consider getting more premium cases and boards next year. I am extremely curious about how it feels to use a keyboard with a brass or carbon fiber plate.

Learning Thoughts Weekly

Weekly: Google Analytics and building habits

Well, skipping the things that I had to do, one of the fun things I’ve been exploring is Google Analytics. I’ve heard so much about it, and we actually use it in my current team (I just haven’t really worked on this portion yet).

I went through the GA For Beginners course, and it actually gives you a nice little certificate of completion. So that’s nice. I’m bringing this up because I want to experiment with it, which means I’ve integrated it with this blog, as well as my landing page. Hopefully I can get some kind of metrics at the end of the month. Unless the visitors of my sites are all bots, which would be a little disheartening.

Development Learning Weekly

Weekly: MySQL benchmarking

I’ve been so busy with work and life that I did not have the time to explore new things. Or maybe I did, and I just forgot. Either way, the plan for the weekend is to explore Caddy as an automated way for me to deploy my portfolio/landing page; either that, or cheating and using Netlify instead. The current flow relies on Ansible to deploy the page, which is a little manual in a sense. Hoping to change that.

We started benchmarking our DB because one of our search queries has been slowing down significantly lately, and it’s affecting our user experience. In order to optimize the performance, we needed a way of measuring the effect of the changes we were going to implement.
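A measurement harness for this can be as simple as timing repeated runs and reporting summary statistics. This is a rough sketch of the idea; `run_query` is a placeholder for whatever executes the real search query against MySQL, not our actual benchmarking setup:

```python
# Hedged sketch of a query benchmarking harness. run_query is a
# placeholder callable; wire it to a real MySQL client in practice.
import statistics
import time

def benchmark(run_query, runs=10, warmup=2):
    """Time a query function over several runs, discarding warm-up runs
    so that cold caches don't skew the numbers."""
    for _ in range(warmup):
        run_query()
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        run_query()
        timings.append(time.perf_counter() - start)
    return {
        "median_ms": statistics.median(timings) * 1000,
        "worst_ms": max(timings) * 1000,
    }
```

With a baseline recorded, rerunning the same harness after each index or query change tells you whether the change actually helped rather than guessing.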

Development DevOps Weekly

Weekly: building CICD pipelines

The past week has been spent trying to build a centralized Gitlab CICD repository for all services to bootstrap and standardize on.

I’m happy to announce that it has been open sourced!

What’s a centralized CI? It’s basically a template repository for CI pipelines. In this case, it’s for Gitlab because I’m familiar with it and it’s what I work with day in, day out.
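To give a sense of how a service would bootstrap off such a template repository: Gitlab supports including CI definitions from another project. The project path, ref, and file name below are made up for illustration and are not the actual CCI layout:

```yaml
# .gitlab-ci.yml in a consuming project. The project path, ref, and
# file name here are illustrative placeholders.
include:
  - project: 'my-group/centralized-ci'
    ref: 'v0.0.3'                 # pin a released version of the templates
    file: '/templates/build.yml'

# Jobs defined in the template can then be extended or overridden locally.
```

Pinning `ref` to a tag is what makes versioning matter: consumers stay on a known template version until they explicitly bump it.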

This idea started with my previous project team, but it is slowly maturing as I figure out the various cases where it might be used/useful and tweak it accordingly. What it has currently is more of an MVP and POC showing that it can be used across various projects on Gitlab. You can tell, because the versioning currently only supports patch bumps, not minor/major ones. That has to do with how my current team does versioning, but it’s at the top of my list of things to improve.

Currently there are 4 repositories relying on the CCI, 2 of which are external but still within my control. Features will be incrementally added, and I hope this could really be something that helps people reduce the time and complexity of building pipelines.