
Setting up WebDAV

In my pursuit of building a second brain, I hit a blocker fairly early on: Obsidian doesn’t currently have a mobile application. This makes the experience rather disjointed, as I can’t build on my notes when I’m not sitting at a computer, and I can’t refer to them when I need to. To remedy this, I searched high and low for an application that supports [[url handling]] and finally stumbled upon 1Writer as a “good enough” solution as I was finding more uses for my first iPad.

In order to sync my notes across multiple devices, I either need to pay $10/month for Dropbox, pay for some other service, or find a self-hosted option.

(Pretty pissed that Dropbox decided to limit the number of devices that can sync on its free plan; otherwise I wouldn’t have had to spend so much time on this.)

Choices 💭

Of course, being Asian, I went with the free option of self-hosting. The protocol of choice was WebDAV, because it’s the one 1Writer supports and I’ve had some experience with it in the past.

But I was worried; I remembered that there were security and performance concerns with WebDAV as a protocol, which is why it’s not a protocol you commonly see and use nowadays. Many minutes of Googling led me to believe that the security is “good enough” as long as I follow all the best practices and limit as much attack surface as possible. Performance still sucked, but again, it’s “good enough” for my use case.

Synology owners will notice that there is already a built-in WebDAV server, but I strongly advise against using it: you have much less control over access, fewer chances of security updates, and the community recommends just using a VPN instead. I chose against a VPN mainly for convenience; I want to access my notes from every environment (e.g. a work computer). But if you don’t have such constraints, use the VPN.

Design 🏗️

I planned to host the WebDAV server on my Synology DS918+ NAS (which I haven’t written about yet…) because I wanted to utilize it more. I would not recommend this to most people, because you are running an internet-facing service on the device that stores all of your important data. For the purposes of this post, trust me, I know what I’m doing, and hopefully you’ll be convinced by the end of it.

There are a few layers of segregation that we want to consider.

  • Data
  • Network
  • Access
  • Environment/Application

Environment/Application

Docker. That’s it. A container segregates the application/environment from everything else running on the NAS and it’s the simplest, most elegant way of approaching this.

The application (Docker image) that I would be using is called SFTPGo, which actually supports many other protocols apart from SFTP/WebDAV. Thanks to Dickson (his blog) for this amazing recommendation. Setting it up is extremely easy; just follow the instructions and you’ll have it up and running.

I’m kidding.

For some reason, the application refused to parse the config file properly, so the only way for me to configure the various settings was by passing in environment variables to override the default options.
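The override convention, as I understand it from SFTPGo’s docs: every variable starts with the SFTPGO_ prefix, nested config keys are joined with double underscores, and array elements get a numeric index. For example:

```text
# sftpgo.json key             environment variable
webdavd.bindings[0].port  ->  SFTPGO_WEBDAVD__BINDINGS__0__PORT
data_provider.driver      ->  SFTPGO_DATA_PROVIDER__DRIVER
common.defender.enabled   ->  SFTPGO_COMMON__DEFENDER__ENABLED
```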

version: "3"
services:
  sftpgo:
    image: "drakkan/sftpgo:v2-alpine"
    # default user id
    user: "1026"
    restart: always
    expose:
      # HTTP
      - "8080"
      # HTTPS
      - "443"
      # WebDAV
      - "5007"
    environment:
      SFTPGO_WEBDAVD__BINDINGS__0__PORT: 5007
      # These are the settings to access your db
      SFTPGO_DATA_PROVIDER__DRIVER: "mysql"
      SFTPGO_DATA_PROVIDER__NAME: "sftpgo"
      SFTPGO_DATA_PROVIDER__HOST: "mysql"
      SFTPGO_DATA_PROVIDER__PORT: 3306
      SFTPGO_DATA_PROVIDER__USERNAME: "<SQL_USER>"
      SFTPGO_DATA_PROVIDER__PASSWORD: "<SQL_PASS>"
      # Brute-force protection; note the double underscore between
      # "common" and "defender", since they are separate config sections
      SFTPGO_COMMON__DEFENDER__ENABLED: "true"
      SFTPGO_COMMON__DEFENDER__BAN_TIME: 15
      SFTPGO_COMMON__DEFENDER__BAN_TIME_INCREMENT: 100
      SFTPGO_COMMON__DEFENDER__THRESHOLD: 5
      SFTPGO_COMMON__DEFENDER__OBSERVATION_TIME: 15
    volumes:
      - ./:/srv/sftpgo
  mysql:
    image: mysql:latest
    restart: always
    environment:
      MYSQL_DATABASE: "sftpgo"
      MYSQL_USER: "<SQL_USER>"
      MYSQL_PASSWORD: "<SQL_PASS>"
      MYSQL_ROOT_PASSWORD: "<SQL_ROOT_PASS>"
    volumes:
      - ./database:/var/lib/mysql
networks:
  default:
    external:
      name: nginx-proxy-manager_default

Data

We map the data directory that SFTPGo is going to use from the host into the container, because we want the data to persist even if the container is destroyed.

Since the data is isolated in its own folder and we are not mapping any other NAS folders into the container, the data is effectively segregated from the rest of the NAS.

Access

Access is governed by SFTPGo’s user management system, which means we have a dedicated user purely for accessing WebDAV, used only by Obsidian. In the worst-case scenario where that user’s credentials are leaked, the most damage an attacker could do is delete all the Obsidian files (a disaster mitigated by periodic snapshots).
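As a sketch, a locked-down user could look something like this (field names from SFTPGo’s user model as I remember it; the username, home directory, and exact permission set here are illustrative assumptions, not my real config):

```json
{
  "username": "obsidian-dav",
  "status": 1,
  "home_dir": "/srv/sftpgo/data/obsidian-dav",
  "permissions": {
    "/": ["list", "download", "upload", "overwrite", "delete"]
  },
  "filters": {
    "denied_protocols": ["SSH", "FTP"]
  }
}
```

The idea is that even with valid credentials, this user can only touch its own home directory over WebDAV and HTTP, never the rest of the NAS.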

Network

There are 2 peculiar sections in my docker-compose.yml file that help with network security/segregation.

expose:
  # HTTP
  - '8080'
  # HTTPS
  - '443'
  # WebDav
  - '5007'

Instead of using ports to map from container to host, we use expose. This means that no hosts apart from containers within the same internal network are able to reach SFTPGo. So what’s the point, and how do you access it in this case?
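For contrast, this is the more common alternative that we are deliberately avoiding, which would publish the WebDAV port on the NAS itself and let anything that can reach the host connect directly:

```yaml
# NOT what we want: binds container port 5007 to the host
ports:
  - "5007:5007"
```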

networks:
  default:
    external:
      name: nginx-proxy-manager_default

We don’t want any traffic to reach our SFTPGo server without first going through the nginx proxy, so that the SSL connection is properly terminated.

In this snippet, we attach the service to the nginx-proxy-manager_default network, which allows other containers in that network to reach sftpgo on its exposed ports.

Unsurprisingly, the magic service running in this network is Nginx Proxy Manager. This is a typical nginx reverse proxy, but it includes a nice UI so you don’t have to waste your youth figuring out the syntax of nginx.conf. The best part is that it includes a really easy wizard for getting Let’s Encrypt certificates and helps you auto-renew them as well!
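Under the hood, the proxy host entry boils down to something like the following nginx server block (a hand-written sketch, not NPM’s exact generated config; the hostname and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name dav.example.com;  # placeholder domain

    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    location / {
        # TLS terminates here; plain HTTP onward to the
        # container's exposed WebDAV port on the Docker network
        proxy_pass http://sftpgo:5007;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```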

However, since I’m using Cloudflare’s TLS certificate, I had to upload the custom certificates so that I can use Cloudflare’s Full (Strict) TLS mode and achieve full end-to-end encryption for the traffic.

Putting it all together, this is what the network diagram looks like.

Experience 🧪

Once everything was up, it was really easy to connect to it and start using it. It’s just an https address that prompts you for a username/password, and it mounts like any other normal directory on your device.

Performance is rather weak, as expected. While throughput is passable, uploading many files at once is painfully slow, because each file opens a new connection that has to go through the whole handshake process. For reference, uploading about 100 small text files took at least 10 minutes. That said, this is not an issue, because my human brain can only write in one file at any moment.
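If you ever do need to push a big batch of files, reusing a single HTTP connection for all the PUTs amortizes the handshake cost instead of paying it per file. A minimal stdlib sketch (the host, base path, and credentials are placeholders, not my real setup):

```python
import http.client
from base64 import b64encode
from pathlib import PurePosixPath
from urllib.parse import quote


def dav_path(base: str, relpath: str) -> str:
    """Build a URL-encoded destination path under the WebDAV base collection."""
    parts = [quote(part) for part in PurePosixPath(relpath).parts]
    return base.rstrip("/") + "/" + "/".join(parts)


def upload_all(host: str, base: str, files: dict, user: str, password: str) -> None:
    """PUT every file over ONE HTTPS connection instead of reconnecting per file."""
    auth = b64encode(f"{user}:{password}".encode()).decode()
    conn = http.client.HTTPSConnection(host)  # one TLS handshake for the whole batch
    try:
        for relpath, body in files.items():
            conn.request("PUT", dav_path(base, relpath), body=body,
                         headers={"Authorization": f"Basic {auth}"})
            resp = conn.getresponse()
            resp.read()  # drain the response so the connection can be reused
            if resp.status not in (200, 201, 204):
                raise RuntimeError(f"PUT {relpath} failed: {resp.status}")
    finally:
        conn.close()


# Example (placeholder server and credentials):
# upload_all("dav.example.com", "/dav",
#            {"notes/inbox.md": b"# inbox\n"}, "obsidian-dav", "secret")
```

Whether your client actually does this is up to the client, of course; 1Writer’s sync behavior is out of my hands.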

With that, I have successfully self-hosted my Obsidian notes in a way that works on mobile devices. Were the hours I spent on this worth saving the $10/month? You bet they were. 😤
