We have for years desired a self-hosted solution to unify our fragmented data collections in one centralized location. This was the major impetus behind wtfs, for example. Ideally, we wanted a system that was:
We happened upon our current solution quite by accident while reading 2600. In an article titled “5G Hotspots and Tinc”1), the author explains that after switching from traditional internet service to a T-Mobile hotspot, they lost access to their home Nextcloud server. They go on to describe how they restored access thanks to tinc. We had never heard of either application before, and the article piqued our interest. A short burst of research convinced us that this was the solution we sought.
We began the deployment with tinc since we, like the 2600 author, had no control over port forwarding at the installation location. Unfortunately, tinc's configuration parameters had changed in the year and change since the article's publication.
By far the most painful part of setting tinc up is distributing the key files. Every node on the network needs to have the public key of every other node, even if they're connecting to a central hub as in our case2).
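To make the shape of that distribution concrete, here is a rough sketch of the layout under /etc/tinc/. The netname (homenet), node names, and addresses are placeholders of ours, not values from the article, and the parameter names are those of the tinc 1.0 series we installed.
# hypothetical layout for a network named "homenet"; all names and
# addresses below are placeholders
#
# /etc/tinc/homenet/tinc.conf on a spoke node:
Name = laptop
ConnectTo = hub

# /etc/tinc/homenet/hosts/hub (one such file per node, holding its
# address, subnet, and public key):
Address = hub.example.com
Subnet = 10.0.0.1/32
# ...the node's public key block follows...

# generating a keypair appends the public half to the node's own host
# file, which then has to be copied to every other node, e.g.:
tincd -n homenet -K
scp /etc/tinc/homenet/hosts/laptop hub.example.com:/etc/tinc/homenet/hosts/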
As has been the trend with our recent server operation, we opted to install Nextcloud within Docker using the instructions at nextcloud/docker. In the initial installation, we neglected to configure a persistent volume for the Postgres instance; when we later ran docker-compose down, we lost the entirety of our uploads, plugins, and settings to that point.
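For the record, here is a minimal sketch of the volume declarations that would have avoided this, modeled on the nextcloud/docker compose examples; the service names, image tags, and password are illustrative, not our production file.
# illustrative excerpt from a docker-compose.yml; not our exact file
version: '3'

services:
  db:
    image: postgres:13
    restart: always
    environment:
      - POSTGRES_PASSWORD=changeme    # placeholder
    volumes:
      - db:/var/lib/postgresql/data   # persists the database

  app:
    image: nextcloud
    restart: always
    volumes:
      - nextcloud:/var/www/html       # persists uploads, plugins, settings

# named volumes survive docker-compose down (so long as -v is not passed)
volumes:
  db:
  nextcloud: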
Since the server itself is located on our home LAN, we wanted to be able to access it using a local address whenever possible. Nextcloud has a config option called overwritecondaddr that tells it when to overwrite the requested URL: if the page request comes from an address matching the regex in overwritecondaddr, the server replaces the URL with overwritehost.
Unfortunately, the code which performs this check contains a bug that has inexplicably persisted since at least 2017. A relatively minor change to lib/private/AppFramework/Http/Request.php is all that is needed to fix the issue3).
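For reference, the corresponding entries in Nextcloud's config/config.php look roughly like this; the hostname, port, and proxy address are placeholders chosen to be consistent with the NGINX definition below.
// illustrative excerpt from config/config.php
'overwritehost' => 'www.example.com:8080',
'overwriteprotocol' => 'https',
// only rewrite URLs when the request arrives from the reverse proxy
'overwritecondaddr' => '^10\.0\.0\.1$',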
The Nextcloud admin guide's page on NGINX usage4) looks considerably more daunting than the actual process ended up being. The relevant section of our NGINX server definition is included below. This has proven sufficient for even the CalDAV/CardDAV syncing. We also increased client_max_body_size to 100M for faster uploads.
# replace www.example.com throughout with the actual domain
server {
    server_name www.example.com;

    access_log /var/log/nginx/nextcloud.log;
    error_log /var/log/nginx/nextcloud.error.log debug;

    client_max_body_size 100M;

    # redirect http connections to https (497 is the non-standard code
    # NGINX raises when plain HTTP arrives on an SSL-enabled port)
    error_page 497 =301 https://www.example.com:8080$request_uri;

    location / {
        proxy_set_header Host $host;
        # fixed address rather than the usual $remote_addr
        # (see the overwritecondaddr discussion above)
        proxy_set_header X-Real-IP 10.0.0.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://10.0.0.4:8080;
        proxy_read_timeout 90;
    }

    listen [::]:8080 ssl ipv6only=on;
    listen 8080 ssl;
    # ssl_certificate and ssl_certificate_key must also be set here;
    # paths omitted since they depend on the certificate setup
}
Once properly configured, the server has been very smooth in operation. The user experience is exactly what one would expect from a cloud storage service. Media streaming worked right out of the box, and calendar syncing to Android5) was easy enough to set up. Navigation (including local access) does come with noticeable lag, most likely due to hardware limitations. On the whole, however, we are very satisfied with the final product.
As of this writing, our Docker data is all stored on a single external hard drive mounted at /mnt/nextcloud. We believe we will be able to add additional storage space by union mounting however many disks we need over that same directory using something like MergerFS, but more research is required.
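As a sketch of what we have in mind (untested on our setup; the member disk paths are hypothetical):
# pool two member disks over the existing mount point (hypothetical)
mergerfs /mnt/disk1:/mnt/disk2 /mnt/nextcloud -o defaults,allow_other

# roughly equivalent /etc/fstab entry
/mnt/disk1:/mnt/disk2  /mnt/nextcloud  fuse.mergerfs  defaults,allow_other  0 0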