After playing a bit with HCL Volt locally on my machine, I've decided it's time to make Volt easily accessible from my other devices. Testing a mobile UI in particular works much better when you can try it directly on different devices.
Luckily, I bought a Docker-enabled Synology NAS earlier this year and I'm already running a few services there in a similar way. Now I just needed to make sure that I can run HCL Volt there too. I'll continue to use the domino-docker project because it allows me to create a pre-configured Volt environment. The Synology Docker environment is a bit limited and its configuration doesn't expose all the options you can use directly from the Docker command line, but you can usually find a workaround if you need something specific.
My main goals were:
- share the Docker image in my network
- have an HCL Volt server with a publicly trusted certificate
- have it accessible from both inside and outside of my local network
- find a process that would allow me to easily test V12 pre-release Docker images
Sharing the Docker image
In my previous post, I've shown how to build the HCL Volt Docker image. If you work on multiple machines, you'll have to figure out a way to share the image. One option is to export the image the same way HCL does for the official images. That works well when you want to transfer the image to some unknown infrastructure, but if you plan to do more experiments, you may want to host a private Docker registry instead.
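If you go the export route, a plain docker save / docker load round trip is enough (the archive file name below is just an example):

# on the build machine: export the image to a tar archive
docker save -o volt-image.tar hclcom/volt

# on the target machine: load it back into the local Docker host
docker load -i volt-image.tar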
If you enable Docker on Synology, you are just adding the Docker host capability to it, not the ability to share images through a registry, so you need to install one separately. When I first installed Docker on Synology, I was a bit confused by the Registry tab in the configuration; that tab only lets you search in configured external registries.
There are nice tutorials online on how to create a Docker Registry on Synology. I've deployed Docker Registry 2.0 (using Docker, of course) to my NAS and added the joxit/docker-registry-ui web UI for easier management. I've installed them side by side, so docker-registry-ui just serves the static content. It didn't work when I blindly went for the latest tag; I had to use the image's static tag. A rough sketch of both containers follows the tutorial links below.
Tutorials you can follow:
- https://www.naraeon.net/en/host-docker-registry-in-synology-the-working-way/
- https://blog.scottchayaa.com/post/2018/08/07/create-docker-registry-services-and-web-ui-on-synology/
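For reference, this is roughly how the two containers can be started from the command line; the Synology UI exposes the same options. The host paths and ports are my own choices, and the UI container's settings (registry URL, title, ...) should be taken from the joxit/docker-registry-ui README rather than from this sketch:

# the registry itself, with its storage bound to a folder on the NAS
docker run -d --restart=always --name registry \
  -p 5000:5000 \
  -v /volume1/docker/registry:/var/lib/registry \
  registry:2

# the web UI, installed side by side (note the static tag); its registry URL and
# other settings come from environment variables documented in the joxit README
docker run -d --restart=always --name registry-ui \
  -p 5080:80 \
  joxit/docker-registry-ui:static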
With this infrastructure in place, I can push the image to the registry.
If you've configured authentication for your registry (which you should), you need to log in first using the docker login command, e.g.:
docker login registry.pris.to
Once you sign in, the credentials are permanently stored in the operating system credentials store (this can be configured in .docker/config.json).
Then you just need to tag the image with the registry prefix:
docker tag hclcom/volt registry.pris.to/hclcom/volt
and push it
docker push registry.pris.to/hclcom/volt
If you have a web UI, you can check the result there directly (I already had my image in the registry, so it wasn't modified this time).
A few words of warning here ... If you keep experimenting with the image build process and keep pushing updated images to the registry over and over, keep in mind that the images are not overwritten; each push adds a new version. This makes sense, because the registry doesn't know where an image is used and whether the "old" version isn't still needed, e.g. by some other image. You need to delete the unwanted manifests manually and then run the registry's garbage collector to get rid of the orphaned blobs.
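A minimal cleanup sketch, assuming the registry container is named registry, deletes are enabled (REGISTRY_STORAGE_DELETE_ENABLED=true) and the credentials and repository names below are placeholders:

# find the digest of the tag you want to remove
curl -sI -u user:password \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://registry.pris.to/v2/hclcom/volt/manifests/latest | grep -i docker-content-digest

# delete the manifest by its digest (the sha256 value from the previous command)
curl -X DELETE -u user:password \
  https://registry.pris.to/v2/hclcom/volt/manifests/sha256:<digest>

# run the garbage collector inside the registry container to free the blobs
# (binary and config paths are the defaults of the registry:2 image)
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml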
With your image in the registry, you can use it from different machines and, of course, from the Synology too. You must add the registry URL and mark it as active.
Now you can search for the Volt image and use it.
Configuring HCL Volt container
My desired setup is a bit more complex than the default behavior with the automatic generation of a self-signed certificate. I use the Synology built-in reverse proxy and I've configured it to use a wildcard Let's Encrypt certificate. It currently can't be renewed automatically because it needs a DNS record change to verify the domain ownership, but it's the single place where I have this certificate, so I can live with doing this manually every 3 months (and if not, I can probably script the DNS record change later). You can find the procedure here: https://vdr.one/how-to-create-a-lets-encrypt-wildcard-certificate-on-a-synology-nas/
I'm running just plain HTTP on the Domino server and the reverse proxy takes care of HTTPS. This means I can't enable the automatic HTTPS redirect that is defined in the standard config.json, so I need to supply my own config.
Preparing the data directory and config directory
Synology Docker doesn't work with native Docker volumes, so you normally just prepare a directory on your Synology volume and bind it to the container. In my case, I also need a special configuration file, so I have 2 directories:
- /volume1/docker/volt/config
- /volume1/docker/volt/data
When the data directory is bound to the container, it keeps the Linux permissions. Domino in the container runs as the non-root user notes (uid 1000) and may not be able to create the required subfolders/files if the owner is different. The easiest way around the problem for me was to SSH to my NAS and just change the ownership to uid 1000. It doesn't matter that the uid is not used by the NAS itself, unless you also want to manage the permissions from the Synology UI.
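In practice this was just two commands over SSH (the paths are the ones from this setup; uid 1000 is the notes user inside the container):

sudo mkdir -p /volume1/docker/volt/config /volume1/docker/volt/data
sudo chown -R 1000:1000 /volume1/docker/volt/data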
Alternatively, you can configure the uid of the notes user in the container using the DominoUserID environment variable.
Adjusting the config.json
I need to disable the HTTPS that is enabled in the provided config.json, but I couldn't simply skip the file, because it contains other options that are required for Volt, e.g. session authentication. I've deleted most of the server document configuration and kept just the options Volt needs, such as the session authentication settings.
Mapping the volumes
We must tell Docker where our configuration file and data folders are.
I had to move the config.json outside of the data directory, because it kept interfering with the folder permissions; in this case, it created the subfolders as root and the notes user could not access them anymore.
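Expressed as a plain docker run command, the mappings would look roughly like the sketch below. The container-side data path /local/notesdata follows the domino-docker image layout as far as I know, while /local/volt-config is just the mount point I use here for the config directory; the host port, container name and env file are placeholders that reappear in the next sections:

docker run -d --name volt \
  -p 8080:80 \
  -v /volume1/docker/volt/data:/local/notesdata \
  -v /volume1/docker/volt/config:/local/volt-config \
  --env-file volt.env \
  registry.pris.to/hclcom/volt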
Configuring the environment variables
Environment variables are key to the container configuration. Most of them stayed as in the default sample, but there are some significant changes (a sample env file follows the list):
- NoSSL - it's my custom flag to skip the certificate generation. You can just ignore it when using the standard build.
- ServerName - must be the same as the name of the container. We are not able to pass the hostname using any Synology options, but the internal DNS will allow the server to find itself.
- ConfigFile - path to my modified config.json with HTTPS disabled.
- DOMINO_VOLT_URL - the external base URL for Volt; it must contain /volt-apps. Volt uses this internally to generate links to resources.
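As a minimal sketch, the variables discussed above could live in an env file like this one (all values are placeholders, and everything not listed stays as in the domino-docker sample):

# volt.env
NoSSL=1
ServerName=volt
ConfigFile=/local/volt-config/config.json
DOMINO_VOLT_URL=https://volt.example.com/volt-apps
# ...plus the remaining variables from the domino-docker sample
# (organization, admin account, passwords, ...) left unchanged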
Configuring the ports and https
Only port 80 is needed if all you want is web UI access.
Then you need to configure the reverse proxy in the Synology Control Panel.
And you just need to make sure the correct certificate is used.
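With the rule in place, a couple of curl checks can confirm the chain; the hostname and port below are the placeholders used earlier:

# Domino answering plain HTTP directly on the mapped port
curl -I http://<nas-ip>:8080/

# the same server through the reverse proxy; this succeeds without -k
# only if the Let's Encrypt certificate is served correctly
curl -I https://volt.example.com/volt-apps/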
Now you should be ready to run your container.
Final words
The Synology Docker is a nice toy for home/test usage. The UI is limited, so I would not recommend it for production, but it works well for me so far. The described Volt setup is really simple, basically just to show that it can be done. There are still some issues I want to check:
- properly work with hostnames. With the current setup, where the container name = server name, it would not work nicely when the server is connected to other servers.
- shutdown timeout - the Synology UI doesn't allow me to specify the stop timeout, and Domino needs a bit more time than the default 10 seconds (a workaround is sketched below).
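As a workaround for the second point, the container can be stopped over SSH instead of from the UI, with an explicit timeout (90 seconds is an arbitrary value; volt is the placeholder container name used earlier):

docker stop -t 90 volt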
Many thanks again to Daniel Nashed and Thomas Hampel who are maintaining the domino-docker project that is capable of building these nice Volt/Domino Docker images.