
HCL Volt on Synology NAS

After playing a bit with HCL Volt locally on my machine, I've decided it's time to make Volt easily accessible from my other devices. Testing a mobile UI, in particular, always works better when you can try it directly on different devices.
Luckily, I bought a Docker-enabled Synology NAS earlier this year and I'm already running a few services there in a similar way. Now I just needed to make sure I could run HCL Volt there too. I'll continue to use the domino-docker project because it allows me to create a pre-configured Volt environment. The Synology Docker environment is a bit limited, as its configuration doesn't expose all the options available from the Docker command line, but you can usually find a workaround if you need something specific.

My main goals were:

  • share the Docker image in my network
  • have an HCL Volt server with a publicly trusted certificate
  • have it accessible from both inside and outside of my local network
  • find a process that would allow me to easily test V12 pre-release Docker images

Sharing the Docker image

In my previous post, I showed how to build the HCL Volt Docker image. If you work on multiple machines, you'll have to figure out how to share the image. One option is to export the image, the same way HCL does for the official images. That's fine when you want to transfer the image to some unknown infrastructure, but if you want to do more experiments, you may want to host a private Docker registry.
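The export route is just two commands (the archive name is an example):

docker save hclcom/volt -o volt.tar
# copy volt.tar to the target machine, then:
docker load -i volt.tar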

Enabling Docker on Synology only adds the Docker host capability; it doesn't give you the ability to share images through a registry, so you need to install one separately. When I first installed Docker on Synology, I was a bit confused by the Registry tab in the configuration: that tab only allows you to search in configured external registries.

There are nice tutorials online on how to create a Docker registry on Synology. I've deployed Docker Registry 2.0 to my NAS (using Docker, of course) and I've added the joxit/docker-registry-ui web UI for easier management. I've installed them side by side, so docker-registry-ui just serves the static content. It didn't work when I blindly went for the latest tag; I had to use the static tag of the image.
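The two containers boil down to something like this; the ports, the storage path, and the REGISTRY_URL variable are my assumptions based on the image docs, so double-check them against the tutorials:

docker run -d --name registry -p 5000:5000 \
  -v /volume1/docker/registry:/var/lib/registry \
  registry:2

docker run -d --name registry-ui -p 8080:80 \
  -e REGISTRY_URL=https://registry.pris.to \
  joxit/docker-registry-ui:static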

Tutorials you can follow:

If you've configured authentication for your registry (which you should), you need to log in first using the docker login command, e.g.:

docker login registry.pris.to

Once you sign in, the credentials are stored permanently in the operating system's credential store (this can be configured in .docker/config.json).

Then you just need to tag the image with the registry prefix:

docker tag hclcom/volt registry.pris.to/hclcom/volt

and push it:

docker push registry.pris.to/hclcom/volt

If you have a web UI, you can check the result there directly (I already had my image in the registry, so it wasn't modified this time).


A few words of warning here... If you keep experimenting with the image build process and keep pushing updated images to the registry over and over, keep in mind that the images are not overwritten; each push is added as a new version. That makes sense, as the registry doesn't know where an image is used and whether the "old" version is still needed, e.g. by some other image. You need to delete the old versions manually and then run the registry's garbage collector to get rid of the orphaned blobs.
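The cleanup looks roughly like this, assuming the registry container is named registry and was started with deletion enabled (REGISTRY_STORAGE_DELETE_ENABLED=true); the manifest digest is a placeholder you get from the UI or the API:

# delete a manifest by digest
curl -u user:password -X DELETE \
  https://registry.pris.to/v2/hclcom/volt/manifests/sha256:<digest>

# then reclaim the space inside the registry container
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml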

With your image in the registry, you can use it from different machines, and of course from the Synology too. You must add the registry URL and mark it as active.

Now you can search for the Volt image and use it.

Configuring HCL Volt container

My desired setup is a bit more complex than the default behavior with the automatic generation of a self-signed certificate. I use the Synology built-in reverse proxy and I've configured it to use a wildcard Let's Encrypt certificate. It currently can't be renewed automatically because the renewal requires a DNS record change to verify domain ownership, but this is the single place where I keep the certificate, so I can live with doing this manually every 3 months (and if not, I can probably script the DNS record change later). You can find the procedure here: https://vdr.one/how-to-create-a-lets-encrypt-wildcard-certificate-on-a-synology-nas/
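The underlying flow is a standard DNS-01 challenge. For illustration, with certbot (not what the Synology tooling uses, and with my domain as the example) it would be:

certbot certonly --manual --preferred-challenges dns -d '*.pris.to' -d pris.to
# certbot then asks you to create a TXT record at _acme-challenge.pris.to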

I'm running just plain HTTP on the Domino server and the reverse proxy takes care of HTTPS. This means I can't enable the automatic HTTPS redirect that is defined in the standard config.json, so I need to supply my own config.

Preparing the data directory and config directory

Synology Docker doesn't work with native Docker volumes, so you normally just prepare a directory on your Synology volume and bind it into the container. In my case, I also need a special configuration file, so I have two directories:
  • /volume1/docker/volt/config
  • /volume1/docker/volt/data
When the data directory is bound into the container, it keeps the Linux permissions. Domino in the container runs as the non-root user notes (uid 1000) and may not be able to create the required subfolders/files if the owner is different. The easiest way around the problem for me was to SSH into my NAS and change the ownership to uid 1000. It doesn't matter that the uid is not used by the NAS, unless you also want to manage the permissions from the Synology UI.
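Over SSH, that's just:

mkdir -p /volume1/docker/volt/config /volume1/docker/volt/data
sudo chown -R 1000:1000 /volume1/docker/volt/data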

Alternatively, you can configure the uid of the notes user in the container using the DominoUserID environment variable.

Adjusting the config.json

I need to disable the HTTPS that is enabled in the provided config.json, but I couldn't simply skip the file, as there are other options in it that Volt requires, e.g. session authentication. I've deleted most of the server document configuration and kept just the essentials, sketched below.
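As a rough sketch of the shape only: the structure follows the appConfiguration format used by the domino-docker config.json as I understand it, and the two item names (HTTP port mode and session authentication) are illustrative placeholders, so take the real ones from the original file:

{
  "appConfiguration": {
    "databases": [
      {
        "filePath": "names.nsf",
        "action": "update",
        "documents": [
          {
            "action": "update",
            "findDocument": { "Type": "Server" },
            "items": {
              "HTTP_NormalMode": "1",
              "HTTP_EnableSessionAuth": "1"
            }
          }
        ]
      }
    ]
  }
}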

Mapping the volumes

We must tell Docker where our configuration file and data folders are. 

I had to move the config.json outside of the data directory because it kept interfering with the folder permissions: in that setup, the subfolders were created as root and the notes user could not access them anymore.
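In the Synology volume settings this means two mappings (the /local/notesdata path is the domino-docker default; the config mount path is my own choice and just has to match the ConfigFile variable below):

/volume1/docker/volt/data    ->  /local/notesdata
/volume1/docker/volt/config  ->  /local/config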

Configuring the environment variables

Environment variables are key to the container configuration. Most of them stayed as in the default sample, but there are some significant changes (example values follow the list):
  • NoSSL - my custom flag to skip the certificate generation. You can just ignore it when using the standard build.
  • ServerName - must be the same as the name of the container. We can't pass the hostname using any Synology option, but the internal DNS will allow the server to find itself.
  • ConfigFile - the path to my modified config.json without HTTPS enablement.
  • DOMINO_VOLT_URL - the external base URL for Volt; it must contain /volt-apps. Volt uses this internally to generate links to resources.
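With the example hostname and the paths used above (remember that NoSSL exists only in my custom build):

NoSSL=1
ServerName=volt
ConfigFile=/local/config/config.json
DOMINO_VOLT_URL=https://volt.pris.to/volt-apps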


Configuring the ports and HTTPS

Only port 80 needs to be published if all you want is web UI access.
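In the Synology port settings that's a single mapping; the host port is arbitrary, e.g.:

8080 (local port)  ->  80 (container port)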



Then you need to configure the reverse proxy in the Synology Control Panel.
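The rule itself is a single mapping; with the example hostname and port used above:

Source:       HTTPS  volt.pris.to  443
Destination:  HTTP   localhost     8080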

And you just need to make sure the correct certificate is used.


Now you should be ready to run your container.

Final words

Synology Docker is a nice toy for home/test usage. The UI is limited, so I would not recommend it for production, but it works well for me so far. The described Volt setup is really simple, basically just to show that it can be done. There are still some issues I want to check:
  • properly working with hostnames. With the current setup, where the container name = server name, things would not work nicely once the server is connected to other servers.
  • shutdown timeout - the Synology UI doesn't allow me to specify the timeout, and Domino needs a bit more time than the default 10 seconds.
Many thanks again to Daniel Nashed and Thomas Hampel, who maintain the domino-docker project that is capable of building these nice Volt/Domino Docker images.


