I had wanted to make this work for a very long time, but some parts were always missing, so I could not get the full process running. Finally, the wait is over. The following paragraphs describe a way to build Notes/Domino apps automatically on a Jenkins server, allowing parallel builds and all "normal" continuous-integration behavior, without having to think too much about Domino specifics.
The Problem
Until now, I was running my automated builds of Domino apps using Jenkins in two ways:
The official headless-designer way, where you need to pass special commands to Domino Designer and hope for the best, as Designer sometimes gets stuck. I have this wrapped inside a Jenkins pipeline, so I have some control and can, for example, avoid parallel builds by using locks on Jenkins, but still, sometimes it just dies. Some of my headless builds run for more than 30 minutes, so it's really hard to spot an issue quickly without connecting to the machine and looking at what's happening in the UI (if anything is happening at all).
The NSF ODP tooling way, using a Domino server running on the same machine. This works well even for more complex builds that involve custom extension libraries, but again, I would not dare to run multiple builds at the same time.
Everything lives on one old Windows 8 VM that I created years ago, and honestly, I'm afraid to touch it. This was quite OK in the old 9.0.1 days, when there were not many changes in the product, but since HCL started to update the JVM and other parts of the platform, it has become necessary to use the correct version during the builds. My customers use everything from 9.0.1FP6 to 11.0.1FP2, so I need to be able to use different versions for different apps, and I don't want to keep more VMs around.
So I basically wanted:
- Target specific Domino versions during the build
- Run builds in parallel with no worries
- Use a pre-created set of IDs to sign/build the databases
- Keep the existing processes for existing apps, some of which I will probably never need to build again
The Solution
Domino is more Docker/container friendly now, and so is Jenkins, so making all three work together sounds like the best approach to me. A high-level summary:
- Keep my current Jenkins VM as is, so I can continue to use it for existing builds
- Add a remote Jenkins agent (or more, if I want) that will take care of the new builds
- Create a Domino Docker builder image that contains Domino, Maven, the NSF-ODP tooling, and a Domino update site
Docker images
Domino builder image
The automated setup in the available images did not fit my needs:
- After the one-touch setup, the server starts automatically, which I don't need/want (and as far as I know, there is no way to stop it if you run it during the image build)
- It can't use pre-created IDs unless you set it up as an additional server
Creating the domino-docker base image
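I won't duplicate the project's documentation here; roughly, it boils down to cloning the repository and running its build script against the installer packages you provide (check the project's README for the exact steps and resulting tags):

    git clone https://github.com/IBM/domino-docker.git
    cd domino-docker
    # builds the base Domino image from the installers you supply
    ./build.sh domino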
Own image with custom silent setup
Then we can begin with our own Dockerfile using it as the base image:
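A sketch, assuming the previous step produced an image tagged hclcom/domino:11.0.1 (use whatever tag your build actually created):

    FROM hclcom/domino:11.0.1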
As I've mentioned before, I want to use existing IDs. Even this image can't do that, as the pre-defined pds file contains options to create new IDs that can't be overridden using cfgdomserver.jar. Luckily, I tested this a while ago and already had IDs and a custom pds file ready. I just need to copy them to the image.
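Something along these lines, with file names matching the silent-setup command used later (the ids folder layout is mine):

    # copy the pre-created IDs, passwords, and the customized pds file
    COPY ids/ /local/ids/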
A few words of warning here: the IDs and passwords will get stored in the image. Even if you delete them later, they will still be accessible in a hidden layer of the image. I'm fine with that, as the server will never actually be started and this image should never leave my environment. If you don't want this to happen, you can check how the domino-docker project downloads the software packages during the build.
If you want to know more about Domino silent setup, just check the official documentation https://help.hcltechsw.com/domino/11.0.1/admin/inst_usingdominosilentserversetup_t.html
Then I add my customized scripts for Domino setup and Maven entrypoint.
The script domino_docker_setuponly.sh is a trimmed-down version of the original docker_prestart.sh that does only what I need and ends with the silent setup: $LOTUS/bin/server -silent /local/ids/fullsetup.pds /local/ids/pwds.txt
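As a sketch, the core of it (minus the environment preparation borrowed from docker_prestart.sh) is just:

    #!/bin/bash
    # run the Domino silent setup and exit - unlike the original script,
    # the server itself is never started
    $LOTUS/bin/server -silent /local/ids/fullsetup.pds /local/ids/pwds.txt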
Adding Maven
The NSF-ODP tooling uses Maven to run the builds (and manage the dependencies, etc.), so we need to add it to our image. Since I started with the default domino-docker image, which is CentOS 8 based, I can just add the packages. Before doing that, I must switch to root, as the base image is configured to run as the notes user.
During testing, I discovered a problem with libnsl.so; Daniel described it here. I had to add a legacy support package to fulfill the requirement.
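The relevant Dockerfile part looks roughly like this (on CentOS 8, the libnsl package provides the legacy libnsl.so.1):

    USER root
    # maven pulls in a JDK as a dependency; libnsl is the legacy support package
    RUN yum install -y maven libnsl && yum clean all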
Domino server setup
Now it's getting more interesting. First, we do our silent server setup. We need to switch back to the notes user before that.
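A sketch, assuming the setup script was copied to /local earlier:

    USER notes
    # configure the server via silent setup; the server itself is not started
    RUN /local/domino_docker_setuponly.sh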
Domino is now configured, so we have names.nsf, server.id, etc. in the data directory.
Domino Update site
NSF-ODP tooling needs a p2 update site that is preferably extracted from the same Domino version. Luckily, Jesse has another tool for this in his toolbox - https://github.com/OpenNTF/generate-domino-update-site
We just need to configure Maven to use the OpenNTF Artifactory. The easiest way is to add a predefined Maven settings.xml. Then we can simply use Maven to run it.
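Roughly like this (check the tool's README for the exact plugin coordinates and goal; /local/UpdateSite as the destination is my choice):

    # settings.xml points Maven at the OpenNTF Artifactory (artifactory.openntf.org)
    COPY settings.xml /home/notes/.m2/settings.xml
    USER root
    RUN chown -R notes:notes /home/notes/.m2
    USER notes
    # extract the p2 update site from the local Domino installation
    RUN mvn org.openntf.p2:generate-domino-update-site:generate-domino-update-site \
        -Ddest=/local/UpdateSite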
The COPY operation always copies the files as root, so we need to change the owner in order to make the directory accessible to the notes user. It's probably not necessary to specify the src, as Domino is installed in one of the well-known paths.
Adjusting the entry-point
We need to adjust the entry point so it works well with Jenkins, which expects commands to be passed through. Without this, the Domino server itself would get started instead of our Maven build. My mvn-entrypoint.sh is again a trimmed-down version of a script from the standard image - here. I don't need to pre-populate the .m2 folder, as I'm handling that already, so for now it's just a pass-through exec.
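In other words, the script boils down to:

    #!/bin/bash
    # pass through whatever command the caller (Jenkins) wants to run
    exec "$@"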
Building
That's all. Now we can build an image that can be used for NSF-ODP builds. It takes some time, as we run yum update during the build and don't use a cached Maven repository, so the creation of the update site also downloads a lot from the internet.
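The build itself is a plain docker build; the tag is up to you:

    docker build -t domino-appbuild:11.0.1 .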
(no-cache on the screenshot is just to show the full build time).
The image could be further optimized, but as I will use it only on a local network, I don't worry too much about it.
Jenkins agent with Docker support
As I mentioned earlier, I'm running my Jenkins on a system that is not capable of running containers. Luckily, Jenkins has very flexible support for distributed builds. The agents can be physical machines, VMs, or even containers. Since we are doing a Docker-based build of our Domino apps here, we can try to use a container as our Jenkins agent too. The Jenkins agent in this setup can run anywhere; it just needs a way to call the master and register itself.
Docker in Docker
For our use case, the setup is a bit tricky: we'd run our Jenkins agent inside Docker, and this container would then try to start our Domino builder container inside. There are several resources on the internet that discuss this; most of them point to this article. The cleanest solution seems to be to actually run Docker out of Docker, by giving the container access to the host's docker.sock and running all commands directly on the host.
In theory, I could try to switch to Podman, which is daemonless, so it should be easier to run inside a container, but that may cause problems of its own, and the Jenkins/Docker setup I'm using is well documented online.
Adding Docker CLI to the image
We start with the official Jenkins inbound-agent image. Then we install Docker using the standard installation scripts. Since the official image runs as a non-root user, we need to make some security tweaks, which I'll explain below. Here is the Dockerfile:
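A sketch of mine (the docker group id 1001 is explained below):

    FROM jenkins/inbound-agent
    USER root
    # install Docker using the standard convenience script
    RUN apt-get update && apt-get install -y curl \
        && curl -fsSL https://get.docker.com | sh
    # align the docker group with the host's gid and let the jenkins user use it
    RUN groupmod -g 1001 docker && usermod -aG docker jenkins
    USER jenkins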
Running the container
When we run the container, we need to bind the docker socket /var/run/docker.sock from the host into the container, which allows the container to run docker commands on the host:

    docker run -d \
        -v /var/run/docker.sock:/var/run/docker.sock \
        jenkins-agent-docker:latest \
        -url <jenkins-master-url> <secret> <agent-name>
The problem is that normally only users in the docker group can run docker commands, and the users/groups in the container live on their own, so they have different uids and gids than on the host. To get them in sync, I create a docker group in the container with the same group id as the docker group on the host (in this case 1001). It's baked into the image, so if the host uses a different id, I either need to build a new image or adjust the id on the host.
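You can check the gid on the host like this:

    # on the docker host; prints something like docker:x:1001:
    getent group docker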
Domino app project itself
So far, we've created all the necessary images. Now we can proceed to the project itself. For testing, I'm using just a database with one XPage. It follows the NSF-ODP default convention, with the source in the odp folder. The interesting bits are in pom.xml and in the Jenkinsfile for the pipeline.
First, the pom.xml: it contains just the groupId/artifactId and uses the defaults for the rest (see the NSF-ODP tooling for all the options).
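A minimal sketch of such a pom.xml (the ids and the plugin version are placeholders; the domino-nsf packaging and plugin coordinates come from the NSF-ODP tooling documentation):

    <project xmlns="http://maven.apache.org/POM/4.0.0">
        <modelVersion>4.0.0</modelVersion>
        <groupId>com.example</groupId>
        <artifactId>docker-build-sample</artifactId>
        <version>1.0.0-SNAPSHOT</version>
        <!-- the NSF-ODP tooling provides the domino-nsf packaging -->
        <packaging>domino-nsf</packaging>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.openntf.maven</groupId>
                    <artifactId>nsfodp-maven-plugin</artifactId>
                    <version>3.0.0</version>
                    <extensions>true</extensions>
                </plugin>
            </plugins>
        </build>
    </project>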
The Jenkinsfile is more interesting.
First, we tell it which image we want to use during the build. I also specify the Docker label, which tells it to run only on agents that I've labeled as Docker-enabled. Optionally, we tell it to bind the jenkins-m2-repo volume, so we don't download all the dependencies with every build.
As the last step, we publish the created NSF, so we can download it from the Jenkins UI.
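A sketch of the whole Jenkinsfile (the image name, label, and .m2 path are from my setup and will differ in yours):

    pipeline {
        agent {
            docker {
                // builder image with the matching Domino version
                image 'registry.example.com/domino-appbuild:11.0.1'
                // run only on docker-enabled agents
                label 'docker'
                // cache the Maven repository between builds
                args '-v jenkins-m2-repo:/home/notes/.m2'
            }
        }
        stages {
            stage('Build') {
                steps {
                    sh 'mvn clean package'
                }
            }
        }
        post {
            success {
                // publish the built NSF so it can be downloaded from the Jenkins UI
                archiveArtifacts artifacts: 'target/*.nsf'
            }
        }
    }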
Nothing else is needed in the Domino projects that use this infrastructure. If I want, I can run the build from multiple branches, using different builder images with different Domino versions. Nice.
Currently, there is a small issue in the NSF-ODP tooling that may cause the build to fail when it does not find any Java source files. I had to add one empty Java class to my sample project as a workaround.
Putting this all together
We should have everything we need by now, so let's put it all together. To get an overall idea of which components are involved, check the graphics:
Yellow boxes are containers created from the images that we've prepared earlier. The involved parties are:
- git repository - where we store the Domino app project
- container registry - where we store our images, so docker hosts can get them
- Jenkins master - my original Jenkins server, orchestrating the distributed agents
- Docker host - machine hosting my containers with:
- Jenkins agent - remote Jenkins agent, connected to and managed by the master
- Domino builder - a temporary container that is used to build the app
- docker.sock - "tunnel" to allow the agent to create containers on the host
- jenkins-data - volume that is used to share Jenkins workspace with the builder container
- jenkins-m2-repo - volume that caches the Maven repository, so we don't download everything every time
Pushing images to a private registry
Since I was building the images on my local machine, I need to make them accessible to the Docker host. I could export/import them manually, but the best way is to share them using a private registry. All you need to do is assign a tag based on the layout of your registry and push it. In my case, e.g.:
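For the agent image, for example (the registry host is illustrative):

    docker tag jenkins-agent-docker:latest registry.example.com/jenkins-agent-docker:latest
    docker push registry.example.com/jenkins-agent-docker:latest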
and do the same for the builder image
Now they can be used by any machine that has access to the registry.
Configuring Jenkins master
We need to configure the Jenkins master so it expects a connection from our agent. Inside Manage Jenkins > Manage Nodes and Clouds, create a New Node.
The important parts are:
- # of executors - for now, I allow just 1
- Remote root directory - I used a folder that is then mapped to the jenkins-data volume
- Labels - I've specified the Docker label, so I can limit the builds to docker-enabled agents
- Usage - only build jobs with label expressions (as I have other builds, like headless Designer ones, that can't run on this agent)
- Launch method - Launch agent by connecting it to the master - our container needs to tell the master that it is available
- Tools location - I had to adjust Git to point to git; without this, the agent tried to execute git.exe, which probably leaked somehow from my master, which runs on Windows.
Once you save it and reopen the agent page on the master, you should find further setup instructions and, most importantly, the secret. E.g.:
Copy the secret (xxxx... in the sample) as we'll need it later.
Preparing the docker host
My Docker host is a plain CentOS 8 VM, running only Docker. Since I need to use images that I prepared elsewhere, I need to connect the host to my private registry. First, we need to log in:
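That's a plain docker login against the private registry (host name again illustrative):

    docker login registry.example.com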
After providing the credentials, I got a warning that the credentials are stored in ~/.docker/config.json. This is important to know, as our Jenkins agent may need to pull images too, so it needs access to this file as well. The easiest way was to give everyone read access to that file and bind it when starting the agent.
The second customization was changing the docker group id. I mentioned that I created the image with gid 1001 (which originates from my WSL2 environment). On this machine, the gid was 982, so I decided to change it.
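A sketch (run as root); restarting the Docker daemon recreates the socket with the new group:

    # align the host's docker group with the gid baked into the agent image
    groupmod -g 1001 docker
    systemctl restart docker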
Running the agent
We should have everything prepared to run our Jenkins agent and start the build. First, we start the agent with a command along these lines (host paths and the image name depend on your setup):
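    # host path of config.json depends on the user who ran docker login
    docker run -d \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v /root/.docker/config.json:/home/jenkins/.docker/config.json:ro \
        -v jenkins-data:/home/jenkins/agent \
        registry.example.com/jenkins-agent-docker:latest \
        -url <jenkins-master-url> <secret> <agent-name>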
As mentioned above, we bind three "volumes":
- docker.sock - to give the container access to the docker daemon
- config.json - to give it access to the credentials for our registry, in case it needs to download an image
- jenkins-data - volume to share the data easily between containers
The container then needs the base URL of the Jenkins master, the secret that we saved after preparing the configuration for this agent, and the agent's name.
Running the build
Our infrastructure is ready, so we just need to configure and execute our build. First, we create a new Pipeline project and use Pipeline script from SCM. Then we specify access to our git repository and the branch that we want to build.
(the very first build will run longer; the screenshot shows a run with a pre-populated Maven cache and images already available on the host)
You can review the Console output to see the details. The interesting parts are the commands that create the builder container and execute the build.
Further details
If anyone wants to start with a fresh environment and build a similar infrastructure, it may help to know what I'm actually running while building this:
- Main work machine - Windows 10
  - WSL2, Docker Desktop, Ubuntu-20.04 subsystem for building the images
  - VMware Workstation with Windows-based VMs for Domino Designer
- ESXi lab machine
  - Jenkins master - Windows 8 VM with headless Designer and a local Domino server
  - Docker host - CentOS 8 VM with Docker
- Synology NAS - with Docker
  - GitLab container - as both Git server and image registry
Future
I plan to start using this setup for all new Domino-based builds. I hope that the NSF-ODP tooling will not cause any problems for apps that worked well with the headless designer. I also hope the V12 one-touch setup will allow me to switch to the official image, but that's not so important, as I'm not really running the Domino server at all, and I doubt that HCL would support issues in my build environment anyway.
Conclusion
Repositories
- builder image - https://github.com/mpradny/docker-appbuild
- Jenkins agent - https://github.com/mpradny/jenkins-agent-docker
- database sample - https://github.com/mpradny/docker-build-sample
References
- https://github.com/IBM/domino-docker
- https://github.com/OpenNTF/org.openntf.nsfodp
- https://github.com/OpenNTF/generate-domino-update-site
- https://www.jenkins.io/doc/tutorials/build-a-java-app-with-maven/
- https://itnext.io/docker-inside-docker-for-jenkins-d906b7b5f527
- https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
- https://help.hcltechsw.com/domino/11.0.1/admin/inst_usingdominosilentserversetup_t.html
- https://github.com/jenkinsci/docker-inbound-agent