Hermes WLS (Warehouse and Logistics Service) functions as middleware between NLN's (National Library of Norway) catalogues and storage systems. The goal of the service is to unite all the storage systems and catalogues used at NLN behind a common interface.
The benefits of this approach are:
- Decouples the storage systems and catalogues, which makes it easier to change systems if needed.
- Makes it easier for end users to access material stored in different systems.
- Relieves storage systems from knowing which catalogue to inform about changes, as the service handles this.
More features and benefits will be added as the service is developed.
- Hermes the Warehouse & Logistics Service
- Technologies
- Running the Application
- Usage
- Dependencies
- Development
- Configuration
- Deployment
- Contact
- License
Hermes WLS uses the following technologies:
- Eclipse Temurin for the Java runtime.
- Kotlin for the application code.
- Maven for project management.
- Spring Boot for the application framework.
- MongoDB for data storage.
- Kafka for event streaming.
- Keycloak for client authentication and authorization.
- Swagger for API documentation.
- Docker for containerization.
- Harbor for container registry.
- Kubernetes for deployment and orchestration.
- Argo CD for application deployment from GitHub and management.
- Vault for secrets management.
- GitHub Actions for CI/CD.
As of now, the service is in the early stages of development, so major changes may occur. Check the pom.xml file for the most up-to-date list of dependencies, the Dockerfile for the current Docker image setup, and the GitHub Actions workflows for the current CI/CD setup.
The Warehouse Logistics Service is a Spring Boot application that can be run locally or in a container.
It is recommended to use Docker or containerd (via nerdctl) for local testing, as the service is designed to run in a containerized environment.
Additionally, this service depends on other applications, such as MongoDB, which can be spun up using the provided Docker Compose file.
For development, an IDE such as IntelliJ IDEA is highly recommended.
Use these commands to build and run the application locally:
# Package the application, will execute the tests too
mvn clean package
# Run it locally
java -jar target/wls.jar
After building the JAR file, it can be used to build a Docker image using the following commands:
# Move the jar to the Docker directory
cp target/wls.jar docker/wls.jar
# Use Docker Buildx to build the Docker Image
docker buildx build --platform linux/amd64 -t wls:latest docker/
If you need a local setup of Docker on Arch Linux, install and enable the required tooling:
pacman -S --needed docker docker-buildx docker-compose
systemctl enable --now docker
Caveats:
- When building the Docker image outside NLN's network, the build will fail, as it won't be able to access the internal Harbor instance used to pull the base image. In this case, change the `FROM` line in the Dockerfile to `FROM eclipse-temurin:21-jdk-noble` and build the image locally.
- Do not attempt to push the image to Harbor manually, as it will fail. The image is built and pushed to Harbor automatically by the CI/CD pipeline.
A pre-built image can be found on NLN's internal Harbor instance under the mlt namespace.
The images are built from the main branch as well as from project tags, and can be pulled using the following commands:
# Pull the latest image
docker pull harbor.nb.no/mlt/wls:latest
# Or pull a specific tag (either a GitHub tag or "main" for the latest main branch image)
docker pull harbor.nb.no/mlt/wls:<TAG>
With the image either built or pulled, WLS can be run using one of the following commands:
docker run -p 8080:8080 -e SPRING_PROFILES_ACTIVE="local-dev" harbor.nb.no/mlt/wls:<TAG> # For pulled image
docker run -p 8080:8080 -e SPRING_PROFILES_ACTIVE="local-dev" wls:latest # For locally built image
After building the JAR file, it can be used to build an image with nerdctl (which uses BuildKit):
# Move the jar to the Docker directory
cp target/wls.jar docker/wls.jar
# Build image with containerd/BuildKit
nerdctl build --platform linux/amd64 -t wls:latest docker/
If you need a local setup of "rootful" containerd on Arch Linux, install and enable the required tooling:
pacman -S --needed containerd runc nerdctl cni-plugins buildkit iptables-nft rootlesskit
systemctl enable --now containerd
systemctl enable --now buildkit
The same caveats from Using Docker apply when building outside NLN's network.
With the image either built or pulled, WLS can be run using the same commands as in Using Docker, just use nerdctl instead of docker.
For local development and testing, an IDE like IntelliJ IDEA is recommended.
Its default Spring run configuration for the application works well; just set the SPRING_PROFILES_ACTIVE / Active Profiles variable to local-dev.
Ensure that the containers from the Docker Compose file are running before starting the application.
You can also use the provided run config to run the application locally. It comes with all environment variables set to default values, which you can copy to another run config and override values as needed. Be careful not to commit modified run configs to the repository.
The application has a default configuration for local development; however, you can override the default values by setting environment variables in your run configuration. Currently supported config values are listed in the Configuration section.
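Outside the IDE, the same overrides can be expressed as environment variables. The values below mirror the defaults listed in the Configuration section; adjust them as needed:

```shell
# Defaults from the Configuration section; override any of them as needed
export SPRING_PROFILES_ACTIVE=local-dev
export KEYCLOAK_ISSUER_URI='http://localhost:8082/auth/realms/wls'
export MONGODB_URI='mongodb://wls:slw@localhost:27017/wls?replicaSet=rs0&authSource=wls&directConnection=true'
export KAFKA_BOOTSTRAP_SERVERS=localhost:9092
echo "Active profile: $SPRING_PROFILES_ACTIVE"
```

With these set, `java -jar target/wls.jar` picks them up the same way an IDE run configuration does.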
To run the tests, use the following command:
mvn clean test
It should run all the tests in the project and provide a report at the end.
You can see the results in both the console and in the target/surefire-reports directory.
The CI/CD pipeline will run the tests automatically when a pull request is created.
It will create a report and provide it in the pull request.
It can be accessed by clicking on the Details link in the Checks section for the Deploy Project / JUnit Tests check of the pull request.
As we do not have a test instance of Keycloak, tests requiring authentication are disabled when the Spring profile is set to pipeline.
In an IDE like IntelliJ IDEA, the tests can be run by right-clicking on the src/test/kotlin directory in the project and selecting Run tests in 'kotlin'.
Further configuration can be done in the Run/Debug Configurations menu.
To run tests with authentication, you will need to set the spring.profiles.active system property to local-dev in the run configuration.
This can be easily done by adding the following to the VM options field in the run configuration:
-ea -Dspring.profiles.active=local-dev
We also have a default run configuration for running all tests in a local environment. It comes with all the required configurations already set up, so we recommend using it instead of creating a new one.
Of course, this requires you to have the services running locally; see the Local Dependencies section for more information.
Hermes WLS provides a REST API for interacting with the service.
The API is documented using Swagger, and can be accessed by running the application and navigating to the following URL:
http://localhost:8080/hermes/swagger
As the staging and production environments are deployed on internal networks, the deployed API is not accessible from the outside.
If you need to access the API in these environments, you will need to use a VPN connection to NLN's network.
The API is accessible at the usual URL, with the /hermes suffix.
Regardless of how you run Hermes WLS, it depends on other services and applications.
To run these, use the provided Docker Compose file.
The compose stack can be run with either <docker|nerdctl> compose.
This will spin up the following services:
- MongoDB: database for the application
  - Use the following credentials to log in:
    - Username: `wls`
    - Password: `slw`
- Email: uses a fake SMTP server for testing email functionality locally
  - Can be accessed at: `http://localhost:1080`
- Keycloak: authentication and authorization service for the application
  - Can be accessed at: `http://localhost:8082`
  - Use the following credentials to log in:
    - Username: `wls`
    - Password: `slw`
- Kafka: a message queue system for handling inventory statistics messages
  - Can be accessed at: `kafka:9092` (requires an `/etc/hosts` entry, see below) or `localhost:9092`
  - You can use the Kafka plugin for IntelliJ to view topics, queues, and their contents
    - Kafka plugin homepage
    - Connect to either `kafka:9092` or `localhost:9092`; both work
- Mockoon: a service used for mocking web server endpoints, used to test callback functionality
  - Endpoints are available at: `http://localhost:80/item` and `http://localhost:80/order`
  - See below on how to enable mapping `localhost` to `callback-wls.no`
  - To read the logs with request and response data, run:
    - `<docker|nerdctl> compose logs -f mockoon` to use compose log streaming by service name
    - `<docker|nerdctl> logs -f wls-mockoon-1` to use container log streaming by container name
    - Make sure that the container name matches the actual name from running `<docker|nerdctl> compose up`
    - You can check it using `<docker|nerdctl> ps`
For MongoDB replica set auth in local Compose, provide a keyfile at docker/mongo/secrets/keyfile.key before startup.
The Compose setup mounts this file read-only and copies it into container-local tmpfs at runtime, so host file ownership is never changed.
Run `bash docker/setup-mongo-keyfile.sh` before startup; it creates the keyfile if missing and validates it if already present.
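The actual logic lives in docker/setup-mongo-keyfile.sh; as a rough illustration of what a keyfile setup of this kind does (the path below is a demo stand-in, not the real location):

```shell
# Illustrative sketch of a MongoDB keyfile setup; the real logic lives in
# docker/setup-mongo-keyfile.sh. The KEYFILE path here is a demo stand-in.
KEYFILE="/tmp/demo-mongo/keyfile.key"
mkdir -p "$(dirname "$KEYFILE")"
if [ ! -f "$KEYFILE" ]; then
  # MongoDB keyfiles are 6-1024 base64 characters; 756 random bytes is a common choice
  openssl rand -base64 756 > "$KEYFILE"
  chmod 400 "$KEYFILE"
  echo "keyfile created"
else
  # Validate an existing keyfile: non-empty, owner-read-only
  [ -s "$KEYFILE" ] && echo "keyfile present"
fi
```

MongoDB refuses keyfiles with permissive modes, which is why the sketch sets `chmod 400` and why the Compose setup copies the file into container-local tmpfs instead of changing host ownership.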
To start the services, run the following command:
cd docker
<docker|nerdctl> compose up -d
# Alternatively from project root
<docker|nerdctl> compose -f docker/compose.yaml up -d
Additionally, to use the Mockoon service for mocking and logging callbacks to catalogue systems, you will need to make it discoverable on the host machine.
The default MONGODB_URI uses directConnection=true, which bypasses replica set host discovery, so a mongo-db hosts entry is not strictly required when using the default configuration.
However, for convenience and choice between localhost:27017 and mongo-db:27017 you can add the mongo-db entry as shown below.
You can also add an entry for Kafka to have a consistent connection string (kafka:9092) everywhere.
Add the following lines to your /etc/hosts file (or its equivalent on Windows or macOS) and restart your machine:
127.0.0.1 callback-wls.no
127.0.0.1 mongo-db
127.0.0.1 kafka
To stop the services, run the following command:
<docker|nerdctl> compose down
# Alternatively from project root
<docker|nerdctl> compose -f docker/compose.yaml down
# Optional steps for a clean restart (volumes are removed in a separate command, as `system prune --volumes` does not work)
<docker|nerdctl> system prune -af
<docker|nerdctl> volume prune -af
In addition to the local dependencies, Hermes WLS also depends on the following services:
- Kubernetes: for deployment and orchestration of the application
- Harbor: for hosting the Docker image of the application
- Vault: for secrets management in the deployment pipeline
All of these services are managed by NLN's Platform team and are not needed for local development. However, they are needed to deploy the application to the staging and production environments. The Platform team also maintains MongoDB, Kafka, the email server, and Keycloak in the staging and production environments.
The development of the Hermes WLS is done in "feature branches" that are merged into the main branch.
Name each feature branch after its JIRA code followed by a short summary of the feature.
For example mlt-0018-add-readme.
Make sure that your development tool supports the EditorConfig standard, and use the included .editorconfig file.
IntelliJ IDEA supports formatting the code according to the .editorconfig file, and can be set up in the Editor settings.
Furthermore, we use Spotless to format code automatically before each commit.
To set up Spotless, you can use IntelliJ's Spotless Applier plugin or run the following command:
mvn spotless:apply
Lastly, you should run the tests to ensure that the code works as expected. The CI/CD pipeline will run the tests automatically when a pull request is created. However, it's better to run them on your machine before pushing, as it saves time and resources. To run the tests, use the following command:
mvn clean test
The following environment variables are relevant to configuring the application. When running the application locally or in a pipeline, all of these variables are set automatically. However, when deploying to staging or production, they must be set manually.
- `KAFKA_BOOTSTRAP_SERVERS`: sets the Kafka bootstrap servers (default is `localhost:9092`)
- `KEYCLOAK_ISSUER_URI`: points at the Keycloak server used for authentication (default is `http://localhost:8082/auth/realms/wls`)
- `KEYCLOAK_TOKEN_AUD`: sets the audience of the Keycloak JWT token; it must match the issued token's audience value, which differs between environments (default is `http://localhost:8080`)
- `SPRING_PROFILES_ACTIVE`: sets the active Spring profile; use `local-dev`, `stage`, or `prod` (default is `pipeline`)
- `EMAIL_SERVER`: the URL of the email server used to send emails (default is `localhost`)
- `EMAIL_PORT`: the port used by the email server (default is `1025`)
- `MONGODB_URI`: the connection URI for our MongoDB instances, in the form `mongodb://username:password@host1:27017,host2:27017,host3:27017/database` (default is `mongodb://wls:slw@localhost:27017/wls?replicaSet=rs0&authSource=wls&directConnection=true`)
- `CALLBACK_SECRET`: the secret key used for signing outgoing callbacks (default is `superdupersecretkey`)
- `SYNQ_BASE_URL`: the base URL used for communicating with SynQ (default is `http://localhost:8181/synq/resources`)
- `KARDEX_ENABLED`: enables or disables the Kardex adapter (default is `false`)
- `KARDEX_BASE_URL`: the base URL used for communicating with Kardex (default is `http://localhost:8182/kardex`)
- `LOGISTICS_ENABLED`: enables or disables the Logistics API (default is `true`)
- `ORDER_HANDLER_EMAIL`: the email address orders are sent to (default is `daniel@mlt.hermes.no`)
- `ORDER_SENDER_EMAIL`: the email address used as the sender for orders (default is `hermes@mlt.hermes.no`)
- `TIMEOUT_SMTP`: the timeout in milliseconds for SMTP connections (default is `8000`)
- `TIMEOUT_MONGO`: the timeout in seconds for MongoDB operations (default is `8`)
- `TIMEOUT_INVENTORY`: the timeout in seconds for inventory-related operations (default is `10`)
- `TIMEOUT_STORAGE`: the timeout in seconds for storage-related operations (default is `10`)
- `HTTP_PROXY_HOST`: the host used for the HTTP proxy (default is `localhost`)
- `HTTP_PROXY_PORT`: the port used for the HTTP proxy (default is `3128`)
- `HTTP_NON_PROXY_HOSTS`: the list of hosts that should not be proxied (default is `localhost|127.0.0.1|docker`)
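For illustration, callbacks signed with a shared secret like `CALLBACK_SECRET` are commonly verified with an HMAC. The exact signing scheme Hermes uses is not documented here, and the payload field below is made up, so treat this as a hypothetical sketch:

```shell
# Hypothetical HMAC-SHA256 callback signature using CALLBACK_SECRET;
# the actual signing scheme used by Hermes WLS may differ.
CALLBACK_SECRET='superdupersecretkey'   # the documented default value
PAYLOAD='{"hostOrderId":"example-123"}' # illustrative payload, not a real schema
SIGNATURE=$(printf '%s' "$PAYLOAD" \
  | openssl dgst -sha256 -hmac "$CALLBACK_SECRET" -r \
  | cut -d' ' -f1)
echo "$SIGNATURE"
```

A receiver holding the same secret recomputes the HMAC over the payload and compares it to the received signature (a 64-character hex string for SHA-256).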
The section Running the Application describes how to run the application locally. Therefore, this section will focus on how to deploy the application to the staging and production environments.
Deployment to both environments is handled by their respective Kubernetes deployment files, which describe the deployment, service, and ingress for the application. There is very little difference between the two files, as the environments are very similar; the differences mostly concern resource limits and requests, as well as the number of replicas.
These files are no longer located in this repository, as they are managed by Argo CD. In both cases the deployment is handled by the CI/CD pipeline and Argo CD.
To deploy the application to the staging environment, push new changes to the main branch.
The repository is set up to only accept merges to the main branch through pull requests.
Therefore, to deploy to the staging environment, create a pull request and merge it to the main branch.
Actions in the pull request should test the application to ensure that it is working as expected.
When a pull request is merged to the main branch, the CI/CD pipeline will build the application, create a Docker image, and push it to Harbor.
Then the Argo CD application controller will deploy the application to the staging environment using the new Docker image.
The deployment will be done automatically.
To deploy the application to the production environment, create a new tag in the repository.
The tag should be in the format vX.Y.Z, where X, Y, and Z are numbers, following the semantic versioning standard.
This will trigger the CI/CD pipeline, which will deploy the application to the production environment. This is quite similar to the staging deployment and is also handled by Argo CD. You will have to manually approve the deployment in the GitHub Actions UI.
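Since the pipeline expects tags in the vX.Y.Z format, it can be worth checking a tag name before pushing it; a small sketch (the version number is illustrative):

```shell
# Verify a tag name follows the vX.Y.Z semantic versioning format
# before creating and pushing it (v1.2.3 is just an example)
TAG="v1.2.3"
if printf '%s' "$TAG" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'; then
  echo "valid tag: $TAG"
else
  echo "invalid tag: $TAG" >&2
fi
```

A valid tag is then created and pushed with `git tag "$TAG" && git push origin "$TAG"`.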
The project is maintained by the National Library of Norway organization. It is owned by the "Warehouse and Logistics" team (MLT).
For questions or issues, please create a new issue in the repository.
You can also contact the team by email at mlt at nb dot no.
This project is licensed under the MIT License.