The purpose of this repository is to provide a set of scalability and performance tests for Red Hat Quay and Project Quay. These tests are not necessarily intended to push Quay to its limits; instead, they collect metrics on various operations. These metrics are used to determine how changes to Quay affect its performance. A useful side effect of these tests is the ability to identify bottlenecks and slow operations.
The test suite is designed to run both in-cluster and locally, and is fairly simple to start. There are a few prerequisites, explained below.
- Deploy a Kubernetes environment. The tests will run within the cluster.
- Deploy Quay itself.
- In Quay, as a superuser (important), create an organization for testing purposes. Within that organization, create an application for testing purposes. Within that application, create an OAuth token with all permissions (checkboxes) granted. Hold on to this token, as it will be used later; a quick way to sanity-check it follows this list.
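Before moving on, it may be worth confirming the token works against Quay's API. This is a quick sanity check rather than part of the test suite; the host and token values below are placeholders:

```
# Fetch the authenticated user to confirm the OAuth token is valid.
# Replace the host and <oauth-token> with your own values.
curl -s -H "Authorization: Bearer <oauth-token>" \
  http://localhost:8080/api/v1/user/
```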
v0.1.0 changes:
- Tests are run using the Locust framework
- Concurrent testing is done using Locust in distributed mode
- Metrics are now exported as Prometheus metrics
v0.0.2 changes:
- Python is used for orchestrating and defining all tests.
- Tests now run within a Kubernetes cluster.
- Registry tests are executed concurrently using parallel Kubernetes Jobs.
- Reduced the number of steps required to run the tests.
known issues:
- The orchestrator Job does not clean up the other Jobs it creates. No owner reference is specified, so they are not cleaned up when the main Job is deleted, either.
- The image used for registry operations has an issue where `podman build` will leave fuse processes running after it has completed. This can lead to a situation where all available threads are used. Due to this issue, the batch size for each Job in the "podman push" tests is limited to 400.
- The container image uses alpine:edge, which is the only version of Alpine that includes podman. Alpine was chosen because it seemed to eliminate some of the complications that arise when performing build/push/pull operations within Kubernetes and OpenShift. Eventually, a stable image should be used instead.
- The output logging of some subprocesses is broken and creates very long lines.
- The primary Job does not watch for the failure of its child Jobs.
v0.0.1
- The original implementation.
(TODO) This section still needs to be written.
The project expects the following environment variables (an example export block follows the list):
- QUAY_USERNAME: Username used to log in via Podman
- QUAY_PASSWORD: Password for the above user
- QUAY_HOST: The URL where Quay is hosted (e.g. http://localhost:8080)
- CONTAINER_HOST: The registry domain where container images will be pushed. To test Quay, this is Quay's own domain (e.g. localhost:8080)
- OAUTH_TOKENS: A list of authorization tokens to enable API calls. On Quay: create an organization, create an application within it, and generate a token for the application (e.g. '["oauthtoken1", "oauthtoken2"]')
- CONTAINER_IMAGES: A list of container images, with tags, to run the tests against; an image without a tag defaults to the latest tag (e.g. '["quay.io/prometheus/node-exporter:v1.2.2", "quay.io/bitnami/sealed-secrets-controller:v0.16.0"]')
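For a local run, these variables can simply be exported in the shell before starting the container. The values below are illustrative placeholders drawn from the examples above:

```
export QUAY_USERNAME="username"
export QUAY_PASSWORD="password"
export QUAY_HOST="http://localhost:8080"
export CONTAINER_HOST="localhost:8080"
export OAUTH_TOKENS='["oauthtoken1", "oauthtoken2"]'
export CONTAINER_IMAGES='["quay.io/prometheus/node-exporter:v1.2.2"]'
```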
From the main directory, the Docker image can be built using the command:

```
docker build -t perf-test -f Dockerfile-locust .
```

The container can then be started with:

```
docker run -e QUAY_USERNAME="username" \
  -e QUAY_PASSWORD="password" \
  -e CONTAINER_HOST="localhost:8080" \
  -e QUAY_HOST="http://localhost:8080" \
  -e OAUTH_TOKENS='["abc", "def"]' \
  -e CONTAINER_IMAGES='["abc", "def"]' \
  --privileged \
  -p 8089:8089 --name quay-test -d perf-test
```
Once the container starts successfully, the Locust dashboard is accessible on port 8089. To run all user classes, the number of users spawned must be at least the number of user classes defined in testfiles/run.py; see the headless example below.
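For reference, a headless run can pin the user count explicitly using standard Locust flags. This is a sketch; it assumes testfiles/run.py is the locustfile and that 10 users is enough to cover every user class defined there:

```
# -u (users) must be at least the number of user classes in testfiles/run.py;
# -r is the spawn rate (users started per second).
locust -f testfiles/run.py --headless -u 10 -r 2 --host http://localhost:8080
```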
The tests are run via Locust in distributed mode: a single master controls multiple worker pods. The number of replicas for the workers is defined in the deploy/locust-distributed.yaml.example file.
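Under the hood this is ordinary Locust distributed mode. Roughly, the master and worker pods run commands along these lines (a sketch using standard Locust flags, not the exact container entrypoint):

```
# Master: serves the web UI on 8089 and coordinates the workers.
locust -f testfiles/run.py --master

# Worker (one per replica): connects to the master and generates load.
locust -f testfiles/run.py --worker --master-host locust-master
```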
Copy the deploy/locust-distributed.yaml.example file to deploy/locust-distributed.yaml:

```
cp deploy/locust-distributed.yaml.example deploy/locust-distributed.yaml
```
- Replace the placeholder `NAMESPACE` with your namespace (a one-liner for this follows the list)
- Edit the ConfigMap `quay-locust-config` in `deploy/locust-distributed.yaml` and set the variables accordingly
- If you want to use a different image, update the `image` field in the master and worker Deployments
- Change the `replicas` field in the worker Deployment to the number you need (the default is 2 workers)
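For example, the namespace placeholder can be substituted in one step (assuming the placeholder is the literal string NAMESPACE, as noted above):

```
# Replace every occurrence of the NAMESPACE placeholder in the copied manifest.
sed -i 's/NAMESPACE/<your-namespace>/g' deploy/locust-distributed.yaml
```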
Deploy Locust on the cluster by running:

```
kubectl apply -f deploy/locust-distributed.yaml
```
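Before port-forwarding, you can confirm that the master and worker pods came up:

```
# Expect one master pod and the configured number of worker pods in Running state.
kubectl get pods -n <namespace>
```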
This deploys Locust in distributed mode on the cluster. To access the web UI, port-forward it locally:

```
kubectl port-forward svc/locust-master -n <namespace> 8089
```

The web UI will then be available at http://localhost:8089.