README.md (78 changes: 23 additions & 55 deletions)
@@ -11,7 +11,7 @@ identify bottlenecks and slow operations.

## Quickstart

-The test suite is designed to run in-cluster and is fairly simple to start.
The test suite is designed to run both in-cluster and locally, and is fairly simple to start.
There are a few prerequisites and they will be explained below.

### Prerequisites
@@ -24,52 +24,6 @@ There are a few prerequisites and they will be explained below.
permissions (checkboxes) granted. Hold on to this token as it will be used
later.
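
One way to sanity-check the token is to query Quay's `/api/v1/user/` endpoint directly; a minimal sketch, with `<quay-host>` and `<oauth-token>` as placeholders for your deployment's values:

```
curl -H "Authorization: Bearer <oauth-token>" https://<quay-host>/api/v1/user/
```

A valid token should return the user associated with the application.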

-### Execution
-
-The test suite will run as a collection of jobs within the Kubernetes cluster.
-It is recommended to use a separate namespace for the tests.
-
-In this repository, there is a Job YAML file which will deploy the performance
-tests. This YAML file specifies some environment variables which should be
-overridden for your specific environment. The deployment file also creates
-a service account used for the Job(s) and deploys a Redis instance used as a
-central queue.
-
-1. Ensure the Job can run privileged. In Openshift, you may have to run
-   `oc adm policy add-scc-to-user privileged system:serviceaccount:$NAMESPACE:default`
-2. Edit the deployment file `deploy/test.job.yaml`
-   1. Change `QUAY_HOST` to the value of your Quay deployment's URL. This
-      should match the value of `SERVER_HOSTNAME` in Quay's `config.yaml`.
-   2. Change `QUAY_OAUTH_TOKEN` to the value of the token you created for
-      your application during the prerequisites.
-   3. Change `QUAY_ORG` to the name of the organization you created during
-      the prerequisites. Example: `test`.
-   4. Change `ES_HOST` to the hostname of your Elasticsearch instance.
-   5. Change `ES_PORT` to the port number your Elasticsearch instance is
-      listening on.
-3. Deploy the performance tests job: `kubectl create -f deploy/test.job.yaml -n $NAMESPACE`
-
-At this point, a Job with a single pod should be running. The job will output
-a decent amount of information to the logs if you'd like to watch its progress.
-Eventually, the Job gets to a point where it will perform tests against the
-registry aspects of the container (using podman) and will create other Jobs to
-execute those operations.
-
-## Environment Variables
-
-The following environment variables can be specified in the Job's deployment
-file to change the behavior of the tests.
-
-| Key | Type | Required | Description |
-| --- | ---- | :------: | ----------- |
-| QUAY_HOST | string | y | hostname of Quay instance to test |
-| QUAY_OAUTH_TOKEN | string | y | Quay Application OAuth Token. Used for authentication purposes on certain API endpoints. |
-| QUAY_ORG | string | y | The organization which will contain all created resources during the tests. |
-| ES_HOST | string | y | Hostname of the Elasticsearch instance used to store the test results. |
-| ES_PORT | string | y | Port of the Elasticsearch instance used for storing test results. |
-| BATCH_SIZE | string | n | Number of items to pop off the queue in each batch. This primarily applies to the registry push and pull tests. Do not exceed 400 until the known issue is resolved. |
-| CONCURRENCY | int | n | Defaults to 4. The quantity of requests or test executions to perform in parallel. |

## Changelog

**v0.1.0**
@@ -118,11 +72,10 @@ known issues:
### Setup

The project expects the following environment variables:
-- PODMAN_USERNAME: Username to log into Podman
-- PODMAN_PASSWORD: Password for the above user
-- PODMAN_HOST: The url of the host registry where images will be pushed
-- QUAY_HOST: The url where Quay is hosted
-- OAUTH_TOKEN: The Authorization Token to enable API calls(On Quay: Create an organization followed by creating an application in the organization. Generate token for the application.)
- `QUAY_USERNAME`: Username used by Podman to log into Quay
- `QUAY_PASSWORD`: Password for the above user
- `QUAY_HOST`: The URL where Quay is hosted
- `OAUTH_TOKEN`: The authorization token enabling API calls (on Quay: create an organization, then create an application in that organization and generate a token for it)
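
When running locally, these can simply be exported in the shell before starting the tests; a minimal sketch with placeholder values:

```
export QUAY_USERNAME=admin
export QUAY_PASSWORD=password
export QUAY_HOST=quay.example.com
export OAUTH_TOKEN=<token-from-your-quay-application>
```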

### Building

@@ -156,13 +109,28 @@ the number of users defined in `testfiles/run.py` to run all user classes.

The tests are run via Locust in distributed mode. There is a single master
which controls multiple worker pods. The number of worker replicas is
-defined in `deploy/locust-distributed.yaml` file.
defined in the `deploy/locust-distributed.yaml.example` file.

Copy the `deploy/locust-distributed.yaml.example` file to `deploy/locust-distributed.yaml`:

```
cp deploy/locust-distributed.yaml.example deploy/locust-distributed.yaml
```

-Edit the `ConfigMap` `quay-locust-config` in the
-`deploy/locust-distributed.yaml` and set the variables accordingly
1. Replace the placeholder `NAMESPACE` with your namespace (a one-line substitution is sketched after this list)
2. Edit the `ConfigMap` `locust-config` in `deploy/locust-distributed.yaml` and set the variables accordingly
3. If you want to use a different image, update the `image` field in the master and worker Deployments
4. Change the `replicas` field in the `worker` Deployment to the number you need (the default is 2 workers)
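
For example, the namespace placeholder can be substituted in one step (GNU sed shown; `quay-perf` is an assumed namespace name):

```
sed -i 's/NAMESPACE/quay-perf/g' deploy/locust-distributed.yaml
```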

Deploy locust on the cluster by running:

```
kubectl apply -f deploy/locust-distributed.yaml
```

This should deploy Locust in distributed mode on the cluster. To access the web UI, port-forward it locally:

```
kubectl port-forward svc/locust-master -n <namespace> 8089
```
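
Once the port-forward is active, the web UI should be reachable at http://localhost:8089. To check that the master and worker pods came up, the `role` labels from the manifest below can be used:

```
kubectl get pods -n <namespace> -l 'role in (locust-master, locust-worker)'
```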
deploy/locust-distributed.yaml.example (151 changes: 151 additions & 0 deletions)
@@ -0,0 +1,151 @@
apiVersion: v1
kind: Namespace
metadata:
  name: NAMESPACE
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: NAMESPACE
  name: locust-config
data:
  QUAY_HOST: quay-app.syed-py2-quay
  QUAY_USERNAME: admin
  QUAY_PASSWORD: password
  OAUTH_TOKEN: 4tW1FMzhHOZ73uy5R4H2MnOGOzzVAxLz7Mm2B965
  REGISTRY_AUTH_FILE: /tmp/auth.json
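  # NOTE: the values above are example placeholders; replace the host,
  # credentials, and OAuth token with those for your own deployment.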
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    role: locust-master
  name: locust-master
  namespace: NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      role: locust-master
  template:
    metadata:
      labels:
        role: locust-master
    spec:
      containers:
      - image: quay.io/syed/quay-perftest-locust:latest
        imagePullPolicy: Always
        name: locust-master
        command: ["locust"]
        args: ["-f", "/quay-performance-scripts/locustfiles/run.py", "--master"]
        env:
        - name: QUAY_HOST
          valueFrom:
            configMapKeyRef:
              name: locust-config
              key: QUAY_HOST
        - name: QUAY_USERNAME
          valueFrom:
            configMapKeyRef:
              name: locust-config
              key: QUAY_USERNAME
        - name: QUAY_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: locust-config
              key: QUAY_PASSWORD
        - name: OAUTH_TOKEN
          valueFrom:
            configMapKeyRef:
              name: locust-config
              key: OAUTH_TOKEN
        - name: REGISTRY_AUTH_FILE
          valueFrom:
            configMapKeyRef:
              name: locust-config
              key: REGISTRY_AUTH_FILE
        ports:
        - containerPort: 5557
          name: comm
        - containerPort: 5558
          name: comm-plus-1
        - containerPort: 8089
          name: web-ui
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    role: locust-worker
  name: locust-worker
  namespace: NAMESPACE
spec:
  replicas: 2
  selector:
    matchLabels:
      role: locust-worker
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        role: locust-worker
    spec:
      containers:
      - image: quay.io/syed/quay-perftest-locust:latest
        imagePullPolicy: Always
        name: locust-worker
        command: ["locust"]
        args: ["-f", "/quay-performance-scripts/locustfiles/run.py", "--worker", "--master-host=locust-master"]
        env:
        - name: QUAY_HOST
          valueFrom:
            configMapKeyRef:
              name: locust-config
              key: QUAY_HOST
        - name: QUAY_USERNAME
          valueFrom:
            configMapKeyRef:
              name: locust-config
              key: QUAY_USERNAME
        - name: QUAY_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: locust-config
              key: QUAY_PASSWORD
        - name: OAUTH_TOKEN
          valueFrom:
            configMapKeyRef:
              name: locust-config
              key: OAUTH_TOKEN
        - name: REGISTRY_AUTH_FILE
          valueFrom:
            configMapKeyRef:
              name: locust-config
              key: REGISTRY_AUTH_FILE
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    role: locust-master
  name: locust-master
  namespace: NAMESPACE
spec:
  ports:
  - port: 5557
    name: communication
  - port: 5558
    name: communication-plus-1
  - port: 8089
    targetPort: 8089
    name: web-ui
  selector:
    role: locust-master