NOTE: Applications deployed in this repository are not meant or configured for production.
- INSTALL SCRIPTS MUST BE RUN AGAINST AN EKS CLUSTER. We use IRSA to talk to AWS services.
- Components are installed as ArgoCD Applications.
- Files under the `/packages` directory are meant to be usable without any modifications. This means certain configuration options, such as the domain name, must be passed in from outside this directory, e.g. via ArgoCD's Helm parameters (a sketch follows).
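For instance, a minimal sketch of an ArgoCD Application that passes a domain name into a package via Helm parameters; the application name, path, and parameter name here are illustrative, not the repository's actual values:

```bash
# Hypothetical Application passing configuration from outside /packages.
# Names, paths, and the "domain" parameter are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: backstage
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<YOUR_ORG>/reference-implementation-aws-user-friendly
    path: packages/backstage
    helm:
      parameters:
        - name: domain
          value: example.com
  destination:
    server: https://kubernetes.default.svc
    namespace: backstage
EOF
```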
TODO: We could probably deploy everything as an ArgoCD app of apps with sync waves.
Secrets such as the GitHub token and TLS private keys are currently handled outside of the repository and set via bash scripts; they are stored in the `/private` directory.
TODO: We may use sealed secrets for a full GitOps approach in the future.
- A GitHub ORGANIZATION
- An existing EKS cluster (version 1.27+)
- AWS CLI (2.13+)
- Kubectl CLI (1.27+)
- jq
- git
- curl
- kustomize
- node + npm (if you choose to create GitHub App via CLI)
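A quick way to confirm the tools are available (a hedged convenience check, not part of the install scripts):

```bash
# Sanity check that the prerequisite CLIs are installed and on PATH.
for cmd in aws kubectl jq git curl kustomize node npm; do
  command -v "$cmd" >/dev/null || echo "missing: $cmd"
done
aws --version              # expect 2.13+
kubectl version --client   # expect 1.27+
```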
We strongly encourage you to create a dedicated GitHub organization. If you don't have an organization for this purpose, please follow this link to create one.
There are two ways to create the GitHub integration with Backstage: use the Backstage CLI, or create it manually. See this page for more information on creating one manually. Once the app is created, place the generated credentials file under the `private` directory with the name `github-integration.yaml`.
To create one with the CLI, follow the steps below.
npx '@backstage/cli' create-github-app ${GITHUB_ORG_NAME}
# If prompted, select all for permissions or select permissions listed in this page https://backstage.io/docs/integrations/github/github-apps#app-permissions
# In the browser window, allow access to all repositories then install the app.
# move it to a "private" location.
mkdir -p private
GITHUB_APP_FILE=$(ls github-app-* | head -n1)
mv ${GITHUB_APP_FILE} private/github-integration.yaml

The file created above contains credentials. Handle it with care.
The rest of the installation process assumes the GitHub app credentials are available at `private/github-integration.yaml`.
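A quick sanity check on the file before continuing; the field names (appId, clientId, privateKey) are assumptions based on the credentials format the Backstage CLI emits:

```bash
# Confirm the credentials file exists and looks complete.
test -f private/github-integration.yaml || echo "missing: private/github-integration.yaml"
grep -Eq 'appId|clientId|privateKey' private/github-integration.yaml \
  || echo "file may be incomplete"
```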
If you want to delete the GitHub application, follow these steps.
A GitHub token is needed by ArgoCD to get information about repositories under your Organization.
The following permissions are needed:
- Repository access for all repositories
- Read-only access to: Administration, Contents, and Metadata.

Get your GitHub personal access token from: https://github.com/settings/tokens?type=beta
Once you have your token, save it under the private directory with the name github-token. For example:
# From the root of this repository.
$ mkdir -p private
$ vim private/github-token # paste your token
# example output
$ cat private/github-token
github_pat_ABCDEDFEINDK....

After creating your dedicated GitHub organization, go to Settings > Personal access tokens > Settings and edit the configuration below; a quick way to verify the token follows the list.
- Fine-grained personal access tokens, select: Allow access via fine-grained personal access tokens
- Require approval of fine-grained personal access tokens, select: Do not require administrator approval
- Personal access token (classic), select: Allow access via personal access tokens (classic)
- Click Save
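Optional sanity check, not part of the install scripts: list the repositories in your organization using the saved token. Assumes `GITHUB_ORG_NAME` is set; the endpoint is the standard GitHub REST API.

```bash
# List org repositories with the new token; failures suggest a permissions
# or org-settings problem.
curl -s -H "Authorization: Bearer $(cat private/github-token)" \
  "https://api.github.com/orgs/${GITHUB_ORG_NAME}/repos" | jq -r '.[].name'
```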
- Clone this repo locally, e.g.:

git clone [email protected]:cnoe-io/reference-implementation-aws-user-friendly.git

- Create a new PRIVATE repo under your newly created GitHub organization.
- Add a new remote:

git remote add github-org <GITHUB_URL>

For example:

git remote add github-org [email protected]:manabuOrg/reference-implementation-aws-user-friendly.git

- Push it to the new remote:

git push github-org

- Update the `GITHUB_URL` value in the config file.
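One way to update the value in place; this assumes `setups/config` uses KEY=value lines, as the hosted zone example later in this document suggests:

```bash
# Replace the URL with your organization's repo URL.
sed -i 's|^GITHUB_URL=.*|GITHUB_URL=git@github.com:<YOUR_ORG>/reference-implementation-aws-user-friendly.git|' setups/config
```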
IF YOU DON'T HAVE A DOMAIN
- Create a domain in Supernova. Choose `people` as the organization. Read the Supernova wiki for more information.
- Create a new SUB domain for this purpose and delegate it to a Route53 zone. Read the wiki.
- Use the Route53 zone id for the rest of the process.
READ THE SECTION ABOVE.
- Create the GitHub app and GitHub token as described above.
- Create a new EKS cluster. You can use an existing cluster, but we cannot guarantee any existing resources will work with the script. You can create a new basic cluster with the included `eksctl.yaml` file:

eksctl create cluster -f eksctl.yaml

You can get eksctl from this link.
- If you don't have a public registered Route53 zone, register a Route53 domain (be sure to use Route53 as the DNS service for the domain). We strongly encourage creating a dedicated sub domain for this. If you'd prefer managing DNS somewhere else, set `MANAGED_DNS=false`.
- Get the hosted zone id and put it in the config file, e.g.:

aws route53 list-hosted-zones-by-name --dns-name <YOUR_DOMAIN_NAME> --query 'HostedZones[0].Id' --output text | cut -d'/' -f3

# in the setups/config file, update the zone id.
HOSTEDZONE_ID=ZO020111111
- Update the `setups/config` file with your own values.
- Run `setups/install.sh` and follow the prompts. See the section below about monitoring installation progress.
- Once installation completes, navigate to `idp.<DOMAIN_NAME>` and log in as `user1`. The password is available as a secret; retrieve it with the command below. You may need to wait for DNS propagation to complete before you can log in, which may take ~10 minutes.

kubectl get secrets -n keycloak keycloak-user-config -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
Components are installed as ArgoCD Applications. You can monitor installation progress by going to ArgoCD UI.
# Get the admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
kubectl port-forward svc/argocd-server -n argocd 8081:80

Go to http://localhost:8081 and log in with the username `admin` and the password obtained above. In the UI you can look at the resources created, their logs, and events.
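Alternatively, a sketch of checking the same information with the argocd CLI through the port-forward; the app name in the last command is illustrative:

```bash
# --insecure is needed because the port-forward serves plain HTTP.
argocd login localhost:8081 --username admin --insecure \
  --password "$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d)"
argocd app list           # sync and health status of all Applications
argocd app get backstage  # details for a single app (name is illustrative)
```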
If you set `MANAGED_DNS=false`, external-dns is not installed and you are responsible for managing DNS records yourself. You must create the following DNS records:
- `idp.<DOMAIN_NAME>`
- `keycloak.<DOMAIN_NAME>`
- `argo.<DOMAIN_NAME>`
- `argocd.<DOMAIN_NAME>`
Point these records to the value returned by the following command.
kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

If you set `MANAGED_CERT=false`, cert-manager is not installed and you are responsible for managing TLS certificates yourself. You must create TLS secrets accordingly.
Run the following command to find where to create secrets.
output=$(kubectl get ingress --all-namespaces -o json | jq -r '.items[] | "\(.metadata.namespace) \(.spec.rules[].host) \(.spec.tls[].secretName)"')
echo -e "Namespace \t Hostname \t TLS Secret Name"
echo -e "$output"

The secret format should look something like:
apiVersion: v1
kind: Secret
metadata:
  name: idp.<DOMAIN>
  namespace: backstage
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
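Equivalently, you can create the secret with kubectl, assuming the certificate and key are in local PEM files:

```bash
# Creates the same kubernetes.io/tls secret as the manifest above.
kubectl create secret tls "idp.<DOMAIN>" --namespace backstage \
  --cert=tls.crt --key=tls.key
```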
The following components are installed if you chose the full installation option.
| Name | Version |
|---|---|
| argo-workflows | v3.4.8 |
| argocd | v2.7.6 |
| aws-load-balancer-controller | v2.5.3 |
| backstage | v1.16.0 |
| cert-manager | v1.12.2 |
| crossplane | v1.12.2 |
| external-dns | v0.13.5 |
| ingress-nginx | v1.8.0 |
| keycloak | v22.0.0 |
| spark-operator | v1beta2-1.3.8-3.1.1 |
| external-secrets | v0.9.2 |
If full installation is done, you should have these DNS entries available. They all point to the Network Load Balancer.
- `idp.<DOMAIN_NAME>`
- `argo.<DOMAIN_NAME>`
- `keycloak.<DOMAIN_NAME>`
You can confirm these by querying a public DNS resolver:
dig A idp.<DOMAIN_NAME> @1.1.1.1
kubectl get svc -n ingress-nginx

HTTPS endpoints are also created with valid certificates.
openssl s_client -showcerts -servername idp.<DOMAIN_NAME> -connect idp.<DOMAIN_NAME>:443 <<< "Q"
curl https://idp.<DOMAIN_NAME>

When you open a browser window and go to https://idp.<DOMAIN_NAME>, you should be prompted to log in.
Two users are created during the installation process: user1 and user2. Their passwords are available in the keycloak namespace.
kubectl get secrets -n keycloak keycloak-user-config -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'

- Run `setups/uninstall.sh` and follow the prompts.
- Remove the GitHub app from your organization by following these steps.
- Remove token from your GitHub Organization by following these steps.
- Remove the created GitHub Organization.
Uninstall details
Currently, resources created by applications are not deleted. For example, if you have Spark jobs running, they are not deleted and may block deletion of the spark-operator app.
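A hedged cleanup sketch for that case: delete leftover Spark jobs first so they don't block removal of the spark-operator app. This assumes the upstream spark-operator CRD (`sparkapplications.sparkoperator.k8s.io`).

```bash
# Remove all SparkApplication resources before uninstalling.
kubectl delete sparkapplications --all --all-namespaces
```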
TODO
- By default, cert-manager uses the http-01 challenge. If you'd prefer dns-01, you can update the ingress files; a sketch of a dns-01 issuer follows this list. TODO: automate this.
- You may get events like `Get "http://<DOMAIN>/.well-known/acme-challenge/09yldI6tVRvtWVPyMfwCwsYdOCEGGVWhmb1PWzXwhXI": dial tcp: lookup <DOMAIN> on 10.100.0.10:53: no such host`. This is due to DNS propagation delay and may take ~10 minutes to resolve.
See the troubleshooting doc for more information.
- Route53 records. Route53 hosted zones are not created. You must also register the domain if you want to be able to access it through public DNS. These records are managed by the external-dns controller.
- AWS Network Load Balancer. This is the entrance to the Kubernetes cluster. It points to the default installation of Ingress Nginx and is managed by the AWS Load Balancer Controller.
- TLS certificates issued by Let's Encrypt. These are managed by cert-manager based on values in Ingress objects. They use the production issuer, which means we must be very careful with how many certificates we request and how often. The uninstall scripts back up certificates to the `private` directory to avoid re-issuing them.
These resources are controlled by Kubernetes controllers and thus should be deleted through those controllers.
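A sketch of what the certificate backup amounts to; the namespace and secret name are illustrative:

```bash
# Save an issued certificate secret before uninstalling so it can be
# restored later without asking Let's Encrypt to re-issue.
mkdir -p private/cert-backup
kubectl get secret "idp.<DOMAIN>" -n backstage -o yaml > private/cert-backup/idp-tls.yaml
```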
If using Keycloak SSO with fully automated DNS and certificate management, the installation order must be:
- aws-load-balancer-controller
- ingress-nginx
- cert-manager
- external-dns
- keycloak
- Everything else
If using Keycloak SSO but managing DNS records and certificates manually, the order is:
- aws-load-balancer-controller
- ingress-nginx
- Everything else except cert-manager and external-dns
In this case, you can issue your own certificates and provide them as TLS secrets, as specified in the `spec.tls[0].secretName` field of Ingress objects.
You can also let the NLB or ALB terminate TLS instead, using the load balancer controller. This is not currently covered, but it is possible.
If not using SSO, there is no particular installation order; eventual consistency works.
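The ordering above would map naturally onto ArgoCD sync waves if the app-of-apps TODO mentioned earlier were implemented; a hypothetical sketch, with application names assumed:

```bash
# In a real setup the sync-wave annotation belongs in the checked-in
# Application manifests; shown here via kubectl purely for illustration.
kubectl annotate application -n argocd aws-load-balancer-controller \
  argocd.argoproj.io/sync-wave="0" --overwrite
kubectl annotate application -n argocd ingress-nginx \
  argocd.argoproj.io/sync-wave="1" --overwrite
kubectl annotate application -n argocd keycloak \
  argocd.argoproj.io/sync-wave="2" --overwrite
```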
