Commit 394ec36
Restructure and enhance getting started documentation (#7113)
* Restructure and enhance getting started documentation

  Split the monolithic getting started guide into three focused documents:
  - Main index with overview and guide selection
  - Separate single-binary mode guide
  - Separate microservices mode guide

  Key improvements:
  - Clear comparison table to help users choose the right deployment mode
  - Step-by-step instructions with verification commands
  - Hands-on experiments to learn Cortex behavior
  - Comprehensive troubleshooting sections
  - Explanations of multi-tenancy and key concepts
  - Clear next steps and additional resources

  Signed-off-by: Charlie Le <[email protected]>

* Update docs/getting-started/microservices.md

  Co-authored-by: Friedrich Gonzalez <[email protected]>
  Signed-off-by: Charlie Le <[email protected]>

* Update installation instructions for cortextool

  Signed-off-by: Charlie Le <[email protected]>

---------

Signed-off-by: Charlie Le <[email protected]>
Co-authored-by: Friedrich Gonzalez <[email protected]>
1 parent 55dd382 commit 394ec36

File tree: 4 files changed, +1076 −287 lines changed

docs/getting-started/_index.md

Lines changed: 47 additions & 287 deletions
````diff
@@ -6,315 +6,75 @@ no_section_index_title: true
 slug: "getting-started"
 ---
 
-Cortex is a powerful platform software that can be run in two modes: as a single binary or as multiple
-independent [microservices](../architecture.md).
+Welcome to Cortex! This guide will help you get a Cortex environment up and running quickly.
 
-There are two guides in this section:
+## What is Cortex?
 
-1. [Single Binary Mode with Docker Compose](#single-binary-mode)
-2. [Microservice Mode with KIND](#microservice-mode)
+Cortex is a horizontally scalable, highly available, multi-tenant, long-term storage solution for Prometheus and OpenTelemetry Metrics. It can be run in two modes:
 
-The single binary mode is useful for testing and development, while the microservice mode is useful for production.
+- **Single Binary Mode**: All components run in a single process - ideal for testing, development, and learning
+- **Microservices Mode**: Components run as independent services - designed for production deployments
 
-Both guides will help you get started with Cortex using [blocks storage](../blocks-storage/_index.md).
+Both deployment modes use [blocks storage](../blocks-storage/_index.md), which is based on Prometheus TSDB and stores data in object storage (S3, GCS, Azure, or compatible services).
 
-## Single Binary Mode
+## Choose Your Guide
 
-This guide will help you get started with Cortex in single-binary mode using
-[blocks storage](../blocks-storage/_index.md).
+| Mode | Time | Use Case | Guide |
+|------|------|----------|-------|
+| **Single Binary** | ~15 min | Learning, Development, Testing | [Start Here →](single-binary.md) |
+| **Microservices** | ~30 min | Production-like Environment, Kubernetes | [Start Here →](microservices.md) |
 
-### Prerequisites
+### Single Binary Mode
 
-Cortex can be configured to use local storage or cloud storage (S3, GCS, and Azure). It can also utilize external
-Memcached and Redis instances for caching. This guide will focus on running Cortex as a single process with no
-dependencies.
+Perfect for your first experience with Cortex. Runs all components in one process with minimal dependencies.
 
-* [Docker Compose](https://docs.docker.com/compose/install/)
+**What you'll set up:**
+- Cortex (single process)
+- Prometheus (sending metrics via remote_write)
+- Grafana (visualizing metrics)
+- SeaweedFS (S3-compatible storage)
 
-### Running Cortex as a Single Instance
+**Requirements:**
+- Docker & Docker Compose
+- 4GB RAM, 10GB disk
 
-For simplicity, we'll start by running Cortex as a single process with no dependencies. This mode is not recommended or
-intended for production environments or production use.
+[Get Started with Single Binary Mode →](single-binary.md)
 
-This example uses [Docker Compose](https://docs.docker.com/compose/) to set up:
+### Microservices Mode
 
-1. An instance of [SeaweedFS](https://github.com/seaweedfs/seaweedfs/) for S3-compatible object storage
-1. An instance of [Cortex](https://cortexmetrics.io/) to receive metrics.
-1. An instance of [Prometheus](https://prometheus.io/) to send metrics to Cortex.
-1. An instance of [Perses](https://perses.dev) for latest trend on dashboarding
-1. An instance of [Grafana](https://grafana.com/) for legacy dashboarding
+Experience Cortex as it runs in production. Each component runs as a separate service in Kubernetes.
 
-#### Instructions
+**What you'll set up:**
+- Cortex (distributed: ingester, querier, distributor, compactor, etc.)
+- Prometheus (sending metrics via remote_write)
+- Grafana (visualizing metrics)
+- SeaweedFS (S3-compatible storage)
 
-```sh
-$ git clone https://github.com/cortexproject/cortex.git
-$ cd cortex/docs/getting-started
-```
-
-**Note**: This guide uses `grafana-datasource-docker.yaml` which is specifically configured for the single binary Docker Compose deployment. For Kubernetes/microservices mode, use `grafana-datasource.yaml` instead.
-
-##### Start the services
-
-```sh
-$ docker compose up -d
-```
-
-We can now access the following services:
-
-* [Cortex](http://localhost:9009)
-* [Prometheus](http://localhost:9090)
-* [Grafana](http://localhost:3000)
-* [SeaweedFS](http://localhost:8333)
-
-If everything is working correctly, Prometheus should be sending metrics that it is scraping to Cortex. Prometheus is
-configured to send metrics to Cortex via `remote_write`. Check out the `prometheus-config.yaml` file to see
-how this is configured.
-
-#### Configure Cortex Recording Rules and Alerting Rules
-
-We can configure Cortex with [cortextool](https://github.com/cortexproject/cortex-tools/) to load [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/). This is optional, but it is helpful to see how Cortex can be configured to manage rules and alerts.
-
-```sh
-# Configure recording rules for the Cortex tenant (optional)
-$ docker run --network host -v "$(pwd):/workspace" -w /workspace quay.io/cortexproject/cortex-tools:v0.17.0 rules sync rules.yaml alerts.yaml --id cortex --address http://localhost:9009
-```
-
-#### Configure Cortex Alertmanager
-
-Cortex also comes with a multi-tenant Alertmanager. Let's load configuration for it to be able to view them in Grafana.
-
-```sh
-# Configure alertmanager for the Cortex tenant
-$ docker run --network host -v "$(pwd):/workspace" -w /workspace quay.io/cortexproject/cortex-tools:v0.17.0 alertmanager load alertmanager-config.yaml --id cortex --address http://localhost:9009
-```
-
-You can configure Alertmanager in [Grafana as well](http://localhost:3000/alerting/notifications?search=&alertmanager=Cortex%20Alertmanager).
-
-There's a list of recording rules and alerts that should be visible in Grafana [here](http://localhost:3000/alerting/list?view=list&search=datasource:Cortex).
-
-#### Explore
-
-Grafana is configured to use Cortex as a data source. Grafana is also configured with [Cortex Dashboards](http://localhost:3000/dashboards?tag=cortex) to understand the state of the Cortex instance. The dashboards are generated from the cortex-jsonnet repository. There is a Makefile in the repository that can be used to update the dashboards.
-
-```sh
-# Update the dashboards (optional)
-$ make
-```
-
-If everything is working correctly, then the metrics seen in Grafana were successfully sent from Prometheus to Cortex
-via `remote_write`!
-
-Other things to explore:
-
-- [Cortex](http://localhost:9009) - Administrative interface for Cortex
-  - Try shutting down the ingester, and see how it affects metric ingestion.
-  - Restart Cortex to bring the ingester back online, and see how Prometheus catches up.
-  - Does it affect the querying of metrics in Grafana?
-- [Prometheus](http://localhost:9090) - Prometheus instance that is sending metrics to Cortex
-  - Try querying the metrics in Prometheus.
-  - Are they the same as what you see in Cortex?
-- [Grafana](http://localhost:3000) - Grafana instance that is visualizing the metrics.
-  - Try creating a new dashboard and adding a new panel with a query to Cortex.
-
-### Clean up
-
-```sh
-$ docker compose down -v
-```
-
-## Microservice Mode
-
-Now that you have Cortex running as a single instance, let's explore how to run Cortex in microservice mode.
-
-### Prerequisites
-
-* [Kind](https://kind.sigs.k8s.io)
-* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
-* [Helm](https://helm.sh/docs/intro/install/)
-
-This example uses [Kind](https://kind.sigs.k8s.io) to set up:
-
-1. A Kubernetes cluster
-1. An instance of [SeaweedFS](https://github.com/seaweedfs/seaweedfs/) for S3-compatible object storage
-1. An instance of [Cortex](https://cortexmetrics.io/) to receive metrics
-1. An instance of [Prometheus](https://prometheus.io/) to send metrics to Cortex
-1. An instance of [Grafana](https://grafana.com/) to visualize the metrics
-
-### Setup Kind
-
-```sh
-$ kind create cluster
-```
-
-### Configure Helm
-
-```sh
-$ helm repo add cortex-helm https://cortexproject.github.io/cortex-helm-chart
-$ helm repo add grafana https://grafana.github.io/helm-charts
-$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
-```
-
-### Instructions
-
-```sh
-$ git clone https://github.com/cortexproject/cortex.git
-$ cd cortex/docs/getting-started
-```
-
-#### Configure SeaweedFS (S3)
-
-```sh
-# Create a namespace
-$ kubectl create namespace cortex
-```
-
-```sh
-# We can emulate S3 with SeaweedFS
-$ kubectl --namespace cortex apply -f seaweedfs.yaml --wait --timeout=5m
-```
+**Requirements:**
+- Kind, kubectl, Helm
+- 8GB RAM, 20GB disk
 
-```sh
-# Wait for SeaweedFS to be ready
-$ sleep 5
-$ kubectl --namespace cortex wait --for=condition=ready pod -l app=seaweedfs --timeout=5m
-```
-
-```sh
-# Port-forward to SeaweedFS to create a bucket
-$ kubectl --namespace cortex port-forward svc/seaweedfs 8333 &
-```
-
-```sh
-# Create buckets in SeaweedFS
-$ for bucket in cortex-blocks cortex-ruler cortex-alertmanager; do
-    curl --aws-sigv4 "aws:amz:local:seaweedfs" --user "any:any" -X PUT http://localhost:8333/$bucket
-  done
-```
-
-#### Setup Cortex
-
-```sh
-# Deploy Cortex using the provided values file which configures
-# - blocks storage to use the seaweedfs service
-$ helm upgrade --install --version=2.4.0 --namespace cortex cortex cortex-helm/cortex -f cortex-values.yaml --wait
-```
-
-#### Setup Prometheus
-
-```sh
-# Deploy Prometheus to scrape metrics in the cluster and send them, via remote_write, to Cortex.
-$ helm upgrade --install --version=25.20.1 --namespace cortex prometheus prometheus-community/prometheus -f prometheus-values.yaml --wait
-```
-
-If everything is working correctly, Prometheus should be sending metrics that it is scraping to Cortex. Prometheus is
-configured to send metrics to Cortex via `remote_write`. Check out the `prometheus-config.yaml` file to see
-how this is configured.
-
-#### Setup Grafana
-
-```sh
-# Deploy Grafana to visualize the metrics that were sent to Cortex.
-$ helm upgrade --install --version=7.3.9 --namespace cortex grafana grafana/grafana -f grafana-values.yaml --wait
-```
-
-**Note**: This guide uses `grafana-values.yaml` with Helm to configure Grafana datasources. Alternatively, you can manually deploy Grafana with `grafana-datasource.yaml` which is specifically configured for Kubernetes/microservices mode with the correct `cortex-nginx` endpoints.
-
-```sh
-# Create dashboards for Cortex
-$ for dashboard in $(ls dashboards); do
-    basename=$(basename -s .json $dashboard)
-    cmname=grafana-dashboard-$basename
-    kubectl create --namespace cortex configmap $cmname --from-file=$dashboard=dashboards/$dashboard --save-config=true -o yaml --dry-run=client | kubectl apply -f -
-    kubectl patch --namespace cortex configmap $cmname -p '{"metadata":{"labels":{"grafana_dashboard":""}}}'
-  done
-
-```
-
-```sh
-# Port-forward to Grafana to visualize
-$ kubectl --namespace cortex port-forward deploy/grafana 3000 &
-```
-
-View the dashboards in [Grafana](http://localhost:3000/dashboards?tag=cortex).
-
-#### Configure Cortex Recording Rules and Alerting Rules (Optional)
-
-We can configure Cortex with [cortextool](https://github.com/cortexproject/cortex-tools/) to load [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/). This is optional, but it is helpful to see how Cortex can be configured to manage rules and alerts.
+[Get Started with Microservices Mode →](microservices.md)
 
-```sh
-# Port forward to the alertmanager to configure recording rules and alerts
-$ kubectl --namespace cortex port-forward svc/cortex-nginx 8080:80 &
-```
-
-```sh
-# Configure recording rules for the cortex tenant
-$ cortextool rules sync rules.yaml alerts.yaml --id cortex --address http://localhost:8080
-```
-
-#### Configure Cortex Alertmanager (Optional)
-
-Cortex also comes with a multi-tenant Alertmanager. Let's load configuration for it to be able to view them in Grafana.
-
-```sh
-# Configure alertmanager for the cortex tenant
-$ cortextool alertmanager load alertmanager-config.yaml --id cortex --address http://localhost:8080
-```
-
-You can configure Alertmanager in [Grafana as well](http://localhost:3000/alerting/notifications?search=&alertmanager=Cortex%20Alertmanager).
-
-There's a list of recording rules and alerts that should be visible in Grafana [here](http://localhost:3000/alerting/list?view=list&search=datasource:Cortex).
-
-#### Explore
-
-Grafana is configured to use Cortex as a data source. Grafana is also configured with [Cortex Dashboards](http://localhost:3000/dashboards?tag=cortex) to understand the state of the Cortex instance. The dashboards are generated from the cortex-jsonnet repository. There is a Makefile in the repository that can be used to update the dashboards.
-
-```sh
-# Update the dashboards (optional)
-$ make
-```
-
-If everything is working correctly, then the metrics seen in Grafana were successfully sent from Prometheus to Cortex
-via `remote_write`!
-
-Other things to explore:
-
-[Cortex](http://localhost:9009) - Administrative interface for Cortex
-
-```sh
-# Port forward to the ingester to see the administrative interface for Cortex
-$ kubectl --namespace cortex port-forward deploy/cortex-ingester 9009:8080 &
-```
+## Key Concepts
 
-- Try shutting down the ingester, and see how it affects metric ingestion.
-- Restart Cortex to bring the ingester back online, and see how Prometheus catches up.
-- Does it affect the querying of metrics in Grafana?
+Before you begin, it's helpful to understand these core concepts:
 
-[Prometheus](http://localhost:9090) - Prometheus instance that is sending metrics to Cortex
+- **Blocks Storage**: Cortex's storage engine based on Prometheus TSDB. Metrics are stored in 2-hour blocks in object storage.
+- **Multi-tenancy**: Cortex isolates metrics by tenant ID (sent via `X-Scope-OrgID` header). In these guides, we use `cortex` as the tenant ID.
+- **Remote Write**: Prometheus protocol for sending metrics to remote storage systems like Cortex.
+- **Components**: In microservices mode, Cortex runs as separate services (distributor, ingester, querier, etc.). In single binary mode, all run together.
 
-```sh
-# Port forward to Prometheus to see the metrics that are being scraped
-$ kubectl --namespace cortex port-forward deploy/prometheus-server 9090 &
-```
-- Try querying the metrics in Prometheus.
-- Are they the same as what you see in Cortex?
-
-[Grafana](http://localhost:3000) - Grafana instance that is visualizing the metrics.
+## Data Flow
 
-```sh
-# Port forward to Grafana to visualize
-$ kubectl --namespace cortex port-forward deploy/grafana 3000 &
 ```
-
-- Try creating a new dashboard and adding a new panel with a query to Cortex.
-
-### Clean up
-
-```sh
-# Remove the port-forwards
-$ killall kubectl
+Prometheus → remote_write → Cortex → Object Storage (S3)
+
+Grafana (queries via PromQL)
 ```
 
-```sh
-$ kind delete cluster
-```
+## Need Help?
 
+- **Documentation**: Explore the [Architecture guide](../architecture.md) to understand Cortex's design
+- **Community**: Join the [CNCF Slack #cortex channel](https://cloud-native.slack.com/archives/cortex)
+- **Issues**: Report problems on [GitHub](https://github.com/cortexproject/cortex/issues)
````

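The new `_index.md` stresses that Cortex isolates tenants by the `X-Scope-OrgID` header, and these guides use `cortex` as the tenant ID. A minimal sketch of what that means for any client talking to Cortex (the base URL is the single-binary default from the guide; the `tenant_query` helper is ours, for illustration only):

```python
# Sketch: tagging a query with a Cortex tenant ID via X-Scope-OrgID.
# Assumptions: Cortex listening on the guide's default localhost:9009,
# with the query API under the default /prometheus prefix.
from urllib.parse import urlencode
from urllib.request import Request

CORTEX_QUERY_URL = "http://localhost:9009/prometheus/api/v1/query"

def tenant_query(promql: str, tenant: str = "cortex") -> Request:
    """Build (but do not send) a tenant-scoped instant query request."""
    url = f"{CORTEX_QUERY_URL}?{urlencode({'query': promql})}"
    # HTTP header names are case-insensitive; urllib normalizes the
    # stored key to "X-scope-orgid".
    return Request(url, headers={"X-Scope-OrgID": tenant})

req = tenant_query("up")
print(req.full_url)  # ends with ?query=up
```

Without the header, a multi-tenant Cortex rejects the request, which is the first thing to check when queries that work in Prometheus return nothing from Cortex.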
docs/getting-started/grafana-values.yaml

Lines changed: 10 additions & 0 deletions
```diff
@@ -614,6 +614,16 @@ datasources:
         timeInterval: 15s
       secureJsonData:
         httpHeaderValue1: cortex
+    - name: Cortex Alertmanager
+      type: alertmanager
+      url: http://cortex-nginx/api/prom
+      access: proxy
+      editable: true
+      jsonData:
+        httpHeaderName1: X-Scope-OrgID
+        implementation: cortex
+      secureJsonData:
+        httpHeaderValue1: cortex
 #    - name: CloudWatch
 #      type: cloudwatch
 #      access: proxy
```
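In the datasource entry added above, Grafana provisioning pairs `httpHeaderName1` from `jsonData` with `httpHeaderValue1` from `secureJsonData`, so every request Grafana proxies to the Cortex Alertmanager carries `X-Scope-OrgID: cortex`. A rough illustration of that pairing (our own helper for explanation, not Grafana source code):

```python
# Sketch (not Grafana code): how httpHeaderNameN / httpHeaderValueN
# pairs in a provisioned datasource resolve to concrete HTTP headers.
def resolve_headers(json_data: dict, secure_json_data: dict) -> dict:
    headers = {}
    for key, header_name in json_data.items():
        if key.startswith("httpHeaderName"):
            n = key[len("httpHeaderName"):]  # the trailing index, e.g. "1"
            value = secure_json_data.get(f"httpHeaderValue{n}")
            if value is not None:
                headers[header_name] = value
    return headers

# The Cortex Alertmanager datasource added in this commit:
headers = resolve_headers(
    {"httpHeaderName1": "X-Scope-OrgID", "implementation": "cortex"},
    {"httpHeaderValue1": "cortex"},
)
print(headers)  # {'X-Scope-OrgID': 'cortex'}
```

Keeping the header value in `secureJsonData` matters because Grafana encrypts those fields at rest, whereas `jsonData` is stored in plain text.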
