Restructure and enhance getting started documentation (#7113)
* Restructure and enhance getting started documentation
Split the monolithic getting started guide into three focused documents:
- Main index with overview and guide selection
- Separate single-binary mode guide
- Separate microservices mode guide
Key improvements:
- Clear comparison table to help users choose the right deployment mode
- Step-by-step instructions with verification commands
- Hands-on experiments to learn Cortex behavior
- Comprehensive troubleshooting sections
- Explanations of multi-tenancy and key concepts
- Clear next steps and additional resources
Signed-off-by: Charlie Le <[email protected]>
* Update docs/getting-started/microservices.md
Co-authored-by: Friedrich Gonzalez <[email protected]>
Signed-off-by: Charlie Le <[email protected]>
* Update installation instructions for cortextool
Signed-off-by: Charlie Le <[email protected]>
---------
Signed-off-by: Charlie Le <[email protected]>
Co-authored-by: Friedrich Gonzalez <[email protected]>
- Cortex is a powerful platform software that can be run in two modes: as a single binary or as multiple
- independent [microservices](../architecture.md).
+ Welcome to Cortex! This guide will help you get a Cortex environment up and running quickly.

- There are two guides in this section:
+ ## What is Cortex?

- 1. [Single Binary Mode with Docker Compose](#single-binary-mode)
- 2. [Microservice Mode with KIND](#microservice-mode)
+ Cortex is a horizontally scalable, highly available, multi-tenant, long-term storage solution for Prometheus and OpenTelemetry Metrics. It can be run in two modes:

- The single binary mode is useful for testing and development, while the microservice mode is useful for production.
+ - **Single Binary Mode**: All components run in a single process - ideal for testing, development, and learning
+ - **Microservices Mode**: Components run as independent services - designed for production deployments

- Both guides will help you get started with Cortex using [blocks storage](../blocks-storage/_index.md).
+ Both deployment modes use [blocks storage](../blocks-storage/_index.md), which is based on Prometheus TSDB and stores data in object storage (S3, GCS, Azure, or compatible services).

- ## Single Binary Mode
+ ## Choose Your Guide

- This guide will help you get started with Cortex in single-binary mode using
- [blocks storage](../blocks-storage/_index.md).
+ | Mode | Time | Use Case | Guide |
+ |------|------|----------|-------|
+ | **Single Binary** | ~15 min | Learning, Development, Testing | [Start Here →](single-binary.md) |
+ | **Microservices** | ~30 min | Production-like Environment, Kubernetes | [Start Here →](microservices.md) |

- ### Prerequisites
+ ### Single Binary Mode

- Cortex can be configured to use local storage or cloud storage (S3, GCS, and Azure). It can also utilize external
- Memcached and Redis instances for caching. This guide will focus on running Cortex as a single process with no
- dependencies.
+ Perfect for your first experience with Cortex. Runs all components in one process with minimal dependencies.
**Note**: This guide uses `grafana-datasource-docker.yaml` which is specifically configured for the single binary Docker Compose deployment. For Kubernetes/microservices mode, use `grafana-datasource.yaml` instead.
- ##### Start the services

- ```sh
- $ docker compose up -d
- ```
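Once the services are up, it is worth confirming that each one is healthy before continuing. The following is a small verification sketch rather than part of the original guide; it assumes the ports listed below and Cortex's default `/ready` endpoint.

```sh
# List the Compose services and their state
$ docker compose ps

# Cortex reports ready once all of its internal modules have started
$ curl -s http://localhost:9009/ready

# Prometheus readiness endpoint
$ curl -s http://localhost:9090/-/ready
```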
- We can now access the following services:

- * [Cortex](http://localhost:9009)
- * [Prometheus](http://localhost:9090)
- * [Grafana](http://localhost:3000)
- * [SeaweedFS](http://localhost:8333)

- If everything is working correctly, Prometheus should be sending metrics that it is scraping to Cortex. Prometheus is
- configured to send metrics to Cortex via `remote_write`. Check out the `prometheus-config.yaml` file to see
- how this is configured.
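To confirm that `remote_write` is actually delivering samples, you can query Cortex's Prometheus-compatible API directly. This is an illustrative check, not part of the original walkthrough; it assumes the default `/prometheus` HTTP prefix and the `cortex` tenant ID used in these guides.

```sh
# Query Cortex for the `up` series that Prometheus has pushed.
# The X-Scope-OrgID header selects the tenant; `cortex` is assumed here.
$ curl -s -H 'X-Scope-OrgID: cortex' \
    'http://localhost:9009/prometheus/api/v1/query?query=up'
```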
- #### Configure Cortex Recording Rules and Alerting Rules

- We can configure Cortex with [cortextool](https://github.com/cortexproject/cortex-tools/) to load [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/). This is optional, but it is helpful to see how Cortex can be configured to manage rules and alerts.

- ```sh
- # Configure recording rules for the Cortex tenant (optional)
- You can configure Alertmanager in [Grafana as well](http://localhost:3000/alerting/notifications?search=&alertmanager=Cortex%20Alertmanager).

- There's a list of recording rules and alerts that should be visible in Grafana [here](http://localhost:3000/alerting/list?view=list&search=datasource:Cortex).
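As a rough sketch of the kind of cortextool commands this step runs (the file names, address, and tenant ID below are assumptions, not the guide's exact invocations):

```sh
# Load recording/alerting rules for the `cortex` tenant
$ cortextool rules load ./rules.yaml --address=http://localhost:9009 --id=cortex

# Load an Alertmanager configuration for the same tenant
$ cortextool alertmanager load ./alertmanager-config.yaml --address=http://localhost:9009 --id=cortex
```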
- #### Explore

- Grafana is configured to use Cortex as a data source. Grafana is also configured with [Cortex Dashboards](http://localhost:3000/dashboards?tag=cortex) to understand the state of the Cortex instance. The dashboards are generated from the cortex-jsonnet repository. There is a Makefile in the repository that can be used to update the dashboards.

- ```sh
- # Update the dashboards (optional)
- $ make
- ```

- If everything is working correctly, then the metrics seen in Grafana were successfully sent from Prometheus to Cortex
- via `remote_write`!

- Other things to explore:

- - [Cortex](http://localhost:9009) - Administrative interface for Cortex
-   - Try shutting down the ingester, and see how it affects metric ingestion.
-   - Restart Cortex to bring the ingester back online, and see how Prometheus catches up.
-   - Does it affect the querying of metrics in Grafana?
- - [Prometheus](http://localhost:9090) - Prometheus instance that is sending metrics to Cortex
-   - Try querying the metrics in Prometheus.
-   - Are they the same as what you see in Cortex?
- - [Grafana](http://localhost:3000) - Grafana instance that is visualizing the metrics.
-   - Try creating a new dashboard and adding a new panel with a query to Cortex.
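For the ingester experiment suggested above, one concrete way to try it in this Docker Compose setup is sketched below; the `cortex` service name is an assumption about the compose file.

```sh
# Stop the Cortex container; Prometheus remote_write will start retrying
$ docker compose stop cortex

# Queries against Cortex should now fail
$ curl -s http://localhost:9009/ready

# Bring Cortex back online and let Prometheus catch up
$ docker compose start cortex
```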
- ### Clean up

- ```sh
- $ docker compose down -v
- ```

- ## Microservice Mode

- Now that you have Cortex running as a single instance, let's explore how to run Cortex in microservice mode.
**Note**: This guide uses `grafana-values.yaml` with Helm to configure Grafana datasources. Alternatively, you can manually deploy Grafana with `grafana-datasource.yaml` which is specifically configured for Kubernetes/microservices mode with the correct `cortex-nginx` endpoints.
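For orientation, the microservices guide drives everything through KIND and Helm. A minimal sketch of standing up a cluster and the Cortex chart might look like the following; the cluster name, release name, and chart repository alias are assumptions, and the full steps live in the guide itself.

```sh
# Create a local Kubernetes cluster with KIND
$ kind create cluster --name cortex

# Add the Cortex Helm chart repository and install the chart
$ helm repo add cortex-helm https://cortexproject.github.io/cortex-helm-chart
$ helm repo update
$ helm install cortex cortex-helm/cortex
```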
- View the dashboards in [Grafana](http://localhost:3000/dashboards?tag=cortex).

- #### Configure Cortex Recording Rules and Alerting Rules (Optional)

- We can configure Cortex with [cortextool](https://github.com/cortexproject/cortex-tools/) to load [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/). This is optional, but it is helpful to see how Cortex can be configured to manage rules and alerts.
+ [Get Started with Microservices Mode →](microservices.md)

- ```sh
- # Port forward to the alertmanager to configure recording rules and alerts
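# (Sketch only: the service name, port, and config file below are assumptions based on
#  a typical cortex-helm-chart release; adjust them to match your deployment.)
$ kubectl port-forward svc/cortex-alertmanager 8080:8080 &
$ cortextool alertmanager load ./alertmanager-config.yaml --address=http://localhost:8080 --id=cortex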
- You can configure Alertmanager in [Grafana as well](http://localhost:3000/alerting/notifications?search=&alertmanager=Cortex%20Alertmanager).

- There's a list of recording rules and alerts that should be visible in Grafana [here](http://localhost:3000/alerting/list?view=list&search=datasource:Cortex).

- #### Explore

- Grafana is configured to use Cortex as a data source. Grafana is also configured with [Cortex Dashboards](http://localhost:3000/dashboards?tag=cortex) to understand the state of the Cortex instance. The dashboards are generated from the cortex-jsonnet repository. There is a Makefile in the repository that can be used to update the dashboards.

- ```sh
- # Update the dashboards (optional)
- $ make
- ```

- If everything is working correctly, then the metrics seen in Grafana were successfully sent from Prometheus to Cortex
- via `remote_write`!

- Other things to explore:

- [Cortex](http://localhost:9009) - Administrative interface for Cortex

- ```sh
- # Port forward to the ingester to see the administrative interface for Cortex
- - Try shutting down the ingester, and see how it affects metric ingestion.
- - Restart Cortex to bring the ingester back online, and see how Prometheus catches up.
- - Does it affect the querying of metrics in Grafana?
+ Before you begin, it's helpful to understand these core concepts:

- [Prometheus](http://localhost:9090) - Prometheus instance that is sending metrics to Cortex
+ - **Blocks Storage**: Cortex's storage engine based on Prometheus TSDB. Metrics are stored in 2-hour blocks in object storage.
+ - **Multi-tenancy**: Cortex isolates metrics by tenant ID (sent via `X-Scope-OrgID` header). In these guides, we use `cortex` as the tenant ID.
+ - **Remote Write**: Prometheus protocol for sending metrics to remote storage systems like Cortex.
+ - **Components**: In microservices mode, Cortex runs as separate services (distributor, ingester, querier, etc.). In single binary mode, all run together.
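The multi-tenancy model described above is easy to observe in practice: the same query issued under a different `X-Scope-OrgID` returns that tenant's own (empty) view of the data. A small illustration, assuming the single-binary endpoint and the default `/prometheus` HTTP prefix:

```sh
# Series pushed by Prometheus are visible under the `cortex` tenant...
$ curl -s -H 'X-Scope-OrgID: cortex' \
    'http://localhost:9009/prometheus/api/v1/label/__name__/values'

# ...but not under an unrelated tenant ID (`team-b` is just an example)
$ curl -s -H 'X-Scope-OrgID: team-b' \
    'http://localhost:9009/prometheus/api/v1/label/__name__/values'
```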
- ```sh
- # Port forward to Prometheus to see the metrics that are being scraped