autometrics-rs

Easily add metrics to your code that actually help you spot and debug issues in production. Built on Prometheus and OpenTelemetry.
Autometrics is an open source framework that makes it easy to understand the health and performance of your code in production.

The Rust library provides a macro that makes it trivial to track the most useful metrics for any function: request rate, error rate, and latency. It then generates Prometheus queries to help you understand the data collected and inserts links to the live charts directly into each function's doc comments.

Autometrics also provides Grafana dashboards to get an overview of instrumented functions and enables you to create powerful alerts based on Service-Level Objectives (SLOs) directly in your source code.

use autometrics::autometrics;

#[autometrics]
pub async fn create_user() {
  // Now this function will have metrics!
}

Here is a demo of jumping from function docs to live Prometheus charts:

(Demo video: autometrics.mp4)

Features

  • ✨ #[autometrics] macro instruments any function or impl block to track the most useful metrics
  • πŸ’‘ Writes Prometheus queries so you can understand the data generated without knowing PromQL
  • πŸ”— Injects links to live Prometheus charts directly into each function's doc comments
  • 🚨 Define alerts using SLO best practices directly in your source code
  • πŸ“Š Grafana dashboards work out of the box to visualize the performance of instrumented functions & SLOs
  • βš™οΈ Configurable metric collection library (opentelemetry, prometheus, or metrics)
  • ⚑ Minimal runtime overhead

See Why Autometrics? for more details on the ideas behind autometrics.

Examples

To see autometrics in action:

  1. Install Prometheus locally
  2. Run the complete example:
cargo run -p example-full-api
  3. Hover over the function names to see the generated query links (like in the image above) and try clicking on them to go straight to that Prometheus chart.

See the other examples for details on how to use the various features and integrations.

Or run the example in Gitpod:

Open in Gitpod

Exporting Prometheus Metrics

Prometheus works by polling an HTTP endpoint on your server to collect the current values of all the metrics it has in memory.

For projects not currently using Prometheus metrics

Autometrics includes optional functions to help collect metrics and expose them in the format Prometheus scrapes.

In your Cargo.toml file, enable the optional prometheus-exporter feature:

autometrics = { version = "*", features = ["prometheus-exporter"] }

Then, call the global_metrics_exporter function in your main function:

pub fn main() {
  let _exporter = autometrics::global_metrics_exporter();
  // ...
}

And create a route on your API (probably mounted under /metrics) that returns the following:

pub fn get_metrics() -> (http::StatusCode, String) {
  match autometrics::encode_global_metrics() {
    Ok(metrics) => (http::StatusCode::OK, metrics),
    Err(err) => (http::StatusCode::INTERNAL_SERVER_ERROR, format!("{:?}", err))
  }
}
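
Putting these pieces together, here is a minimal sketch using the axum web framework (the framework choice, the tokio runtime, and the server setup are assumptions; any HTTP framework works the same way):

use autometrics::{encode_global_metrics, global_metrics_exporter};
use axum::{http::StatusCode, routing::get, Router};

// Handler for the /metrics endpoint that Prometheus will scrape
async fn get_metrics() -> (StatusCode, String) {
  match encode_global_metrics() {
    Ok(metrics) => (StatusCode::OK, metrics),
    Err(err) => (StatusCode::INTERNAL_SERVER_ERROR, format!("{:?}", err)),
  }
}

#[tokio::main]
async fn main() {
  // Initialize the exporter before any instrumented functions are called
  let _exporter = global_metrics_exporter();
  let app = Router::new().route("/metrics", get(get_metrics));
  // Serve `app` with your usual axum setup (e.g. bound to 0.0.0.0:3000)
}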

For projects already using custom Prometheus metrics

Autometrics uses existing metrics libraries (see below) to produce and collect metrics.

If you are already using one of these to collect metrics, simply configure autometrics to use the same library and the metrics it produces will be exported alongside yours. You do not need to use the Prometheus exporter functions this library provides and you do not need a separate endpoint for autometrics' metrics.
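
For instance, if your project already uses the prometheus crate, enable autometrics' prometheus feature and keep serving metrics from your existing endpoint. The sketch below assumes autometrics registers its metrics in the prometheus crate's default registry (check the crate docs for the version you are using):

use prometheus::{Encoder, TextEncoder};

// Render all metrics from the default registry, including those produced by autometrics
fn render_metrics() -> String {
  let metric_families = prometheus::gather();
  let mut buffer = Vec::new();
  TextEncoder::new().encode(&metric_families, &mut buffer).unwrap();
  String::from_utf8(buffer).unwrap()
}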

Dashboards

Autometrics provides Grafana dashboards that will work for any project instrumented with the library.

Alerts / SLOs

Autometrics makes it easy to add Prometheus alerts using Service-Level Objectives (SLOs) to a function or group of functions.

This works using pre-defined Prometheus alerting rules, which can be loaded via the rule_files field in your Prometheus configuration. By default, most of the recording rules are dormant. They are enabled by specific metric labels that can be automatically attached by autometrics.

To use autometrics SLOs and alerts, create one or more Objectives based on the success rate and/or latency of the relevant function(s), as shown below. The Objective can then be passed as an argument to the autometrics macro to include a given function in that objective.

use autometrics::autometrics;
use autometrics::objectives::{Objective, ObjectiveLatency, ObjectivePercentile};

const API_SLO: Objective = Objective::new("api")
    .success_rate(ObjectivePercentile::P99_9)
    .latency(ObjectiveLatency::Ms250, ObjectivePercentile::P99);

#[autometrics(objective = API_SLO)]
pub fn api_handler() {
  // ...
}
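
To include a group of functions in the same objective, annotate each of them with the same Objective constant (the handler names below are only illustrative):

#[autometrics(objective = API_SLO)]
pub fn list_users() {
  // ...
}

#[autometrics(objective = API_SLO)]
pub fn delete_user() {
  // ...
}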

Once you've added objectives to your code, you can use the Autometrics Service-Level Objectives (SLO) Dashboard to visualize the current status of your objective(s).

Configuring Autometrics

Custom Prometheus URL

Autometrics creates Prometheus query links that point to http://localhost:9090 by default, but you can configure a custom URL by setting an environment variable in your build.rs file:

// build.rs

fn main() {
  let prometheus_url = "https://your-prometheus-url.example";
  println!("cargo:rustc-env=PROMETHEUS_URL={prometheus_url}");
}

When using Rust Analyzer, you may need to reload the workspace in order for URL changes to take effect.

The Prometheus URL is only included in documentation comments so changing it will have no impact on the final compiled binary.

Feature flags

  • prometheus-exporter - exports a Prometheus metrics collector and exporter (compatible with any of the Metrics Libraries)
  • custom-objective-latency - by default, Autometrics only supports a fixed set of latency thresholds for objectives. Enable this to use custom latency thresholds. Note, however, that the custom latency must match one of the buckets configured for your histogram or the alerts will not work. This is not currently compatible with the prometheus or prometheus-exporter feature.
  • custom-objective-percentile - by default, Autometrics only supports a fixed set of objective percentiles. Enable this to use a custom percentile. Note, however, that using custom percentiles requires generating a different recording and alerting rules file using the CLI + Sloth (see here).

Metrics Libraries

Configure the crate that autometrics will use to produce metrics by using one of the following feature flags:

  • opentelemetry (enabled by default) - use the opentelemetry crate for producing metrics
  • metrics - use the metrics crate for producing metrics
  • prometheus - use the prometheus crate for producing metrics
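
For example, here is a minimal Cargo.toml sketch for switching from the default opentelemetry backend to the metrics crate (depending on your autometrics version, you may need to disable default features explicitly, as shown):

autometrics = { version = "*", default-features = false, features = ["metrics"] }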
