Commit fb06b33

Added gRPC telemetry instrumentation example. (#163)

* Added gRPC telemetry instrumentation example.
* Added a more comprehensive signal handler that covers more signals, i.e. SIGINT, SIGTERM, SIGQUIT, and that works on Windows, Linux, and Mac.
* Formatted code with rustfmt and updated the README.
* Improved docs and code formatting.
* Swapped around the `async_trait` and `autometrics` attributes.

Signed-off-by: Marvin Hansen <[email protected]>
Co-authored-by: Mari <[email protected]>

1 parent 1c6ae63 commit fb06b33
File tree

9 files changed: +473 −0 lines changed

examples/README.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -14,6 +14,7 @@ cargo run --package example-{name of example}
 - [custom-metrics](./custom-metrics/) - Define your own custom metrics alongside the ones generated by autometrics (using any of the metrics collection crates)
 - [exemplars-tracing](./exemplars-tracing/) - Use fields from `tracing::Span`s as Prometheus exemplars
 - [opentelemetry-push](./opentelemetry-push/) - Push metrics to an OpenTelemetry Collector via the OTLP HTTP or gRPC protocol using the Autometrics provided interface
+- [grpc-http](./grpc-http/) - Instrument Rust gRPC services with metrics using Tonic, warp, and Autometrics.
 - [opentelemetry-push-custom](./opentelemetry-push-custom/) - Push metrics to an OpenTelemetry Collector via the OTLP gRPC protocol using custom options

 ## Full Example
```

examples/grpc-http/Cargo.toml

Lines changed: 16 additions & 0 deletions

```toml
[package]
name = "grpc-http"
version = "0.0.0"
publish = false
edition = "2021"

[dependencies]
autometrics = { path = "../../autometrics", features = ["prometheus-exporter"] }
prost = "0.12"
tokio = { version = "1", features = ["full"] }
tonic = "0.10"
tonic-health = "0.10"
warp = "0.3"

[build-dependencies]
tonic-build = "0.10"
```

examples/grpc-http/README.md

Lines changed: 153 additions & 0 deletions

# gRPC service built with Tonic, HTTP server built with warp, and instrumented with Autometrics

This code example has been adapted and modified from a blog post by Mies Hernandez van Leuffen: [Adding observability to Rust gRPC services using Tonic and Autometrics](https://autometrics.dev/blog/adding-observability-to-rust-grpc-services-using-tonic-and-autometrics).

## Overview

This example shows how to:

* Add observability to a gRPC service
* Add an HTTP service
* Start both the gRPC and the HTTP server
* Add a graceful shutdown to both servers
* Close a DB connection during the graceful shutdown
### Install the protobuf compiler

The protobuf compiler (protoc) compiles protocol buffers into Rust code. Cargo calls protoc automatically during the build process, but the build fails with an error when protoc is not installed, so make sure protoc is installed first.

The recommended installation for macOS is via [Homebrew](https://brew.sh/):

```bash
brew install protobuf
```

Check that the installation worked correctly:

```bash
protoc --version
```
## Local Observability Development

The easiest way to get up and running with this application is to clone the repo and get a local Prometheus setup using the [Autometrics CLI](https://github.com/autometrics-dev/am).

Read more about Autometrics in Rust [here](https://github.com/autometrics-dev/autometrics-rs) and the general docs [here](https://docs.autometrics.dev/).

### Install the Autometrics CLI

The recommended installation for macOS is via [Homebrew](https://brew.sh/):

```
brew install autometrics-dev/tap/am
```

Alternatively, you can download the latest version from the [releases page](https://github.com/autometrics-dev/am/releases).

Spin up local Prometheus and start scraping your application, which listens on port :8080:

```
am start :8080
```

If you now inspect the Autometrics explorer at `http://localhost:6789`, you will see your metrics. However, upon first start, all metrics are empty because no request has been sent yet.

Now you can test your endpoints to generate some traffic, then refresh the Autometrics explorer to see your metrics.
### Starting the Service

```bash
cargo run
```

Expected output:

```
Started gRPC server on port 50051
Started metrics on port 8080
Explore autometrics at http://127.0.0.1:6789
```
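While the service is running, you can also fetch the raw Prometheus metrics directly from the `/metrics` endpoint that warp serves on port 8080 (see `src/main.rs` below):

```bash
curl http://localhost:8080/metrics
```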
### Stopping the Service

You can stop the service either via Ctrl-C or by sending a SIGTERM signal to the process. This has been implemented for Windows, Linux, and Mac, and should also work on Docker and Kubernetes.

On Windows, Linux, or Mac, just hit Ctrl-C.

Alternatively, you can send a SIGTERM signal from another process using the kill command on Linux or Mac.

In a second terminal, run:

```bash
ps | grep grpc-http
```

Sample output:

```
73014 ttys002 0:00.25 /Users/.../autometrics-rs/target/debug/grpc-http
```

In this example, the service runs on PID 73014. Let's send a SIGTERM signal to shut down the service. Your system will return a different PID, so use that one instead.

```bash
kill 73014
```

Expected output:

```
Received SIGTERM
DB connection closed
gRPC shutdown complete
http shutdown complete
```
## Testing the gRPC endpoints

The easiest way to test the endpoints is with `grpcurl` (`brew install grpcurl`).

```bash
grpcurl -plaintext -import-path ./proto -proto job.proto -d '{"name": "Tonic"}' 'localhost:50051' job.JobRunner.SendJob
```

returns:

```
{
  "message": "Hello Tonic!"
}
```

Getting the list of jobs (currently hardcoded to return one job):

```bash
grpcurl -plaintext -import-path ./proto -proto job.proto -d '{}' 'localhost:50051' job.JobRunner.ListJobs
```

returns:

```
{
  "job": [
    {
      "id": 1,
      "name": "test"
    }
  ]
}
```
## Viewing the metrics

When you inspect the Autometrics explorer at `http://localhost:6789`, you will see your metrics and SLOs. The explorer shows four tabs:

1) Dashboard: Aggregated overview of all metrics
2) Functions: Detailed metrics for each instrumented API function
3) SLOs: Service Level Objectives for each instrumented API function
4) Alerts: Notifications of violated SLOs or any other anomaly

examples/grpc-http/build.rs

Lines changed: 4 additions & 0 deletions

```rust
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Compile proto/job.proto into Rust code before the crate is built.
    tonic_build::compile_protos("proto/job.proto")?;
    Ok(())
}
```

examples/grpc-http/proto/job.proto

Lines changed: 38 additions & 0 deletions

```proto
syntax = "proto3";
package job;

service JobRunner {
  rpc SendJob (JobRequest) returns (JobReply);
  rpc ListJobs (Empty) returns (JobList);
}

message Empty {}

message Job {
  int32 id = 1;
  string name = 2;

  enum Status {
    NOT_STARTED = 0;
    RUNNING = 1;
    FINISHED = 2;
  }
}

message JobRequest {
  string name = 1;
}

message JobReply {
  string message = 1;

  enum Status {
    NOT_STARTED = 0;
    RUNNING = 1;
    FINISHED = 2;
  }
}

message JobList {
  repeated Job job = 1;
}
```
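The commit touches 9 files, but `src/server.rs` (declared as `mod server;` in `main.rs` below, and the source of `MyJobRunner` and the generated `job` module) is not shown on this page. A minimal sketch of what it might look like follows; only the type, trait, and message names are taken from `main.rs` and `job.proto`, while the method bodies and error handling are assumptions:

```rust
// Hypothetical sketch of examples/grpc-http/src/server.rs (not shown in this diff).
use autometrics::autometrics;
use tonic::{Request, Response, Status};

use crate::db_manager::DBManager;

// Include the Rust code generated by tonic-build from proto/job.proto.
pub mod job {
    tonic::include_proto!("job");
}

use job::job_runner_server::JobRunner;
use job::{Empty, Job, JobList, JobReply, JobRequest};

#[derive(Debug)]
pub struct MyJobRunner {
    dbm: DBManager,
}

impl MyJobRunner {
    pub fn new(dbm: DBManager) -> Self {
        Self { dbm }
    }
}

// Per the commit message, the `async_trait` and `autometrics` attributes were
// swapped; here `#[autometrics]` sits above `#[tonic::async_trait]` so it wraps
// the async fns before async_trait desugars them.
#[autometrics]
#[tonic::async_trait]
impl JobRunner for MyJobRunner {
    async fn send_job(&self, request: Request<JobRequest>) -> Result<Response<JobReply>, Status> {
        // Placeholder write, mirroring the DBManager stub.
        self.dbm
            .write_into_table()
            .await
            .map_err(|_| Status::internal("DB write failed"))?;

        Ok(Response::new(JobReply {
            message: format!("Hello {}!", request.into_inner().name),
        }))
    }

    async fn list_jobs(&self, _request: Request<Empty>) -> Result<Response<JobList>, Status> {
        // Placeholder query; the hardcoded job matches the README's sample output.
        self.dbm
            .query_table()
            .await
            .map_err(|_| Status::internal("DB query failed"))?;

        Ok(Response::new(JobList {
            job: vec![Job {
                id: 1,
                name: "test".to_string(),
            }],
        }))
    }
}
```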
examples/grpc-http/src/db_manager.rs

Lines changed: 36 additions & 0 deletions

```rust
use std::fmt::Error;

// Clone is required for the `tokio::signal::unix::SignalKind::terminate()` handler.
// Sometimes you can't derive Clone; then you have to wrap the DBManager in an Arc or Arc<Mutex>.
#[derive(Debug, Default, Clone, Copy)]
pub struct DBManager {
    // Put your DB client here. For example:
    // db: rusqlite::Connection,
}

impl DBManager {
    pub fn new() -> DBManager {
        DBManager {
            // Put your database client here. For example:
            // db: rusqlite::Connection::open(":memory:").unwrap(),
        }
    }

    pub async fn connect_to_db(&self) -> Result<(), Error> {
        Ok(())
    }

    pub async fn close_db(&self) -> Result<(), Error> {
        Ok(())
    }

    pub async fn query_table(&self) -> Result<(), Error> {
        println!("Query table");
        Ok(())
    }

    pub async fn write_into_table(&self) -> Result<(), Error> {
        println!("Write into table");
        Ok(())
    }
}
```
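The comment in `DBManager` mentions the case where the client type cannot derive `Clone`. A minimal sketch of that `Arc<Mutex>` variant, with `Connection` and `SharedDBManager` as hypothetical stand-ins (not part of this commit):

```rust
// Hypothetical variant: share a non-Clone DB connection across tasks.
use std::sync::Arc;
use tokio::sync::Mutex;

// Stand-in for a real DB connection type that does not implement Clone.
pub struct Connection;

#[derive(Clone)]
pub struct SharedDBManager {
    // Arc makes the handle cheaply cloneable; the Mutex serializes access.
    db: Arc<Mutex<Connection>>,
}

impl SharedDBManager {
    pub fn new(conn: Connection) -> Self {
        Self {
            db: Arc::new(Mutex::new(conn)),
        }
    }

    pub async fn query_table(&self) -> Result<(), std::fmt::Error> {
        // Lock the connection for the duration of the query.
        let _conn = self.db.lock().await;
        println!("Query table");
        Ok(())
    }
}
```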

examples/grpc-http/src/main.rs

Lines changed: 82 additions & 0 deletions

```rust
use std::net::SocketAddr;
use tonic::transport::Server as TonicServer;
use warp::Filter;

use autometrics::prometheus_exporter;
use server::MyJobRunner;

use crate::server::job::job_runner_server::JobRunnerServer;

mod db_manager;
mod server;
mod shutdown;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Set up the Prometheus metrics exporter
    prometheus_exporter::init();

    // Set up two different ports for gRPC and HTTP
    let grpc_addr = "127.0.0.1:50051"
        .parse()
        .expect("Failed to parse gRPC address");
    let web_addr: SocketAddr = "127.0.0.1:8080"
        .parse()
        .expect("Failed to parse web address");

    // Build a new DBManager that connects to the database
    let dbm = db_manager::DBManager::new();
    // Connect to the database
    dbm.connect_to_db()
        .await
        .expect("Failed to connect to database");

    // gRPC server with DBManager
    let grpc_svc = JobRunnerServer::new(MyJobRunner::new(dbm));

    // SIGINT signal handler that closes the DB connection upon shutdown
    let signal = shutdown::grpc_sigint(dbm.clone());

    // Construct the health service for the gRPC server
    let (mut health_reporter, health_svc) = tonic_health::server::health_reporter();
    health_reporter
        .set_serving::<JobRunnerServer<MyJobRunner>>()
        .await;

    // Build the gRPC server with the health service and SIGINT signal handler
    let grpc_server = TonicServer::builder()
        .add_service(grpc_svc)
        .add_service(health_svc)
        .serve_with_shutdown(grpc_addr, signal);

    // Build the HTTP /metrics endpoint
    let routes = warp::get()
        .and(warp::path("metrics"))
        .map(|| prometheus_exporter::encode_http_response());

    // Build the HTTP web server
    let (_, web_server) =
        warp::serve(routes).bind_with_graceful_shutdown(web_addr, shutdown::http_sigint());

    // Create a handle for each server
    // https://github.com/hyperium/tonic/discussions/740
    let grpc_handle = tokio::spawn(grpc_server);
    let grpc_web_handle = tokio::spawn(web_server);

    // Join all servers together and start the main loop
    print_start(&web_addr, &grpc_addr);
    let _ = tokio::try_join!(grpc_handle, grpc_web_handle)
        .expect("Failed to start gRPC and http server");

    Ok(())
}

fn print_start(web_addr: &SocketAddr, grpc_addr: &SocketAddr) {
    println!();
    println!("Started gRPC server on port {:?}", grpc_addr.port());
    println!("Started metrics on port {:?}", web_addr.port());
    println!("Stop service with Ctrl+C");
    println!();
    println!("Explore autometrics at http://127.0.0.1:6789");
    println!();
}
```
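The remaining file in the commit, `src/shutdown.rs` (providing the `grpc_sigint` and `http_sigint` futures used above), is also not shown on this page. A minimal cross-platform sketch of what such a module could look like, built on `tokio::signal`; the structure and log lines are assumptions, not the committed code:

```rust
// Hypothetical sketch of examples/grpc-http/src/shutdown.rs (not shown in this diff).
use crate::db_manager::DBManager;

// Wait for Ctrl-C on all platforms, plus SIGTERM and SIGQUIT on Unix.
// In the real module the per-signal handling may differ; this only illustrates
// the cross-platform pattern described in the commit message.
async fn wait_for_signal() {
    #[cfg(unix)]
    {
        use tokio::signal::unix::{signal, SignalKind};
        let mut sigterm = signal(SignalKind::terminate()).expect("Failed to install SIGTERM handler");
        let mut sigquit = signal(SignalKind::quit()).expect("Failed to install SIGQUIT handler");
        tokio::select! {
            _ = tokio::signal::ctrl_c() => println!("Received SIGINT"),
            _ = sigterm.recv() => println!("Received SIGTERM"),
            _ = sigquit.recv() => println!("Received SIGQUIT"),
        }
    }
    #[cfg(not(unix))]
    {
        // On Windows, only Ctrl-C is exposed through tokio::signal.
        tokio::signal::ctrl_c().await.expect("Failed to install Ctrl-C handler");
        println!("Received SIGINT");
    }
}

// Shutdown future for the gRPC server: waits for a signal, then closes the DB connection.
pub async fn grpc_sigint(dbm: DBManager) {
    wait_for_signal().await;
    dbm.close_db().await.expect("Failed to close DB connection");
    println!("DB connection closed");
    println!("gRPC shutdown complete");
}

// Shutdown future for the warp HTTP server.
pub async fn http_sigint() {
    wait_for_signal().await;
    println!("http shutdown complete");
}
```

Both futures install their own signal listeners, which tokio supports, so the gRPC and HTTP servers shut down independently when the same signal arrives.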
