
Commit af11312

Automated editorial review
1 parent c03a566 commit af11312

6 files changed: +87 -87 lines changed


modules/ROOT/pages/distributed_tracing.adoc

Lines changed: 3 additions & 3 deletions
@@ -51,7 +51,7 @@ Span tags should apply to the _whole_ span. There is a list available at [semant
 
 === Logs
 
-**Logs** are key:value pairs that are useful for capturing _timed_ log messages and other debugging or informational output from the application itself. Logs may be useful for documenting a specific moment or event within the span (in contrast to tags which should apply to the span regardless of time).
+**Logs** are key:value pairs that are useful for capturing _timed_ log messages and other debugging or informational output from the application itself. Logs may be useful for documenting a specific moment or event within the span (in contrast to tags that should apply to the span regardless of time).
 
 === Baggage Items
 
@@ -76,7 +76,7 @@ Different `Tracer` implementations vary in how and what parameters they receive
 
 Once a `Tracer` instance is obtained, it can be used to manually create `Span`, or pass it to existing instrumentation for frameworks and libraries.
 
-In order to not force the user to keep around a `Tracer`, the `io.opentracing.util` artifact includes a helper `GlobalTracer` class implementing the `io.opentracing.Tracer` interface, which, as the name implies, acts as as a global instance that can be used from anywhere. It works by forwarding all operations to another underlying `Tracer`, that will get registered at some future point.
+In order to not force the user to keep around a `Tracer`, the `io.opentracing.util` artifact includes a helper `GlobalTracer` class implementing the `io.opentracing.Tracer` interface, which, as the name implies, acts as a global instance that can be used from anywhere. It works by forwarding all operations to another underlying `Tracer`, that will get registered at some future point.
 
 By default, the underlying `Tracer` is a `no-nop` implementation.
 
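For illustration, a minimal sketch of registering and using the global tracer. It assumes the `GlobalTracer.registerIfAbsent` method (older `opentracing-util` versions expose `register` instead) and uses the Jaeger client purely as an example backend:

[source, java]
----
import io.opentracing.Tracer;
import io.opentracing.util.GlobalTracer;

public class TracingBootstrap {
    public static void main(String[] args) {
        // Build a concrete tracer; Jaeger is shown here only as an assumed example backend.
        Tracer tracer = io.jaegertracing.Configuration.fromEnv("my-service").getTracer();

        // Register it once; until then, GlobalTracer.get() forwards to a no-op tracer.
        GlobalTracer.registerIfAbsent(tracer);

        // Anywhere else in the code base, the same global instance can be retrieved.
        GlobalTracer.get().buildSpan("startup").start().finish();
    }
}
----
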
@@ -94,7 +94,7 @@ Another type of relationship is the `FollowsFrom` and is used in special cases w
 === Propagating a Trace with Inject/Extract
 
 In order to trace across process boundaries in distributed systems, services need to be able to continue the trace injected by the client that sent each request. OpenTracing allows this to happen by providing inject and extract methods that encode a span's context into a carrier.
-The `inject` method allows for the `SpanContext` to be passed on to a carrier. For example, passing the trace information into the client's request so that the server you send it to can continue the trace. The `extract` method does the exact opposite. It extract the `SpanContext` from the carrier. For example, if there was an active request on the client side, the developer must extract the `SpanContext` using the `io.opentracing.Tracer.extract` method.
+The `inject` method allows for the `SpanContext` to be passed on to a carrier. For example, passing the trace information into the client's request so that the server you send it to can continue the trace. The `extract` method does the exact opposite. It extracts the `SpanContext` from the carrier. For example, if there was an active request on the client side, the developer must extract the `SpanContext` using the `io.opentracing.Tracer.extract` method.
 
 
 image::Extract.png[Trace Propagation]
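
For illustration, a minimal sketch of inject and extract over HTTP headers. It assumes the `TextMapAdapter` carrier from recent opentracing-java versions (older versions split this into `TextMapInjectAdapter` and `TextMapExtractAdapter`):

[source, java]
----
import java.util.HashMap;
import java.util.Map;

import io.opentracing.Span;
import io.opentracing.SpanContext;
import io.opentracing.Tracer;
import io.opentracing.propagation.Format;
import io.opentracing.propagation.TextMapAdapter;
import io.opentracing.util.GlobalTracer;

public class PropagationSketch {
    // Client side: encode the active span's context into outgoing HTTP headers.
    static Map<String, String> injectHeaders(Span span) {
        Tracer tracer = GlobalTracer.get();
        Map<String, String> headers = new HashMap<>();
        tracer.inject(span.context(), Format.Builtin.HTTP_HEADERS, new TextMapAdapter(headers));
        return headers;
    }

    // Server side: rebuild the caller's context from incoming headers and continue the trace.
    static Span extractAndStart(Map<String, String> incomingHeaders) {
        Tracer tracer = GlobalTracer.get();
        SpanContext parent = tracer.extract(Format.Builtin.HTTP_HEADERS, new TextMapAdapter(incomingHeaders));
        return tracer.buildSpan("handle-request").asChildOf(parent).start();
    }
}
----
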

modules/ROOT/pages/index.adoc

Lines changed: 2 additions & 2 deletions
@@ -24,7 +24,7 @@ Learn how to:
 
 Microservices provides a powerful architecture, but not without its own challenges, especially with regards to debugging and observing distributed transactions across complex networks — simply because there are no in-memory calls or stack traces to do so.
 
-This is where distributed tracing comes into picture. Distributed tracing provides a solution for describing and analyzing the cross-process transactions. Some of the uses cases of distributed tracing as described in [Google’s Dapper paper](https://ai.google/research/pubs/pub36356) include anomaly detection, diagnosing steady state problems, distributed profiling, resource attribution and workload modeling of microservices.
+This is where distributed tracing comes into picture. Distributed tracing provides a solution for describing and analyzing the cross-process transactions. Some of the uses cases of distributed tracing as described in [Google’s Dapper paper](https://ai.google/research/pubs/pub36356) include anomaly detection, diagnosing steady-state problems, distributed profiling, resource attribution and workload modeling of microservices.
 
 == Distributed Tracing: A Mental Model
 Most mental models for tracing descend from Google’s Dapper paper. OpenTracing uses similar nouns and verbs.
@@ -55,7 +55,7 @@ These three components have different requirements and drive the design of the D
 - Analysis system: A database and interactive UI for working with the trace data.
 
 ## How does OpenTracing fit into this?
-The OpenTracing API provides a standard, vendor neutral framework for instrumentation. This means that if a developer wants to try out a different distributed tracing system, then instead of repeating the whole instrumentation process for the new distributed tracing system, the developer can simply change the configuration of the Tracer.
+The OpenTracing API provides a standard, vendor-neutral framework for instrumentation. This means that if a developer wants to try out a different distributed tracing system, then instead of repeating the whole instrumentation process for the new distributed tracing system, the developer can simply change the configuration of the Tracer.
 
 
 NOTE: Content extracted from http://opentracing.io

modules/ROOT/pages/lab-jaeger-java.adoc

Lines changed: 26 additions & 26 deletions
@@ -66,7 +66,7 @@ Hello Carlos!
 
 == Add client libraries
 
-. Add the tracing client library for the java `service-a`, edit the file `service-a/pom.xml` and add the dependenceies for `opentracing-api`, `opentracing-spring-cloud-starter`, and `jaeger-client`
+. Add the tracing client library for the java `service-a`, edit the file `service-a/pom.xml` and add the dependencies for `opentracing-api`, `opentracing-spring-cloud-starter`, and `jaeger-client`
 The `pom.xml` section should look like the following:
 +
 [source, xml]
@@ -91,7 +91,7 @@ The `pom.xml` section should look like the following:
 ----
 
 [# tracing-every-http-request]
-== Tracing every http request
+== Tracing every HTTP request
 
 . Add A Bean to initialized the Tracer in the main Class in the file `src/main/java/com/example/servicea/DemoApplication.java`
 +
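
For illustration, a minimal sketch of what that bean could look like, assuming the `jaeger-client` `Configuration.fromEnv` helper; the lab's actual initialization code may differ:

[source, java]
----
import io.opentracing.Tracer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    // Expose a Jaeger-backed Tracer bean; opentracing-spring-cloud-starter
    // picks it up and traces every incoming and outgoing HTTP request.
    @Bean
    public Tracer tracer() {
        // fromEnv also honours JAEGER_* environment variables (agent host, sampler, ...).
        return io.jaegertracing.Configuration.fromEnv("service-a").getTracer();
    }
}
----
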
@@ -142,7 +142,7 @@ open http://localhost:16686/jaeger
 +
 image::java-service-a-find-trace.png[]
 
-. Click on one of the traces, then expand the trace's `Tags` and `Logs`. You should see information about the http request such as `http.method` set to `GET` and `http.status_code` set to `200`. The Logs section have two logs one with `preHandle` and the final log `afterCompletion` this gives you how much time the request took to be processed by your service business logic. In this example it took `8ms`.
+. Click on one of the traces, then expand the trace's `Tags` and `Logs`. You should see information about the HTTP request such as `http.method` set to `GET` and `http.status_code` set to `200`. The Logs section has two logs one with `preHandle` and the final log `afterCompletion` this gives you how much time the request took to be processed by your service business logic. In this example, it took `8ms`.
 +
 image::nodejs-service-a-trace-details.jpg[]
 
@@ -159,11 +159,11 @@ HTTP/1.1 500
 +
 image::java-service-a-error.png[]
 
-. Click on the trace with the `/error`, then expand the trace's `Tags` and `Logs`. You should see information about the trace such as the `http.status_code` se to `500`.
+. Click on the trace with the `/error`, then expand the trace's `Tags` and `Logs`. You should see information about the trace such as the `http.status_code` set to `500`.
 +
 image::java-service-a-error-details.png[]
 
-== Finding slow http requests
+== Finding slow HTTP requests
 
 In the `service-a` we have the API endpoint `/sayHello`, we used this endpoint in the previous section but called it only once. This endpoint has some strange behavior that not all responses are fast, very often the response is slow 100ms.
 
@@ -202,7 +202,7 @@ Some traces are taking approximately 100ms and others are taking approximately 2
 You can see the pattern that only every 3rd request the response is slow.
 When troubleshooting we are interested first on the slowest requests, you can click on one of the traces on the graph, or you can sort in the table by `Longest First`.
 
-. Select the trace that took the longest time 103ms, expand all the information for the single span operation `/sayHello` including tags and logs.
+. Select the trace that took the longest time 103ms, expand all the information for the single-span operation `/sayHello` including tags and logs.
 +
 image::java-service-a-slow-details.png[]
 
@@ -254,12 +254,12 @@ open http://localhost:16686/jaeger
 image::java-service-a-fast.png[]
 
 +
-You can see now that all http requests are fast and the problem is fixed
+You can see now that all HTTP requests are fast and the problem is fixed
 
 +
-Cloud Native applications can be composed of microservices and each microservice handling multiple endpoints. Having the ability to have observability allows to narrow down to a specific service, and whithin that service a specific endpoint having problems, starting with a single trace and span you can increase the observability of your applications.
+Cloud Native applications can be composed of microservices and each microservice handling multiple endpoints. Having the ability to have observability allows us to narrow down to a specific service, and within that service a specific endpoint having problems, starting with a single trace and span you can increase the observability of your applications.
 
-== Tracing an http handler
+== Tracing an HTTP handler
 
 In the previous example, we were able to identify the endpoint `/sayHello` as one of interest in our service. Let's see how can we add tracing instrumentation to the function that is handling this endpoint.
 
@@ -280,7 +280,7 @@ import io.opentracing.Tracer;
 private Tracer tracer;
 ----
 
-. Locate the method `sayHello` and and wrap the code in a try with a scope, this will create a new child span.
+. Locate the method `sayHello` and wrap the code in a try with a scope, this will create a new child span.
 +
 [source, java]
 ----
@@ -305,7 +305,7 @@ public String sayHello(@PathVariable String name) {
 }
 ----
 
-. The opentracing API supports the method `log` you can log an event with a name and an object. Add a log to the span with a message that contains the value of the name.
+. The OpenTracing API supports the method `log` you can log an event with a name and an object. Add a log to the span with a message that contains the value of the name.
 +
 [source, java]
 ----
@@ -324,7 +324,7 @@ public String sayHello(@PathVariable String name) {
 }
 ----
 
-. The opentracing API supports the method `setTag` you can tag the span with a key and any value. Add a tag that contains the response, in normal use cases you would not log the entire response and instead key values that are useful for later searching for spans. Since we are using `true` in `.startActive(true)` there is no need to call explicit `span.finish()`.
+. The OpenTracing API supports the method `setTag` you can tag the span with a key and any value. Add a tag that contains the response, in normal use cases you would not log the entire response and instead key values that are useful for later searching for spans. Since we are using `true` in `.startActive(true)` there is no need to call explicit `span.finish()`.
 +
 [source, java]
 ----
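// Illustrative sketch only, not the lab's verbatim code: it assumes the OpenTracing 0.31-style
// Scope API that `.startActive(true)` implies. The handler opens an active scope, logs the name,
// calls the business logic, and tags the span with the response; the scope finishes the span.
public String sayHello(@PathVariable String name) {
    try (io.opentracing.Scope scope = tracer.buildSpan("say-hello").startActive(true)) {
        io.opentracing.Span span = scope.span();
        span.log("this is a log message for name " + name);

        String response = formatGreeting(name);
        // Tagging with the response makes this span easy to find later when searching by tag.
        span.setTag("response", response);
        return response;
    }
}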
@@ -386,7 +386,7 @@ Notice in the Logs section the log event with the name `name` and the message `t
 
 == Tracing a function
 
-The http handler usually calls other functions to perform the business logic, when calling another function within the same service you can create a child span.
+The HTTP handler usually calls other functions to perform the business logic, when calling another function within the same service you can create a child span.
 
 . The `sayHello` handler calls the function `formatGreeting` to process the input `name`. In the method `formatGreeting` create a new span using `tracer.buildSpan` and name the span `format-greeting`.
 +
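
For illustration, a minimal sketch of that method, assuming the same 0.31-style `startActive(true)` API used above; because `say-hello` is already active, the new span becomes its child automatically:

[source, java]
----
private String formatGreeting(String name) {
    // The active say-hello span is picked up as the parent via the ScopeManager.
    try (io.opentracing.Scope scope = tracer.buildSpan("format-greeting").startActive(true)) {
        return "Hello " + name + "!";
    }
}
----
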
@@ -443,11 +443,11 @@ Notice the cascading effect between the three spans, the span `format-greeting`
 
 == Distributing Tracing
 
-You can have a single trace that goes across multiple services, this allows to distribute tracing and better observability on the interactions between services.
+You can have a single trace that goes across multiple services, this allows you to distribute tracing and better observability on the interactions between services.
 
-In the previous example, we instrumented a single service `service-a`, and created span when calling a local function to format the greeting message.
+In the previous example, we instrumented a single service `service-a`, and created a span when calling a local function to format the greeting message.
 
-For the following example, we are going to use a remote service `service-b` to format the message, and returning the formatted greeting message to the http client.
+For the following example, we are going to use a remote service `service-b` to format the message, and returning the formatted greeting message to the HTTP client.
 
 . In the file `HelloController.java` locate the handler function `sayHello` and replace the function call `formatGreeting(name)` with `formatGreetingRemote(name)`.
 +
@@ -467,7 +467,7 @@ public String sayHello(@PathVariable String name) {
 }
 ----
 
-. In the method `formatGreetingRemote` the http requestis automatically instrumented, and the tracing headers inserted when calling the remote service `service-b` endpoint `/formatGreeting`.
+. In the method `formatGreetingRemote` the HTTP request is automatically instrumented, and the tracing headers inserted when calling the remote service `service-b` endpoint `/formatGreeting`.
 +
 [source, java]
 ----
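// Illustrative sketch only: the RestTemplate bean and the service-b URL/port are assumptions,
// and the lab sources may differ. opentracing-spring-cloud-starter instruments the injected
// RestTemplate, so the current span context is added to the outgoing HTTP headers automatically.
@Autowired
private RestTemplate restTemplate;

private String formatGreetingRemote(String name) {
    return restTemplate.getForObject("http://localhost:8082/formatGreeting/{name}", String.class, name);
}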
@@ -485,7 +485,7 @@ private String formatGreetingRemote(String name) {
 }
 ----
 
-. The service `service-b` is already instrumented to trace every http request using the same procedure <<tracing-every-http-request, Trace every http request>> that we did for service `service-a`.
+. The service `service-b` is already instrumented to trace every HTTP request using the same procedure <<tracing-every-http-request, Trace every HTTP request>> that we did for service `service-a`.
 
 . Import at the top of the file `src/main/java/com/example/serviceb/FormatController.java` the `opentracing` libraries.
 +
@@ -504,7 +504,7 @@ import io.opentracing.Tracer;
 private Tracer tracer;
 ----
 
-. Located the http handler function `formatGreeting` in the file `FormatController.java`
+. Located the HTTP handler function `formatGreeting` in the file `FormatController.java`
 +
 [source, java]
 ----
@@ -582,7 +582,7 @@ image::java-services-b-spans.png[]
 +
 Notice in the top section, the summary which includes the `Trace Start`, `Duration: 16ms`, `Services: 2`, `Depth: 5` and `Total Spans: 5`.
 +
-Notice the bottom section on how the total duration of 16ms is broken down per span, and at which time each span started and ended. You can see that the time spent in `service-b` was 5ms, meaning that for this single http request `service-a` spent 11ms and `service-b` spent 5ms.
+Notice the bottom section on how the total duration of 16ms is broken down per span, and at which time each span started and ended. You can see that the time spent in `service-b` was 5ms, meaning that for this single HTTP request `service-a` spent 11ms and `service-b` spent 5ms.
 
 . Expand the `Logs` sections for both spans `say-hello` from `service-a` and `format-greeting` from `service-b`.
 +
@@ -595,7 +595,7 @@ Notice the time for the first log message `this is a log message for name Carlos
 +
 Notice the time for the second log message `formatting message remotely for name Carlos` in `service-b` is of 4.98ms, this means this log event happened 4.98ms after the trace started in `service-a`.
 +
-Is very useful to see the log events we instrumented in our endpoint handlers across services in this manner because it provides full observability of the lifecycle of the http request across multiple services.
+It is very useful to see the log events we instrumented in our endpoint handlers across services in this manner because it provides full observability of the lifecycle of the HTTP request across multiple services.
 
 == Baggage propagation
 
@@ -605,9 +605,9 @@ Baggage items are key:value string pairs that apply to the given Span, its SpanC
 
 Baggage items enable powerful functionality given a full-stack OpenTracing integration (for example, arbitrary application data from a mobile app can make it, transparently, all the way into the depths of a storage system), and with it some powerful costs: use this feature with care.
 
-Use this feature thoughtfully and with care. Every key and value is copied into every local and remote child of the associated Span, and that can add up to a lot of network and cpu overhead.
+Use this feature thoughtfully and with care. Every key and value is copied into every local and remote child of the associated Span, and that can add up to a lot of network and CPU overhead.
 
-. Locate the http handler `sayHello` in the file `HelloControlle.java`. Use the method `span.setBaggageItem('my-baggage', name)` before the method call `formatGreetingRemote(name)` to set the baggage with key `my-baggage` to the value of the `name` parameter.
+. Locate the HTTP handler `sayHello` in the file `HelloController.java`. Use the method `span.setBaggageItem('my-baggage', name)` before the method call `formatGreetingRemote(name)` to set the baggage with key `my-baggage` to the value of the `name` parameter.
 
 +
 [source, java]
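// Illustrative sketch only: inside sayHello of service-a, attach the baggage item to the active
// span before the remote call, so it travels with the trace context to service-b.
span.setBaggageItem("my-baggage", name);
String response = formatGreetingRemote(name);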
@@ -627,7 +627,7 @@ public String sayHello(@PathVariable String name) {
 }
 ----
 
-. Locate the http handler `formatGreeting` in the file `FormatController.java`. Use the method `span.getBaggageItem('my-baggage')` to get the value of the name parameter at `service-a`. For convenience log the value using `span.log` to see the value in the Jaeger UI.
+. Locate the HTTP handler `formatGreeting` in the file `FormatController.java`. Use the method `span.getBaggageItem('my-baggage')` to get the value of the name parameter at `service-a`. For convenience log the value using `span.log` to see the value in the Jaeger UI.
 +
 [source, java]
 ----
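// Illustrative sketch only: inside the formatGreeting handler of service-b, read the baggage
// item that service-a attached and log it so the value shows up in the Jaeger UI.
String baggage = span.getBaggageItem("my-baggage");
span.log("my-baggage=" + baggage);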
@@ -678,9 +678,9 @@ Notice that the baggage is set in the `service-a` with the value `Carlos` this b
 
 If you have a specific trace id you can search for it by putting the trace id on the top left search box.
 
-You can also use a tag to search for example searching traces that have a specific http status code, or one of the custom tags we added to a span.
+You can also use a tag to search for example searching traces that have a specific HTTP status code, or one of the custom tags we added to a span.
 
-. To search for traces using http method `GET` and status code `200`, enter `http.status_code=200 http.method=GET` on the `Tags` field in the search form, and then click `Find Traces`.
+. To search for traces using HTTP method `GET` and status code `200`, enter `http.status_code=200 http.method=GET` on the `Tags` field in the search form, and then click `Find Traces`.
 +
 image::jaeger-ui-search.png[]
 