* https://docs.openshift.com/container-platform/4.1/service_mesh/service_mesh_install/preparing-ossm-installation.html[Preparing to install Red Hat OpenShift Service Mesh]
* https://docs.openshift.com/container-platform/4.1/service_mesh/service_mesh_install/installing-ossm.html[Installing Red Hat OpenShift Service Mesh]

.Reducing resources when using local dev crc
[IMPORTANT]
====
When creating the ServiceMeshControlPlane, edit the YAML for the pilot to specify a lower memory request; the default of 2Gi is too high for the crc VM. The pilot section should look like this:

[source, yaml]
----
pilot:
  autoscaleEnabled: false
  traceSampling: 100
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
----
====
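
If the control plane already exists, you can apply the same reduction with a patch instead of editing interactively. This is only a sketch: it assumes the ServiceMeshControlPlane is named `basic-install` and that the pilot settings live under `spec.istio`, as in OSSM 1.x.

[source, bash]
----
# Sketch: lower the pilot resource requests on an existing control plane.
# The name "basic-install" is an assumption; check with: oc get smcp -n istio-system
oc patch smcp basic-install -n istio-system --type merge \
  -p '{"spec":{"istio":{"pilot":{"resources":{"requests":{"cpu":"100m","memory":"128Mi"}}}}}}'
----
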
== Verify Service Mesh installation
. Verify that the Istio components are installed in the `istio-system` namespace
. Verify that the ServiceMeshMemberRoll includes the target namespace, for example `default`, as one of the `MEMBERS`
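+
A minimal sketch of both checks from the command line, assuming the operator created the member roll with its default name `default`:
+
[source, bash]
----
# All control plane pods should be Running
oc get pods -n istio-system

# The member roll should list the target namespace under MEMBERS
oc get servicemeshmemberroll default -n istio-system
----
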
[source, bash]
----
oc get route -n istio-system
----
+
Verify the output and make sure the `jaeger-query` route is using `edge` for TLS termination; if not, you can use `oc edit route jaeger-query -n istio-system` and change it.
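+
As a quick sketch, you can also read the termination field directly with jsonpath instead of scanning the full listing:
+
[source, bash]
----
# Should print: edge
oc get route jaeger-query -n istio-system -o jsonpath='{.spec.tls.termination}'
----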
+
[source, bash]
----
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
----

In the yaml deployment manifest there are a few items to point out:
** *Named service ports*
*** The service port name value must start with `http`
** *Deployment with app and version labels*
*** The Pod template should have the labels `app` and `version` defined, as checked in the sketch below
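+
A minimal sketch of both checks against a deployed service, assuming the resource names used in this lab (`service-a`):
+
[source, bash]
----
# Service port names should start with "http" so Istio can detect the protocol
oc get service service-a -o jsonpath='{.spec.ports[*].name}'

# The pod template should carry both the app and version labels
oc get deployment service-a -o jsonpath='{.spec.template.metadata.labels}'
----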
. The `pom.xml` for each service contains the zipkin dependency to handle the B3 headers that the Istio Envoy proxy forwards, allowing end-to-end trace propagation. The opentracing-related dependencies in the service's `pom.xml` look like this:
+
[source, xml]
----
----

[source, bash]
----
oc apply -f gateway.yaml -n default
----
+
Here is the content of `gateway.yaml`:
+
[source, yaml]
----
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: distributing-tracing-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: distributing-tracing
spec:
  hosts:
  - "*"
  gateways:
  - distributing-tracing-gateway
  http:
  - match:
    - uri:
        prefix: /sayHello
    route:
    - destination:
        host: service-a
        port:
          number: 8080
----
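+
Once the gateway and virtual service are applied, requests can enter through the default ingress gateway. Here is a sketch of a test call; it assumes the standard `istio-ingressgateway` route in `istio-system`, and the exact path after `/sayHello` is illustrative:
+
[source, bash]
----
# Resolve the host exposed by the ingress gateway route
GATEWAY_URL=$(oc get route istio-ingressgateway -n istio-system -o jsonpath='{.spec.host}')

# Any path matching the /sayHello prefix is routed to service-a on port 8080
curl http://$GATEWAY_URL/sayHello/Carlos
----
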
. Verify services are deployed and running:
+
[source, bash]
----
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/service-a   1/1     1            1           6m7s
deployment.apps/service-b   1/1     1            1           6m44s
----
+
Notice that under the `READY` column for the pods, there are two (2/2) containers running; one of them is the Istio sidecar proxy.
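+
To confirm which two containers run in a pod, here is a quick jsonpath sketch; the label `app=service-a` matches the deployment labels described earlier, and the sidecar container is named `istio-proxy`:
+
[source, bash]
----
# Should print the application container name followed by istio-proxy
oc get pods -l app=service-a -o jsonpath='{.items[0].spec.containers[*].name}'
----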
Notice in the output that the message was formatted by `service-b`:
+
[source, bash]
----
Hello, from service-b Carlos!
----
+
From the result you can see that `service-a` calls `service-b` and relays the reply back.
. In the Jaeger UI select `istio-ingressgateway` or `service-a` and click *Find Traces*
+
image::istio-java-jaeger-traces.png[]
+
You can see 7 spans in a single trace, starting from the `istio-ingressgateway` and ending in `service-b.default`
. Click on one of the traces and expand the spans in the trace
+
image::istio-java-jaeger-spans.png[]
+
Check one of the labs xref:lab-jaeger-nodejs.adoc[Lab Jaeger - Node.js] or xref:lab-java-jaeger-java.adoc[Lab Jaeger - Java] for a more in-depth lab on OpenTracing with Jaeger.
. In the Kiali UI select Graph to see a topology view of the services; you can enable traffic animation under Display to see the flow of HTTP requests
+
image::istio-java-kiali.png[]
. In the Grafana UI select the *Istio Workload Dashboard* or the *Istio Service Dashboard* to see monitoring and metrics data for your services