Configure Istio distributed tracing with Jaeger and Zipkin for comprehensive microservices observability

Advanced · 45 min · Apr 23, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Set up comprehensive distributed tracing in your Istio service mesh using both Jaeger and Zipkin backends. Configure telemetry collection, trace sampling, and monitoring dashboards for full microservices observability.

Prerequisites

  • Kubernetes cluster with kubectl access
  • Istio service mesh installed
  • At least 8GB RAM available
  • Storage class for persistent volumes

What this solves

Distributed tracing tracks requests as they flow through multiple microservices, helping you identify bottlenecks, debug errors, and understand service dependencies. This tutorial configures Istio's telemetry system with both Jaeger and Zipkin to collect, store, and visualize traces from your service mesh.

This tutorial builds on our Istio installation guide; the steps below assume that mesh is already running.

Step-by-step configuration

Install Jaeger tracing backend

Deploy Jaeger using the operator for production-grade distributed tracing with Elasticsearch storage. The operator requires cert-manager in the cluster for its admission webhooks, so install that first if it is missing.

kubectl create namespace observability
kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.51.0/jaeger-operator.yaml -n observability

Configure Jaeger with persistent storage

Create a Jaeger instance with Elasticsearch backend for long-term trace storage and high availability.

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production
  namespace: istio-system
spec:
  strategy: production
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch.observability.svc.cluster.local:9200
        index-prefix: jaeger
    esIndexCleaner:
      enabled: true
      numberOfDays: 7
      schedule: "55 23 * * *"
  query:
    replicas: 2
    resources:
      limits:
        cpu: 500m
        memory: 512Mi
      requests:
        cpu: 100m
        memory: 128Mi
  collector:
    replicas: 2
    resources:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 200m
        memory: 256Mi
kubectl apply -f jaeger-production.yaml
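The esIndexCleaner above drops Jaeger's daily Elasticsearch indices once they are older than seven days. A quick local sketch of the selection logic it applies (the index names and dates here are illustrative, not the operator's actual code):

```python
from datetime import date, timedelta

def indices_to_delete(index_names, today, retention_days=7):
    """Return jaeger-* daily indices older than the retention window.

    Mirrors the idea behind esIndexCleaner: indices are named
    jaeger-<type>-YYYY-MM-DD and anything older than retention_days
    is eligible for deletion.
    """
    cutoff = today - timedelta(days=retention_days)
    stale = []
    for name in index_names:
        # Expect names like "jaeger-span-2026-04-10"
        try:
            y, m, d = name.rsplit("-", 3)[-3:]
            idx_date = date(int(y), int(m), int(d))
        except ValueError:
            continue  # not a dated jaeger index
        if idx_date < cutoff:
            stale.append(name)
    return stale

names = [
    "jaeger-span-2026-04-10",
    "jaeger-span-2026-04-20",
    "jaeger-service-2026-04-01",
]
print(indices_to_delete(names, today=date(2026, 4, 23)))
```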

Deploy Elasticsearch for Jaeger storage

Set up Elasticsearch cluster to store Jaeger traces with proper resource allocation and persistence.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: observability
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
        env:
        - name: discovery.seed_hosts
          value: "elasticsearch-0.elasticsearch,elasticsearch-1.elasticsearch,elasticsearch-2.elasticsearch"
        - name: cluster.initial_master_nodes
          value: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
        - name: cluster.name
          value: jaeger-cluster
        - name: network.host
          value: "0.0.0.0"
        - name: xpack.security.enabled
          value: "false"
        - name: ES_JAVA_OPTS
          value: "-Xms2g -Xmx2g"
        resources:
          limits:
            cpu: 2
            memory: 4Gi
          requests:
            cpu: 500m
            memory: 2Gi
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: observability
spec:
  clusterIP: None
  selector:
    app: elasticsearch
  ports:
  - name: http
    port: 9200
    targetPort: 9200
  - name: transport
    port: 9300
    targetPort: 9300
kubectl apply -f elasticsearch-jaeger.yaml
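For multi-node discovery, each Elasticsearch pod must reach its peers through the StatefulSet's stable DNS names provided by the headless service. A small sketch of how those names are derived (the names match the manifests above; the cluster.local domain is an assumption about your cluster):

```python
def statefulset_pod_fqdns(name, service, namespace, replicas,
                          cluster_domain="cluster.local"):
    """Build the stable per-pod DNS names a StatefulSet gets via its
    headless (governing) service: <pod>.<svc>.<ns>.svc.<domain>."""
    return [
        f"{name}-{i}.{service}.{namespace}.svc.{cluster_domain}"
        for i in range(replicas)
    ]

# The three Elasticsearch pods from the manifest above:
for fqdn in statefulset_pod_fqdns("elasticsearch", "elasticsearch",
                                  "observability", 3):
    print(fqdn)
```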

Install Zipkin tracing backend

Deploy Zipkin as an alternative tracing backend, sharing the Elasticsearch cluster deployed above for span storage.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: zipkin
  namespace: istio-system
  labels:
    app: zipkin
spec:
  replicas: 2
  selector:
    matchLabels:
      app: zipkin
  template:
    metadata:
      labels:
        app: zipkin
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      containers:
      - name: zipkin
        image: openzipkin/zipkin:2.24
        ports:
        - containerPort: 9411
        env:
        - name: STORAGE_TYPE
          value: elasticsearch
        - name: ES_HOSTS
          value: http://elasticsearch.observability.svc.cluster.local:9200
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 200m
            memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: zipkin
  namespace: istio-system
  labels:
    app: zipkin
spec:
  type: ClusterIP
  ports:
  - port: 9411
    targetPort: 9411
  selector:
    app: zipkin
kubectl apply -f zipkin-deployment.yaml
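Zipkin accepts spans as JSON POSTed to /api/v2/spans on port 9411, which is handy for smoke-testing the deployment before Istio is wired up. A minimal hand-rolled v2 span payload (the IDs and names are made up; only the field names follow the Zipkin v2 format):

```python
import json
import time
import uuid

def make_zipkin_span(service, name, duration_us=1500):
    """Build a single Zipkin v2 span dict (JSON-serializable)."""
    return {
        "id": uuid.uuid4().hex[:16],                # 64-bit span id, hex
        "traceId": uuid.uuid4().hex,                # 128-bit trace id, hex
        "name": name,
        "timestamp": int(time.time() * 1_000_000),  # epoch microseconds
        "duration": duration_us,
        "localEndpoint": {"serviceName": service},
        "tags": {"http.method": "GET"},
    }

payload = json.dumps([make_zipkin_span("smoke-test", "get /health")])
print(payload)
# POST it with e.g.:
#   kubectl port-forward -n istio-system svc/zipkin 9411:9411
#   curl -X POST localhost:9411/api/v2/spans \
#        -H 'Content-Type: application/json' -d "$payload"
```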

Configure Istio telemetry for distributed tracing

Set up Istio's telemetry system to collect traces and send them to both Jaeger and Zipkin backends.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: tracing-config
spec:
  meshConfig:
    enableTracing: true
    defaultConfig:
      tracing:
        sampling: 100.0
        max_path_tag_length: 256
    extensionProviders:
    - name: jaeger
      opentelemetry:
        # Jaeger >= 1.43 accepts OTLP gRPC on 4317 by default
        service: jaeger-collector.istio-system.svc.cluster.local
        port: 4317
    - name: zipkin
      zipkin:
        service: zipkin.istio-system.svc.cluster.local
        port: 9411
    defaultProviders:
      tracing:
      - jaeger
      - zipkin
istioctl install -f istio-tracing-config.yaml --skip-confirmation

Configure trace sampling policies

Set up intelligent sampling to balance trace coverage with storage costs and performance impact.

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: tracing-config
  namespace: istio-system
spec:
  tracing:
  - providers:
    - name: jaeger
    - name: zipkin
    randomSamplingPercentage: 1.0
    customTags:
      http.method:
        header:
          name: ":method"
      http.status_code:
        header:
          name: ":status"
      user.id:
        header:
          name: "x-user-id"
---
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: high-traffic-sampling
  namespace: production
spec:
  tracing:
  - providers:
    - name: jaeger
    randomSamplingPercentage: 0.1
kubectl apply -f telemetry-v2.yaml
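Sampling percentage translates directly into storage volume: the 1% mesh-wide setting above keeps one trace per hundred requests, and the 0.1% production override one per thousand. A quick back-of-the-envelope calculator (the request rates are made-up examples):

```python
def traces_per_day(requests_per_second, sampling_percentage):
    """Expected number of sampled traces per day."""
    return requests_per_second * 86_400 * sampling_percentage / 100

# 500 rps at the two rates configured above:
print(traces_per_day(500, 1.0))   # mesh-wide 1%  -> 432000.0
print(traces_per_day(500, 0.1))   # production override -> 43200.0
```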

Enable tracing for application namespaces

Configure specific namespaces to send traces with appropriate sampling rates for different environments.

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: app-tracing
  namespace: default
spec:
  tracing:
  - providers:
    - name: jaeger
    - name: zipkin
    randomSamplingPercentage: 10.0
    customTags:
      app.version:
        literal:
          value: "v1.2.3"
      environment:
        literal:
          value: "staging"
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    istio-injection: enabled
    # per-namespace sampling is set with a Telemetry resource
    # (see high-traffic-sampling above), not a namespace label
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    istio-injection: enabled
kubectl apply -f namespace-tracing.yaml

Configure Jaeger access and security

Set up secure access to Jaeger UI with authentication and network policies.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: jaeger-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https-jaeger
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: jaeger-tls-secret
    hosts:
    - jaeger.example.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: jaeger-vs
  namespace: istio-system
spec:
  hosts:
  - jaeger.example.com
  gateways:
  - jaeger-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: jaeger-query.istio-system.svc.cluster.local
        port:
          number: 16686
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: jaeger-access
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: jaeger
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]
    to:
    - operation:
        methods: ["GET", "POST"]
kubectl apply -f jaeger-gateway.yaml

Configure Zipkin access

Set up access to Zipkin UI for alternative trace analysis and debugging workflows.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: zipkin-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https-zipkin
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: zipkin-tls-secret
    hosts:
    - zipkin.example.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: zipkin-vs
  namespace: istio-system
spec:
  hosts:
  - zipkin.example.com
  gateways:
  - zipkin-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: zipkin.istio-system.svc.cluster.local
        port:
          number: 9411
kubectl apply -f zipkin-gateway.yaml

Set up Grafana dashboards for trace metrics

Configure Grafana to display tracing metrics and service topology from both Jaeger and Zipkin data sources.

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: istio-system
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
    - name: Jaeger
      type: jaeger
      url: http://jaeger-query.istio-system.svc.cluster.local:16686
      access: proxy
      isDefault: false
      editable: true
    - name: Zipkin
      type: zipkin
      url: http://zipkin.istio-system.svc.cluster.local:9411
      access: proxy
      isDefault: false
      editable: true
    - name: Prometheus
      type: prometheus
      url: http://prometheus.istio-system.svc.cluster.local:9090
      access: proxy
      isDefault: true
      editable: true
kubectl apply -f grafana-tracing-datasource.yaml
kubectl rollout restart deployment/grafana -n istio-system

Deploy sample application for testing

Deploy a multi-service application to generate traces and test your distributed tracing setup.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      containers:
      - name: productpage
        image: docker.io/istio/examples-bookinfo-productpage-v1:1.18.0
        ports:
        - containerPort: 9080
        env:
        - name: JAEGER_AGENT_HOST
          value: "jaeger-agent.istio-system.svc.cluster.local"
        - name: JAEGER_AGENT_PORT
          value: "6831"
        - name: JAEGER_SAMPLER_TYPE
          value: "const"
        - name: JAEGER_SAMPLER_PARAM
          value: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: productpage
  namespace: default
spec:
  selector:
    app: productpage
  ports:
  - port: 9080
    targetPort: 9080
kubectl apply -f bookinfo-tracing.yaml

Configure advanced trace analysis

Set up trace correlation with logs

Configure log correlation to link traces with application logs for comprehensive debugging.

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: log-correlation
  namespace: istio-system
spec:
  accessLogging:
  - providers:
    - name: envoy
  tracing:
  - providers:
    - name: jaeger
    customTags:
      trace_id:
        header:
          name: "x-b3-traceid"
      span_id:
        header:
          name: "x-b3-spanid"
kubectl apply -f log-correlation.yaml
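On the application side, correlation means copying the trace context Envoy already injects into each request into your log records. A sketch assuming B3 propagation headers (Istio's default); the record shape is illustrative:

```python
def enrich_log_record(record: dict, headers: dict) -> dict:
    """Copy B3 trace context from inbound request headers into a
    structured log record so logs can be joined with Jaeger traces."""
    normalized = {k.lower(): v for k, v in headers.items()}
    record["trace_id"] = normalized.get("x-b3-traceid", "")
    record["span_id"] = normalized.get("x-b3-spanid", "")
    return record

headers = {
    "X-B3-TraceId": "463ac35c9f6413ad48485a3953bb6124",
    "X-B3-SpanId": "a2fb4a1d1a96d312",
}
print(enrich_log_record({"msg": "order created"}, headers))
```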

Configure performance monitoring

Set up alerts for trace latency, error rates, and service dependencies using our existing Istio monitoring setup.

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: tracing-alerts
  namespace: istio-system
spec:
  groups:
  - name: istio-tracing
    rules:
    - alert: HighTraceLatency
      expr: histogram_quantile(0.99, sum(rate(istio_request_duration_milliseconds_bucket{reporter="destination"}[5m])) by (le, destination_service_name)) > 1000
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "High trace latency detected"
        description: "Service {{ $labels.destination_service_name }} has 99th percentile latency > 1s"
    - alert: TraceErrorRate
      expr: sum(rate(istio_requests_total{reporter="destination",response_code!~"2.."}[5m])) by (destination_service_name) / sum(rate(istio_requests_total{reporter="destination"}[5m])) by (destination_service_name) > 0.1
      for: 2m
      labels:
        severity: critical
      annotations:
        summary: "High error rate in traces"
        description: "Service {{ $labels.destination_service_name }} has error rate > 10%"
kubectl apply -f trace-alerts.yaml
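The TraceErrorRate expression divides the non-2xx request rate by the total request rate per destination service. The same arithmetic on sample counter deltas (the numbers are invented for illustration):

```python
def error_rate(non_2xx_increase, total_increase):
    """Fraction of requests that were not 2xx over a window,
    mirroring the rate()/rate() division in the PromQL alert."""
    return non_2xx_increase / total_increase if total_increase else 0.0

# Over a 5m window: 150 of 1000 requests failed -> 15%, above the 10% threshold
rate = error_rate(150, 1000)
print(rate, rate > 0.1)
```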

Verify your setup

Check that all tracing components are running and collecting traces properly.

Check Jaeger pods

kubectl get pods -n istio-system -l app=jaeger

Check Zipkin deployment

kubectl get pods -n istio-system -l app=zipkin

Check Elasticsearch storage

kubectl get pods -n observability -l app=elasticsearch

Verify telemetry configuration

kubectl get telemetry -A

Test trace collection

kubectl exec -n default deployment/productpage-v1 -- curl -s http://productpage.default.svc.cluster.local:9080/productpage

Check Jaeger UI access

kubectl port-forward -n istio-system svc/jaeger-query 16686:16686

Visit http://localhost:16686

Check Zipkin UI access

kubectl port-forward -n istio-system svc/zipkin 9411:9411

Visit http://localhost:9411

Common issues

Symptom | Cause | Fix
No traces appearing | Sampling rate too low | Increase randomSamplingPercentage in the Telemetry config
Jaeger UI not accessible | Service not exposed correctly | Check the Gateway and VirtualService configuration
High storage usage | No trace retention policy | Configure esIndexCleaner in the Jaeger spec
Missing trace data | Istio proxy not injected | Label the namespace with istio-injection: enabled
Elasticsearch connection failed | Network policy blocking access | Allow traffic between namespaces: kubectl label namespace istio-system name=istio-system
Traces not correlated | Missing trace headers | Ensure applications propagate the B3 trace headers (x-b3-traceid, x-b3-spanid)
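The last row matters most in practice: Envoy can start and finish spans, but only the application can carry the context from an inbound request to the outbound calls it makes. A sketch of that forwarding step, using the B3 header set Istio propagates by default:

```python
# Headers an app must copy from the inbound request to its outbound calls
# so Envoy can stitch the spans into one trace (B3 propagation).
TRACE_HEADERS = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
]

def propagate_headers(inbound: dict) -> dict:
    """Pick out the trace-context headers to attach to outbound requests."""
    lowered = {k.lower(): v for k, v in inbound.items()}
    return {h: lowered[h] for h in TRACE_HEADERS if h in lowered}

inbound = {"X-B3-TraceId": "463ac35c9f6413ad", "Accept": "application/json"}
print(propagate_headers(inbound))   # only the trace headers survive
```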

Next steps

Running this in production?

Want this handled for you? Running distributed tracing at scale adds complexity around storage management, retention policies, performance tuning, and 24/7 monitoring. See how we run infrastructure like this for European teams building microservices platforms.


Need help?

Don't want to manage this yourself?

We provide managed DevOps services for businesses that depend on uptime, from initial setup to ongoing operations.