Integrate Jaeger with Kubernetes and Istio service mesh for distributed tracing

Advanced · 45 min · Apr 21, 2026

Deploy Jaeger operator on Kubernetes with Istio telemetry integration for comprehensive distributed tracing across microservices. Configure Elasticsearch backend for production-grade trace storage and implement automated service discovery.

What this solves

Distributed tracing provides complete visibility into request flows across microservices in your Kubernetes cluster. This tutorial integrates Jaeger with Istio service mesh to automatically capture traces from all service communication without code changes. You'll configure the Jaeger operator, set up Elasticsearch for trace storage, and enable Istio's telemetry v2 for seamless trace collection across your entire service mesh.

Prerequisites

Before starting, verify your environment meets these requirements:

  • Kubernetes cluster running version 1.24 or later
  • Istio service mesh installed and configured
  • kubectl access with cluster-admin privileges
  • Helm 3.x installed on your local machine
  • At least 4GB available memory for Elasticsearch

Step-by-step installation

Install the Jaeger operator

The Jaeger operator manages the lifecycle of Jaeger instances and provides custom resource definitions for configuration. Note that recent operator releases (including v1.51) rely on cert-manager for their admission webhook certificates, so make sure cert-manager is installed in the cluster first.

kubectl create namespace observability
kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.51.0/jaeger-operator.yaml -n observability

Verify operator installation

Wait for the operator pod to reach running status before proceeding.

kubectl get pods -n observability -l name=jaeger-operator
kubectl logs -n observability deployment/jaeger-operator

Deploy Elasticsearch for trace storage

Elasticsearch provides scalable trace storage with rich query capabilities. The single-node deployment below is suitable for evaluation; for production, replace the emptyDir volume with a PersistentVolumeClaim (or use the ECK operator) so trace data survives pod restarts.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
        ports:
        - containerPort: 9200
        - containerPort: 9300
        env:
        - name: discovery.type
          value: single-node
        - name: ES_JAVA_OPTS
          value: "-Xms2g -Xmx2g"
        - name: xpack.security.enabled
          value: "false"
        resources:
          requests:
            memory: 2Gi
            cpu: 500m
          limits:
            memory: 4Gi
            cpu: 1000m
        volumeMounts:
        - name: elasticsearch-storage
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: elasticsearch-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: observability
spec:
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
    name: http
  - port: 9300
    targetPort: 9300
    name: transport
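
The ES_JAVA_OPTS value above follows the usual Elasticsearch guidance of sizing the JVM heap at roughly half the container memory limit (and always below ~32 GB so compressed object pointers stay enabled). A quick sketch of that rule, using the 4Gi limit from this deployment:

```shell
# Heap sizing rule of thumb: JVM heap ~= 50% of the container memory limit
limit_gi=4                      # matches resources.limits.memory above
heap_gi=$((limit_gi / 2))
echo "-Xms${heap_gi}g -Xmx${heap_gi}g"
```

This prints -Xms2g -Xmx2g, matching the ES_JAVA_OPTS setting in the manifest. If you raise the memory limit, raise the heap accordingly.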

Apply the Elasticsearch configuration

Deploy Elasticsearch and wait for it to become available before configuring Jaeger.

kubectl apply -f elasticsearch-deployment.yaml
kubectl wait --for=condition=available deployment/elasticsearch -n observability --timeout=300s
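
Beyond the deployment condition, it is worth confirming cluster health before pointing Jaeger at it. The snippet below extracts the status field from a health response; the JSON here is a canned sample, but in the cluster you would feed it from kubectl exec -n observability deployment/elasticsearch -- curl -s localhost:9200/_cluster/health:

```shell
# Extract the "status" field (green/yellow/red) from an Elasticsearch health response.
# Sample payload shown; substitute the live curl output in your cluster.
health='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":1}'
status=$(echo "$health" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "cluster status: $status"
```

A single-node cluster with no replicas will typically report green or yellow; red means shards are unassigned and Jaeger writes will fail.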

Create Jaeger instance with Elasticsearch backend

Configure Jaeger to use Elasticsearch for trace storage with production-ready settings.

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production
  namespace: observability
spec:
  strategy: production
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch:9200
        index-prefix: jaeger
        username: ""
        password: ""
        tls:
          enabled: false
    esIndexCleaner:
      enabled: true
      numberOfDays: 7
      schedule: "55 23 * * *"
  collector:
    replicas: 2
    resources:
      requests:
        memory: 512Mi
        cpu: 200m
      limits:
        memory: 1Gi
        cpu: 500m
  query:
    replicas: 2
    resources:
      requests:
        memory: 256Mi
        cpu: 100m
      limits:
        memory: 512Mi
        cpu: 200m
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: "nginx"
    hosts:
    - jaeger.example.com

Deploy the Jaeger instance

Apply the Jaeger configuration and verify all components are running correctly.

kubectl apply -f jaeger-production.yaml
kubectl get jaeger -n observability
kubectl get pods -n observability -l app.kubernetes.io/instance=jaeger-production

Configure Istio telemetry v2 for tracing

Enable distributed tracing in Istio to automatically send traces to Jaeger. This configuration applies to the entire mesh.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: tracing-config
  namespace: istio-system
spec:
  values:
    pilot:
      traceSampling: 100.0
    global:
      tracer:
        zipkin:
          address: jaeger-production-collector.observability:9411
  meshConfig:
    extensionProviders:
    - name: jaeger
      zipkin:
        service: jaeger-production-collector.observability
        port: 9411
    defaultProviders:
      tracing:
      - jaeger

Apply Istio tracing configuration

Update your Istio installation to enable tracing across the service mesh.

istioctl install -f istio-tracing.yaml -y
kubectl rollout restart deployment/istiod -n istio-system

Create telemetry configuration

Configure Istio's telemetry v2 to specify tracing behavior and sampling rates for your applications.

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: default
  namespace: istio-system
spec:
  tracing:
  - providers:
    - name: jaeger
    randomSamplingPercentage: 100.0
---
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: namespace-tracing
  namespace: production
spec:
  tracing:
  - providers:
    - name: jaeger
    randomSamplingPercentage: 10.0

Apply telemetry configuration

Deploy the telemetry configuration to enable tracing with appropriate sampling rates. The second policy is scoped to the production namespace, so create that namespace first (kubectl create namespace production) if it does not exist yet.

kubectl apply -f telemetry-config.yaml

Deploy sample microservices

Create test applications to verify distributed tracing functionality across service communication.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
        version: v1
    spec:
      containers:
      - name: frontend
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: production
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
        version: v1
    spec:
      containers:
      - name: backend
        image: httpd:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: production
spec:
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 80

Deploy and inject sidecars

Create the production namespace with automatic sidecar injection and deploy the sample applications.

kubectl create namespace production
kubectl label namespace production istio-injection=enabled
kubectl apply -f sample-apps.yaml
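
With injection enabled, each pod should report two ready containers: the application plus the istio-proxy sidecar. This sketch runs the check against a sample kubectl get pods output line (the pod name is illustrative); in the cluster you would pipe the real output through the same awk step:

```shell
# A pod with an injected sidecar shows READY 2/2; parse that column from a sample line
line="frontend-7d9c5b6f4-abcde   2/2   Running   0   1m"   # illustrative kubectl output
ready=$(echo "$line" | awk '{print $2}')
if [ "$ready" = "2/2" ]; then
  echo "sidecar injected"
fi
```

If a pod shows 1/1, the sidecar was not injected; check the namespace label and restart the deployment so new pods pick it up.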

Configure service mesh networking

Create virtual services and destination rules to enable communication between microservices with tracing.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend
  namespace: production
spec:
  hosts:
  - frontend
  http:
  - route:
    - destination:
        host: frontend
        subset: v1
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: frontend
  namespace: production
spec:
  host: frontend
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: backend
  namespace: production
spec:
  hosts:
  - backend
  http:
  - route:
    - destination:
        host: backend
        subset: v1
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend
  namespace: production
spec:
  host: backend
  subsets:
  - name: v1
    labels:
      version: v1

Apply networking configuration

Deploy the service mesh networking rules to enable proper traffic flow and tracing.

kubectl apply -f service-mesh-config.yaml

Configure distributed tracing for microservices

Enable application-level tracing headers

Configure your applications to propagate tracing headers for complete request visibility across service boundaries.

apiVersion: v1
kind: ConfigMap
metadata:
  name: tracing-headers
  namespace: production
data:
  nginx.conf: |
    server {
        listen 80;
        location / {
            proxy_set_header X-Request-ID $request_id;
            proxy_set_header X-B3-TraceId $http_x_b3_traceid;
            proxy_set_header X-B3-SpanId $http_x_b3_spanid;
            proxy_set_header X-B3-ParentSpanId $http_x_b3_parentspanid;
            proxy_set_header X-B3-Sampled $http_x_b3_sampled;
            proxy_set_header X-B3-Flags $http_x_b3_flags;
            proxy_pass http://backend;
        }
    }
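
To exercise this propagation by hand, you can mint B3-compatible IDs yourself: a 128-bit trace ID and a 64-bit span ID, hex-encoded (this sketch assumes openssl is available):

```shell
# Generate B3-compatible IDs: 128-bit trace ID (32 hex chars), 64-bit span ID (16 hex chars)
TRACE_ID=$(openssl rand -hex 16)
SPAN_ID=$(openssl rand -hex 8)
echo "x-b3-traceid: $TRACE_ID"
echo "x-b3-spanid: $SPAN_ID"
echo "x-b3-sampled: 1"
```

From inside the mesh you can then send, for example, curl http://frontend -H "x-b3-traceid: $TRACE_ID" -H "x-b3-spanid: $SPAN_ID" -H "x-b3-sampled: 1" and search for that trace ID in the Jaeger UI to confirm the headers survive the hop to the backend.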

Configure trace sampling policies

Set up intelligent sampling to balance observability with performance and storage costs.

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: high-priority-tracing
  namespace: production
spec:
  selector:
    matchLabels:
      app: critical-service
  tracing:
  - randomSamplingPercentage: 100.0
---
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: standard-tracing
  namespace: production
spec:
  tracing:
  - randomSamplingPercentage: 1.0

Apply sampling configuration

Deploy the sampling policies to optimize trace collection based on service criticality.

kubectl apply -f sampling-config.yaml
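
Sampling rates translate directly into storage volume. A back-of-envelope sketch, assuming a hypothetical 500 requests per second against the 1% standard-tracing policy above:

```shell
# Traces stored per day = requests per day x sampling percentage
requests_per_sec=500                        # assumed traffic level, not from the tutorial
requests_per_day=$((requests_per_sec * 86400))
sampling_pct=1                              # matches standard-tracing above
stored_per_day=$((requests_per_day * sampling_pct / 100))
echo "$stored_per_day traces stored per day"
```

At this assumed load, 1% sampling still stores 432,000 traces per day, which is why the 100% rate should stay reserved for low-traffic or critical services.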

Create service performance monitoring

Set up service monitors to track distributed tracing metrics and system health.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: jaeger-metrics
  namespace: observability
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: jaeger
  endpoints:
  - port: admin-http
    path: /metrics
    interval: 15s
---
apiVersion: v1
kind: Service
metadata:
  name: jaeger-collector-metrics
  namespace: observability
  labels:
    app.kubernetes.io/name: jaeger
spec:
  selector:
    app.kubernetes.io/component: collector
  ports:
  - name: admin-http
    port: 14269
    targetPort: 14269

Verify your setup

Test the complete distributed tracing pipeline to ensure traces are properly collected and stored.

# Check Jaeger components status
kubectl get pods -n observability -l app.kubernetes.io/instance=jaeger-production

Verify Elasticsearch is receiving traces

kubectl exec -n observability deployment/elasticsearch -- curl -s "localhost:9200/jaeger-*/_count"

Check Istio sidecar injection

kubectl get pods -n production -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}'

Generate test traffic

kubectl exec -n production deployment/frontend -- curl -s http://backend

Port forward to access Jaeger UI

kubectl port-forward -n observability svc/jaeger-production-query 16686:16686
Note: Access the Jaeger UI at http://localhost:16686 to view traces. You should see spans from both frontend and backend services with complete request flows.

Configure retention and performance

Set up automated index management

Configure scheduled index cleanup to manage storage costs and performance. Note that the Jaeger CR above already enables esIndexCleaner; use this standalone CronJob only if you prefer to manage retention outside the operator.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: jaeger-index-cleaner
  namespace: observability
spec:
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: index-cleaner
            image: curlimages/curl:latest
            command:
            - /bin/sh
            - -c
            - |
              # Delete indices older than 7 days
              indices=$(curl -s "elasticsearch:9200/_cat/indices/jaeger-*?h=index" | grep jaeger-span- | awk '{print $1}')
              for index in $indices; do
                date_part=$(echo $index | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}')
                if [ -n "$date_part" ]; then
                  index_date=$(date -d "$date_part" +%s)
                  cutoff_date=$(date -d "7 days ago" +%s)
                  if [ $index_date -lt $cutoff_date ]; then
                    curl -X DELETE "elasticsearch:9200/$index"
                  fi
                fi
              done
          restartPolicy: OnFailure

Apply retention policies

Deploy the automated cleanup job to maintain optimal Elasticsearch performance.

kubectl apply -f index-policy.yaml
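
You can dry-run the CronJob's date logic locally before deploying it (this assumes GNU date, as shipped in most Linux images):

```shell
# Replays the cleanup job's cutoff comparison against a sample index name
index="jaeger-span-2024-01-15"                          # illustrative index name
date_part=$(echo "$index" | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}')
index_date=$(date -d "$date_part" +%s)
cutoff_date=$(date -d "7 days ago" +%s)
if [ "$index_date" -lt "$cutoff_date" ]; then
  echo "would delete $index"
fi
```

Running the comparison with echo instead of curl -X DELETE lets you verify which indices would be removed before the job touches real data.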

Security and production hardening

Enable TLS for Jaeger components

Secure communication between Jaeger components with TLS encryption.

apiVersion: v1
kind: Secret
metadata:
  name: jaeger-tls
  namespace: observability
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi... # base64 encoded certificate
  tls.key: LS0tLS1CRUdJTi... # base64 encoded private key
---
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production
  namespace: observability
spec:
  strategy: production
  collector:
    options:
      collector:
        grpc:
          tls:
            enabled: true
            cert: /tls/tls.crt
            key: /tls/tls.key
    volumeMounts:
    - name: tls-volume
      mountPath: /tls
      readOnly: true
    volumes:
    - name: tls-volume
      secret:
        secretName: jaeger-tls

Common issues

Symptom                            | Cause                                | Fix
No traces appearing in Jaeger UI   | Istio sidecar not injected           | Verify the namespace has the istio-injection=enabled label
Elasticsearch connection refused   | Service not ready or network policy  | Check kubectl get svc -n observability elasticsearch
Jaeger collector pods crashing     | Insufficient memory allocation       | Increase memory limits in the Jaeger CR to 2Gi
Traces missing spans from services | Sampling rate too low                | Increase randomSamplingPercentage in the Telemetry config
High storage consumption           | No index cleanup configured          | Enable esIndexCleaner in the Jaeger configuration
