Configure Kubernetes OpenTelemetry auto-instrumentation for microservices observability

Intermediate · 45 min · Apr 19, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Set up the OpenTelemetry Operator in Kubernetes to automatically instrument microservices with distributed tracing, giving you observability across your application stack without modifying application code.

Prerequisites

  • Running Kubernetes cluster with kubectl access
  • Cluster admin permissions for CRD installation
  • At least 4GB available memory for telemetry components

What this solves

OpenTelemetry auto-instrumentation eliminates the manual work of adding tracing code to your microservices. The OpenTelemetry Operator automatically injects telemetry collection into your Kubernetes pods, giving you distributed traces, metrics, and logs without application changes. This approach works across multiple programming languages and provides a unified observability strategy for complex microservice architectures.
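Conceptually, all you write is one annotation on the pod template; the operator's admission webhook mutates the pod at creation time. The sketch below illustrates the idea for Java (the exact injected spec varies by operator version, so treat the injected portion as an approximation):

```yaml
# What you write: a single annotation on the pod template.
metadata:
  annotations:
    instrumentation.opentelemetry.io/inject-java: "true"
# What the webhook roughly adds at pod creation: an init container that
# copies the Java agent into a shared emptyDir volume, plus env vars that
# activate it in your container, e.g.
#   JAVA_TOOL_OPTIONS=-javaagent:/otel-auto-instrumentation/javaagent.jar
# Your application image and code remain unchanged.
```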

Step-by-step configuration

Install cert-manager prerequisite

The OpenTelemetry Operator requires cert-manager for TLS certificate management. Install it first if not already present.

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml

Wait for cert-manager pods to be ready:

kubectl wait --for=condition=ready pod -l app.kubernetes.io/instance=cert-manager -n cert-manager --timeout=300s

Install OpenTelemetry Operator

Deploy the OpenTelemetry Operator using the official manifest. This creates the necessary CRDs and operator deployment.

kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml

Verify the operator is running:

kubectl get pods -n opentelemetry-operator-system

Create OpenTelemetry Collector configuration

Set up a Collector instance to receive, process, and export telemetry data. This collector will handle traces from all instrumented applications.

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-collector
  namespace: default
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
      jaeger:
        protocols:
          grpc:
            endpoint: 0.0.0.0:14250
          thrift_http:
            endpoint: 0.0.0.0:14268

    processors:
      batch:
        timeout: 1s
        send_batch_size: 1024
      memory_limiter:
        check_interval: 1s
        limit_mib: 512

    exporters:
      # Recent Collector releases removed the legacy logging and jaeger
      # exporters; use the debug exporter and a native OTLP exporter instead.
      debug:
        verbosity: normal
      otlp/jaeger:
        endpoint: jaeger-collector.jaeger:4317
        tls:
          insecure: true
      prometheus:
        endpoint: "0.0.0.0:8889"

    extensions:
      health_check:
        endpoint: 0.0.0.0:13133
      pprof:
        endpoint: 0.0.0.0:1777
      zpages:
        endpoint: 0.0.0.0:55679

    service:
      extensions: [health_check, pprof, zpages]
      pipelines:
        traces:
          receivers: [otlp, jaeger]
          processors: [memory_limiter, batch]
          exporters: [debug, otlp/jaeger]
        metrics:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [debug, prometheus]

Save the manifest as otel-collector.yaml, then apply it:

kubectl apply -f otel-collector.yaml

Install Jaeger for trace visualization

Deploy Jaeger to store and visualize distributed traces. This provides a web interface for exploring your application's trace data.

apiVersion: v1
kind: Namespace
metadata:
  name: jaeger
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jaeger
  namespace: jaeger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jaeger
  template:
    metadata:
      labels:
        app: jaeger
    spec:
      containers:
      - name: jaeger
        image: jaegertracing/all-in-one:1.50
        ports:
        - containerPort: 16686
          name: ui
        - containerPort: 14250
          name: grpc
        - containerPort: 14268
          name: http
        - containerPort: 4317
          name: otlp-grpc
        env:
        - name: COLLECTOR_OTLP_ENABLED
          value: "true"
---
apiVersion: v1
kind: Service
metadata:
  name: jaeger-ui
  namespace: jaeger
spec:
  selector:
    app: jaeger
  ports:
  - port: 16686
    targetPort: 16686
    name: ui
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: jaeger-collector
  namespace: jaeger
spec:
  selector:
    app: jaeger
  ports:
  - port: 14250
    targetPort: 14250
    name: grpc
  - port: 14268
    targetPort: 14268
    name: http
  - port: 4317
    targetPort: 4317
    name: otlp-grpc
  type: ClusterIP

Save the manifest as jaeger.yaml, then apply it:

kubectl apply -f jaeger.yaml

Configure auto-instrumentation for Java applications

Create an Instrumentation resource that defines how Java applications should be automatically instrumented. This injects the OpenTelemetry Java agent into matching pods.

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: java-instrumentation
  namespace: default
spec:
  exporter:
    endpoint: http://otel-collector-collector.default.svc.cluster.local:4318
  propagators:
    - tracecontext
    - baggage
    - b3
  sampler:
    type: parentbased_traceidratio
    argument: "1"
  java:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:1.31.0
    env:
      - name: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
        value: http://otel-collector-collector.default.svc.cluster.local:4318/v1/traces
      - name: OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
        value: http://otel-collector-collector.default.svc.cluster.local:4318/v1/metrics
      - name: OTEL_SERVICE_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.labels['app']
      - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: OTEL_RESOURCE_ATTRIBUTES_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: OTEL_RESOURCE_ATTRIBUTES
        value: k8s.namespace.name=$(OTEL_RESOURCE_ATTRIBUTES_NAMESPACE),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME)

Save the manifest as java-instrumentation.yaml, then apply it:

kubectl apply -f java-instrumentation.yaml

Configure auto-instrumentation for Node.js applications

Create a separate instrumentation configuration for Node.js applications. Each language requires its own instrumentation resource with language-specific settings.

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: nodejs-instrumentation
  namespace: default
spec:
  exporter:
    endpoint: http://otel-collector-collector.default.svc.cluster.local:4318
  propagators:
    - tracecontext
    - baggage
    - b3
  sampler:
    type: parentbased_traceidratio
    argument: "1"
  nodejs:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:0.44.0
    env:
      - name: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
        value: http://otel-collector-collector.default.svc.cluster.local:4318/v1/traces
      - name: OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
        value: http://otel-collector-collector.default.svc.cluster.local:4318/v1/metrics
      - name: OTEL_SERVICE_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.labels['app']
      - name: OTEL_RESOURCE_ATTRIBUTES_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: OTEL_RESOURCE_ATTRIBUTES_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: OTEL_RESOURCE_ATTRIBUTES
        value: k8s.namespace.name=$(OTEL_RESOURCE_ATTRIBUTES_NAMESPACE),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME)

Save the manifest as nodejs-instrumentation.yaml, then apply it:

kubectl apply -f nodejs-instrumentation.yaml

Deploy sample Java application with auto-instrumentation

Create a sample Java application to demonstrate auto-instrumentation. The annotation tells the operator to inject telemetry collection automatically. Replace the image and command below with your own Java application; the bare openjdk base image does not ship an app.jar.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-app
  template:
    metadata:
      labels:
        app: java-app
      annotations:
        instrumentation.opentelemetry.io/inject-java: "java-instrumentation"
    spec:
      containers:
      - name: app
        image: openjdk:11-jre-slim
        command: ["java", "-jar", "/app/app.jar"]
        ports:
        - containerPort: 8080
        env:
        - name: SERVER_PORT
          value: "8080"
        resources:
          limits:
            memory: "512Mi"
            cpu: "500m"
          requests:
            memory: "256Mi"
            cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: java-app-service
  namespace: default
spec:
  selector:
    app: java-app
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP

Save the manifest as java-app.yaml, then apply it:

kubectl apply -f java-app.yaml

Deploy sample Node.js application with auto-instrumentation

Create a Node.js application that will be automatically instrumented, demonstrating multi-language support in the same cluster. Replace the image and command with your own Node.js application; the bare node base image contains no server.js.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
      annotations:
        instrumentation.opentelemetry.io/inject-nodejs: "nodejs-instrumentation"
    spec:
      containers:
      - name: app
        image: node:18-alpine
        command: ["node", "server.js"]
        ports:
        - containerPort: 3000
        workingDir: /app
        env:
        - name: PORT
          value: "3000"
        resources:
          limits:
            memory: "256Mi"
            cpu: "250m"
          requests:
            memory: "128Mi"
            cpu: "100m"
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-app-service
  namespace: default
spec:
  selector:
    app: nodejs-app
  ports:
  - port: 80
    targetPort: 3000
  type: ClusterIP

Save the manifest as nodejs-app.yaml, then apply it:

kubectl apply -f nodejs-app.yaml

Configure network policies for telemetry traffic

Create network policies that allow telemetry traffic from instrumented applications to the collector while keeping other egress locked down. Two details matter here: listing Egress in policyTypes denies every flow not explicitly allowed, so the policy must also permit DNS lookups, and the namespace selector should use the kubernetes.io/metadata.name label that Kubernetes sets on every namespace automatically.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-telemetry
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: opentelemetry-collector
    ports:
    - protocol: TCP
      port: 4317
    - protocol: TCP
      port: 4318
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: jaeger
    ports:
    - protocol: TCP
      port: 14250
    - protocol: TCP
      port: 14268
    - protocol: TCP
      port: 4317
  # Allow DNS; without this rule, name resolution from selected pods fails.
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

Save the manifest as telemetry-network-policy.yaml, then apply it:

kubectl apply -f telemetry-network-policy.yaml

Set up Jaeger UI access

Create a port-forward to access the Jaeger UI for viewing distributed traces. This allows you to explore the telemetry data collected from your applications.

kubectl port-forward -n jaeger svc/jaeger-ui 16686:16686

Access Jaeger UI at http://localhost:16686 to view traces and service dependencies.
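Port-forwarding is fine for ad-hoc use, but for persistent access you could expose the UI through an Ingress instead. A minimal sketch, assuming an NGINX ingress controller is installed and jaeger.example.com is a placeholder hostname you control:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jaeger-ui
  namespace: jaeger
spec:
  ingressClassName: nginx
  rules:
  - host: jaeger.example.com   # placeholder; use your own hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jaeger-ui
            port:
              number: 16686
```

The Jaeger UI has no built-in authentication, so put TLS and access control (for example, ingress-level basic auth or an OAuth proxy) in front of it before exposing it beyond the cluster.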

Configure sampling and resource allocation

Configure trace sampling rates

Adjust sampling rates to control the volume of trace data while maintaining observability coverage. Lower rates reduce overhead but may miss important traces.

kubectl patch instrumentation java-instrumentation --type='merge' -p='
{
  "spec": {
    "sampler": {
      "type": "parentbased_traceidratio",
      "argument": "0.1"
    }
  }
}'
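The same sampling can also be set per workload through the standard OpenTelemetry SDK environment variables, which override the Instrumentation default for a single container. A sketch of a container env fragment (the variable names are defined by the OpenTelemetry specification):

```yaml
# Container-level override via standard OTel SDK env vars:
env:
- name: OTEL_TRACES_SAMPLER
  value: parentbased_traceidratio
- name: OTEL_TRACES_SAMPLER_ARG
  value: "0.1"   # sample roughly 10% of new (root) traces
```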

Configure collector resource limits

Set appropriate resource limits for the OpenTelemetry Collector to handle your expected telemetry volume without impacting cluster performance.

kubectl patch opentelemetrycollector otel-collector --type='merge' -p='
{
  "spec": {
    "resources": {
      "limits": {
        "memory": "1Gi",
        "cpu": "500m"
      },
      "requests": {
        "memory": "512Mi",
        "cpu": "250m"
      }
    }
  }
}'
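Patches are convenient for experiments, but for anything kept in version control the same settings belong in the OpenTelemetryCollector manifest itself. The equivalent declarative form (the config block is elided here; it is the same one defined earlier):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-collector
  namespace: default
spec:
  mode: deployment
  resources:
    limits:
      memory: 1Gi
      cpu: 500m
    requests:
      memory: 512Mi
      cpu: 250m
  config: |
    # ... receivers/processors/exporters/service as defined above ...
```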

Verify your setup

Check that all OpenTelemetry components are running correctly:

kubectl get pods -n opentelemetry-operator-system
kubectl get pods -n jaeger
kubectl get pods -l app.kubernetes.io/name=opentelemetry-collector
kubectl get instrumentation

Verify that applications have been instrumented:

kubectl describe pod -l app=java-app | grep -A5 -B5 opentelemetry
kubectl describe pod -l app=nodejs-app | grep -A5 -B5 opentelemetry

Check collector logs for telemetry data:

kubectl logs -l app.kubernetes.io/name=opentelemetry-collector | head -20

Test trace generation by making requests to your applications:

kubectl port-forward svc/java-app-service 8080:80 &
curl http://localhost:8080/health
kill %1

Common issues

| Symptom | Cause | Fix |
| --- | --- | --- |
| Pods not getting instrumented | Missing or incorrect annotation | Verify the annotation matches the Instrumentation name: kubectl get instrumentation |
| Collector not receiving traces | Network connectivity issues | Check the collector service: kubectl get svc -l app.kubernetes.io/name=opentelemetry-collector |
| High memory usage in collector | No memory limits or batch processing | Configure the memory_limiter processor and resource limits |
| Missing traces in Jaeger | Sampling rate too low | Increase the sampler argument value in the instrumentation config |
| Application startup failures | Instrumentation image incompatibility | Check that the instrumentation image version matches your runtime |
| Network policy blocking telemetry | Restrictive egress rules | Allow egress to collector ports 4317/4318 |
