Deploy the Jaeger operator on Kubernetes with Istio telemetry integration for comprehensive distributed tracing across microservices, and configure an Elasticsearch backend for production-grade trace storage.
What this solves
Distributed tracing provides complete visibility into request flows across microservices in your Kubernetes cluster. This tutorial integrates Jaeger with Istio service mesh to automatically capture traces from all service communication without code changes. You'll configure the Jaeger operator, set up Elasticsearch for trace storage, and enable Istio's telemetry v2 for seamless trace collection across your entire service mesh.
Prerequisites
Before starting, verify your environment meets these requirements:
- Kubernetes cluster running version 1.24 or later
- Istio service mesh installed and configured
- kubectl access with cluster-admin privileges
- Helm 3.x installed on your local machine
- At least 4GB available memory for Elasticsearch
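The version requirement can be checked mechanically before you start. A minimal preflight sketch; the helper name `meets_min_version` is illustrative, not a standard tool:

```shell
# meets_min_version FOUND REQUIRED -- succeed when FOUND (MAJOR.MINOR[.PATCH])
# is at least REQUIRED (MAJOR.MINOR); pure parameter expansion, no extra tools
meets_min_version() {
  local maj=${1%%.*} rest=${1#*.} rmaj=${2%%.*} rmin=${2#*.}
  local min=${rest%%.*}
  rmin=${rmin%%.*}
  [ "$maj" -gt "$rmaj" ] || { [ "$maj" -eq "$rmaj" ] && [ "$min" -ge "$rmin" ]; }
}

# Example: compare a cluster's reported version against the 1.24 floor
meets_min_version "1.28" "1.24" && echo "cluster version OK"   # prints "cluster version OK"
```

Pair it with the server version reported by `kubectl version` (not the client version) when wiring it into a real preflight script.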
Step-by-step installation
Install the Jaeger operator
The Jaeger operator manages the lifecycle of Jaeger instances and provides custom resource definitions for configuration. It depends on cert-manager for its webhook certificates, so make sure cert-manager is installed in the cluster first.
kubectl create namespace observability
kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.51.0/jaeger-operator.yaml -n observability
Verify operator installation
Wait for the operator pod to reach running status before proceeding.
kubectl get pods -n observability -l name=jaeger-operator
kubectl logs -n observability deployment/jaeger-operator
Deploy Elasticsearch for trace storage
Elasticsearch provides scalable trace storage with rich query capabilities. The single-node deployment below is suitable for evaluation; for production, run a multi-node Elasticsearch cluster as a StatefulSet with persistent volumes, since the emptyDir volume used here loses all traces when the pod restarts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
        ports:
        - containerPort: 9200
        - containerPort: 9300
        env:
        - name: discovery.type
          value: single-node
        - name: ES_JAVA_OPTS
          value: "-Xms2g -Xmx2g"
        - name: xpack.security.enabled
          value: "false"
        resources:
          requests:
            memory: 2Gi
            cpu: 500m
          limits:
            memory: 4Gi
            cpu: 1000m
        volumeMounts:
        - name: elasticsearch-storage
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: elasticsearch-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: observability
spec:
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
    name: http
  - port: 9300
    targetPort: 9300
    name: transport
Apply the Elasticsearch configuration
Deploy Elasticsearch and wait for it to become available before configuring Jaeger.
kubectl apply -f elasticsearch-deployment.yaml
kubectl wait --for=condition=available deployment/elasticsearch -n observability --timeout=300s
Create Jaeger instance with Elasticsearch backend
Configure Jaeger to use Elasticsearch for trace storage with production-ready settings.
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production
  namespace: observability
spec:
  strategy: production
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch:9200
        index-prefix: jaeger
        username: ""
        password: ""
        tls:
          enabled: false
    esIndexCleaner:
      enabled: true
      numberOfDays: 7
      schedule: "55 23 * * *"
  collector:
    replicas: 2
    resources:
      requests:
        memory: 512Mi
        cpu: 200m
      limits:
        memory: 1Gi
        cpu: 500m
  query:
    replicas: 2
    resources:
      requests:
        memory: 256Mi
        cpu: 100m
      limits:
        memory: 512Mi
        cpu: 200m
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: "nginx"
    hosts:
    - jaeger.example.com
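Before settling on the retention window (numberOfDays), it helps to estimate how much index data a given span rate produces. A back-of-the-envelope sketch; the 500-byte average span size is an assumption, so measure your own workload:

```shell
# es_bytes_per_day SPANS_PER_SEC AVG_SPAN_BYTES -- bytes written per daily index
es_bytes_per_day() {
  echo $(( $1 * 86400 * $2 ))
}

# e.g. 500 spans/s at ~500 bytes each, before replicas and Lucene overhead
es_bytes_per_day 500 500   # prints 21600000000 (~21.6 GB/day)
```

Multiply the result by numberOfDays (7 above) for the total retained volume, and size the Elasticsearch data volume accordingly.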
Deploy the Jaeger instance
Apply the Jaeger configuration and verify all components are running correctly.
kubectl apply -f jaeger-production.yaml
kubectl get jaeger -n observability
kubectl get pods -n observability -l app.kubernetes.io/instance=jaeger-production
Configure Istio telemetry v2 for tracing
Enable distributed tracing in Istio by registering Jaeger as a tracing extension provider. This applies to the entire mesh; sampling rates are configured separately through the Telemetry API in a later step, so the deprecated pilot traceSampling and global tracer settings are not needed.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: tracing-config
  namespace: istio-system
spec:
  meshConfig:
    extensionProviders:
    - name: jaeger
      zipkin:
        service: jaeger-production-collector.observability.svc.cluster.local
        port: 9411
    defaultProviders:
      tracing:
      - jaeger
Apply Istio tracing configuration
Update your Istio installation so the mesh picks up the new tracing provider. An IstioOperator resource is not reconciled by a plain kubectl apply unless the Istio operator controller is running, so use istioctl instead.
istioctl install -f istio-tracing.yaml -y
kubectl rollout restart deployment/istiod -n istio-system
Create telemetry configuration
Configure Istio's telemetry v2 to specify tracing behavior and sampling rates for your applications.
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: default
  namespace: istio-system
spec:
  tracing:
  - providers:
    - name: jaeger
    randomSamplingPercentage: 100.0
---
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: namespace-tracing
  namespace: production
spec:
  tracing:
  - providers:
    - name: jaeger
    randomSamplingPercentage: 10.0
Apply telemetry configuration
Deploy the telemetry configuration to enable tracing with the appropriate sampling rates. The production namespace referenced by the second resource must already exist; create it with kubectl create namespace production if you have not yet reached the sample-application step.
kubectl apply -f telemetry-config.yaml
Deploy sample microservices
Create test applications to verify distributed tracing functionality across service communication.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
        version: v1
    spec:
      containers:
      - name: frontend
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: production
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
        version: v1
    spec:
      containers:
      - name: backend
        image: httpd:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: production
spec:
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 80
Deploy and inject sidecars
Create the production namespace with automatic sidecar injection and deploy the sample applications.
kubectl create namespace production
kubectl label namespace production istio-injection=enabled
kubectl apply -f sample-apps.yaml
Configure service mesh networking
Create virtual services and destination rules to enable communication between microservices with tracing.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend
  namespace: production
spec:
  hosts:
  - frontend
  http:
  - route:
    - destination:
        host: frontend
        subset: v1
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: frontend
  namespace: production
spec:
  host: frontend
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: backend
  namespace: production
spec:
  hosts:
  - backend
  http:
  - route:
    - destination:
        host: backend
        subset: v1
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend
  namespace: production
spec:
  host: backend
  subsets:
  - name: v1
    labels:
      version: v1
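Once the rules are applied, you can confirm that a sidecar actually learned the new subsets from istiod by inspecting its Envoy cluster config. A small sketch; `knows_subset` is an illustrative helper that just greps `istioctl proxy-config cluster` output:

```shell
# knows_subset LISTING HOST SUBSET -- succeed when the Envoy cluster listing
# contains an entry for HOST with the given SUBSET
knows_subset() {
  printf '%s\n' "$1" | grep "$2" | grep -q "$3"
}

# Usage against a live pod (pod name is a placeholder):
#   out=$(istioctl proxy-config cluster <frontend-pod> -n production)
#   knows_subset "$out" backend v1 && echo "backend subset v1 is visible"
```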
Apply networking configuration
Deploy the service mesh networking rules to enable proper traffic flow and tracing.
kubectl apply -f service-mesh-config.yaml
Configure distributed tracing for microservices
Enable application-level tracing headers
Configure your applications to propagate tracing headers for complete request visibility across service boundaries.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tracing-headers
  namespace: production
data:
  nginx.conf: |
    server {
      listen 80;
      location / {
        proxy_set_header X-Request-ID $request_id;
        proxy_set_header X-B3-TraceId $http_x_b3_traceid;
        proxy_set_header X-B3-SpanId $http_x_b3_spanid;
        proxy_set_header X-B3-ParentSpanId $http_x_b3_parentspanid;
        proxy_set_header X-B3-Sampled $http_x_b3_sampled;
        proxy_set_header X-B3-Flags $http_x_b3_flags;
        proxy_pass http://backend;
      }
    }
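For clients that call services directly (for example a shell-based batch job), the same context can be carried forward by copying the B3 headers onto the outbound request. A hedged sketch; `b3_headers` is an illustrative helper, and real instrumentation would also mint a fresh span ID rather than only promoting the incoming span to parent:

```shell
# b3_headers TRACEID SPANID SAMPLED -- print the outbound B3 header lines,
# carrying the incoming span id forward as the parent span id
b3_headers() {
  printf 'X-B3-TraceId: %s\n' "$1"
  printf 'X-B3-ParentSpanId: %s\n' "$2"
  printf 'X-B3-Sampled: %s\n' "$3"
}

# Usage (header values are illustrative): attach each printed line to the
# outbound HTTP call, e.g. as curl -H arguments.
```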
Configure trace sampling policies
Set up tiered sampling to balance observability with performance and storage cost. Istio expects at most one selector-less Telemetry resource per namespace, so the selector-less standard-tracing policy below supersedes the earlier namespace-tracing example; policies with a selector, such as high-priority-tracing, override it for matching workloads.
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: high-priority-tracing
  namespace: production
spec:
  selector:
    matchLabels:
      app: critical-service
  tracing:
  - randomSamplingPercentage: 100.0
---
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: standard-tracing
  namespace: production
spec:
  tracing:
  - randomSamplingPercentage: 1.0
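Sampling percentages translate directly into stored trace volume, so it is worth doing the arithmetic before picking a rate. A quick sketch; the request rates are illustrative:

```shell
# traces_per_day RPS PCT -- traces kept per day at PCT percent sampling
# (integer arithmetic; PCT is a whole-number percentage)
traces_per_day() {
  echo $(( $1 * 86400 * $2 / 100 ))
}

traces_per_day 100 100   # prints 8640000 -- every trace at 100 req/s
traces_per_day 100 1     # prints 86400  -- the same traffic at 1% sampling
```

At production traffic levels, even single-digit percentages usually give enough traces to debug latency issues while cutting Elasticsearch load by two orders of magnitude.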
Apply sampling configuration
Deploy the sampling policies to optimize trace collection based on service criticality.
kubectl apply -f sampling-config.yaml
Create service performance monitoring
Set up service monitors to track distributed tracing metrics and system health.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: jaeger-metrics
  namespace: observability
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: jaeger
  endpoints:
  - port: admin-http
    path: /metrics
    interval: 15s
---
apiVersion: v1
kind: Service
metadata:
  name: jaeger-collector-metrics
  namespace: observability
  labels:
    app.kubernetes.io/name: jaeger
spec:
  selector:
    app.kubernetes.io/component: collector
  ports:
  - name: admin-http
    port: 14269
    targetPort: 14269
Verify your setup
Test the complete distributed tracing pipeline to ensure traces are properly collected and stored.
# Check Jaeger components status
kubectl get pods -n observability -l app.kubernetes.io/instance=jaeger-production
# Verify Elasticsearch is receiving traces
kubectl exec -n observability deployment/elasticsearch -- curl -s "localhost:9200/jaeger-*/_count"
# Check Istio sidecar injection (each pod should list an istio-proxy container)
kubectl get pods -n production -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}'
# Generate test traffic (busybox wget ships in the nginx:alpine image)
kubectl exec -n production deployment/frontend -- wget -qO- http://backend
# Port-forward to access the Jaeger UI at http://localhost:16686
kubectl port-forward -n observability svc/jaeger-production-query 16686:16686
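With the port-forward running, the Jaeger HTTP API can confirm which workloads are reporting spans. A sketch assuming the query service is forwarded to localhost:16686; `has_service` is an illustrative helper that greps the `/api/services` JSON body:

```shell
# has_service JSON NAME -- succeed when NAME appears in the /api/services body
has_service() {
  printf '%s' "$1" | grep -q "\"$2\""
}

# Usage against a live port-forward:
#   resp=$(curl -s http://localhost:16686/api/services)
#   has_service "$resp" "frontend.production" && echo "frontend is reporting"
```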
Configure retention and performance
Set up automated index management
Configure automated cleanup of old trace indices to control storage growth. The Jaeger CR above already enables esIndexCleaner for this purpose; the standalone CronJob below is an alternative for clusters where you need custom retention logic.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: jaeger-index-cleaner
  namespace: observability
spec:
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: index-cleaner
            image: curlimages/curl:latest
            command:
            - /bin/sh
            - -c
            - |
              # Delete jaeger-span indices older than 7 days
              # (BusyBox-compatible date handling; no GNU "7 days ago" syntax)
              cutoff_date=$(( $(date +%s) - 7 * 86400 ))
              indices=$(curl -s "elasticsearch:9200/_cat/indices/jaeger-span-*?h=index")
              for index in $indices; do
                date_part=$(echo "$index" | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}')
                [ -n "$date_part" ] || continue
                index_date=$(date -u -D %Y-%m-%d -d "$date_part" +%s)
                if [ "$index_date" -lt "$cutoff_date" ]; then
                  curl -s -X DELETE "elasticsearch:9200/$index"
                fi
              done
          restartPolicy: OnFailure
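The cleanup decision itself is plain epoch arithmetic, which you can sanity-check locally before shipping it in a CronJob. A sketch; `index_expired` is an illustrative helper and relies on `date -d` accepting ISO dates, as both GNU and BusyBox date do:

```shell
# index_expired INDEX DAYS -- succeed when INDEX (jaeger-span-YYYY-MM-DD)
# is more than DAYS days old
index_expired() {
  local d idx_epoch cutoff
  d=${1#jaeger-span-}                      # strip the prefix, keep the date
  idx_epoch=$(date -u -d "$d" +%s) || return 2
  cutoff=$(( $(date +%s) - $2 * 86400 ))
  [ "$idx_epoch" -lt "$cutoff" ]
}

index_expired jaeger-span-2020-01-01 7 && echo "would delete"   # prints "would delete"
```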
Apply retention policies
Deploy the automated cleanup job to maintain optimal Elasticsearch performance.
kubectl apply -f index-policy.yaml
Security and production hardening
Enable TLS for Jaeger components
Secure communication between Jaeger components with TLS encryption.
apiVersion: v1
kind: Secret
metadata:
  name: jaeger-tls
  namespace: observability
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi... # base64-encoded certificate
  tls.key: LS0tLS1CRUdJTi... # base64-encoded private key
---
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production
  namespace: observability
spec:
  strategy: production
  collector:
    options:
      collector:
        grpc:
          tls:
            enabled: true
            cert: /tls/tls.crt
            key: /tls/tls.key
  volumeMounts:
  - name: tls-volume
    mountPath: /tls
    readOnly: true
  volumes:
  - name: tls-volume
    secret:
      secretName: jaeger-tls
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| No traces appearing in Jaeger UI | Istio sidecar not injected | Verify namespace has istio-injection=enabled label |
| Elasticsearch connection refused | Service not ready or network policy | Check kubectl get svc -n observability elasticsearch |
| Jaeger collector pods crashing | Insufficient memory allocation | Increase memory limits in Jaeger CR to 2Gi |
| Traces missing spans from services | Sampling rate too low | Increase randomSamplingPercentage in Telemetry config |
| High storage consumption | No index cleanup configured | Enable esIndexCleaner in Jaeger configuration |
Next steps
- Configure Jaeger with NGINX reverse proxy and SSL termination
- Configure Kubernetes OpenTelemetry auto-instrumentation for microservices observability
- Configure Istio security policies with mutual TLS and authorization for Kubernetes service mesh
- Implement Jaeger security with TLS encryption and authentication for distributed tracing
- Set up Jaeger multi-datacenter replication for disaster recovery and high availability
Running this in production?
Automated install script
Review the script below, then run it to automate the entire setup.
#!/usr/bin/env bash
set -euo pipefail
# Jaeger + Istio + Kubernetes Distributed Tracing Install Script
# Production-quality installer for distributed tracing with Jaeger
# Colors for output
readonly RED='\033[0;31m'
readonly GREEN='\033[0;32m'
readonly YELLOW='\033[1;33m'
readonly BLUE='\033[0;34m'
readonly NC='\033[0m' # No Color
# Configuration
readonly SCRIPT_NAME=$(basename "$0")
readonly JAEGER_OPERATOR_VERSION="v1.51.0"
readonly ELASTICSEARCH_VERSION="8.11.0"
readonly NAMESPACE="observability"
readonly DOMAIN="${1:-jaeger.local}"
# Print colored output
print_status() {
echo -e "${GREEN}[INFO]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
print_step() {
echo -e "${BLUE}[$1]${NC} $2"
}
# Usage message
usage() {
cat << EOF
Usage: $SCRIPT_NAME [DOMAIN]
Install Jaeger with Istio service mesh for distributed tracing on Kubernetes.
Arguments:
DOMAIN Domain for Jaeger UI (default: jaeger.local)
Examples:
$SCRIPT_NAME
$SCRIPT_NAME jaeger.example.com
Requirements:
- Kubernetes cluster (v1.24+)
- Istio service mesh installed
- kubectl with cluster-admin privileges
- At least 4GB available memory
EOF
}
# Cleanup function
cleanup() {
local exit_code=$?
if [ $exit_code -ne 0 ]; then
print_error "Installation failed. Cleaning up..."
kubectl delete namespace "$NAMESPACE" --ignore-not-found=true 2>/dev/null || true
rm -f /tmp/elasticsearch-deployment.yaml /tmp/jaeger-instance.yaml 2>/dev/null || true
fi
exit $exit_code
}
trap cleanup ERR EXIT
# This installer only talks to the Kubernetes API through kubectl, so no OS
# package manager or root privileges are needed; these hooks are no-ops.
detect_distro() { :; }
check_privileges() { :; }
# Validate arguments
validate_args() {
if [ "$#" -gt 1 ]; then
print_error "Too many arguments"
usage
exit 1
fi
if [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
usage
exit 0
fi
}
# Check prerequisites
check_prerequisites() {
print_step "1/8" "Checking prerequisites..."
# Check kubectl
if ! command -v kubectl >/dev/null 2>&1; then
print_error "kubectl is not installed"
exit 1
fi
# Check cluster connectivity
if ! kubectl cluster-info >/dev/null 2>&1; then
print_error "Cannot connect to Kubernetes cluster"
exit 1
fi
# Check cluster (server) version -- not the kubectl client version
local k8s_version
k8s_version=$(kubectl version -o yaml 2>/dev/null | awk '/serverVersion:/{s=1} s && /gitVersion:/{gsub(/[v"+]/, "", $2); print $2; exit}')
local major_version
major_version=$(echo "$k8s_version" | cut -d. -f1)
local minor_version
minor_version=$(echo "$k8s_version" | cut -d. -f2)
if [ "$major_version" -lt 1 ] || { [ "$major_version" -eq 1 ] && [ "$minor_version" -lt 24 ]; }; then
print_error "Kubernetes version 1.24+ required, found: $k8s_version"
exit 1
fi
# Check Istio installation
if ! kubectl get namespace istio-system >/dev/null 2>&1; then
print_error "Istio not found. Please install Istio service mesh first"
exit 1
fi
print_status "Prerequisites validated successfully"
}
# Create namespace
create_namespace() {
print_step "2/8" "Creating observability namespace..."
kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -
print_status "Namespace '$NAMESPACE' ready"
}
# Install Jaeger operator
install_jaeger_operator() {
print_step "3/8" "Installing Jaeger operator..."
kubectl apply -f "https://github.com/jaegertracing/jaeger-operator/releases/download/${JAEGER_OPERATOR_VERSION}/jaeger-operator.yaml" -n "$NAMESPACE"
# Wait for operator to be ready
print_status "Waiting for Jaeger operator to be ready..."
kubectl wait --for=condition=available deployment/jaeger-operator -n "$NAMESPACE" --timeout=300s
print_status "Jaeger operator installed successfully"
}
# Deploy Elasticsearch
deploy_elasticsearch() {
print_step "4/8" "Deploying Elasticsearch for trace storage..."
cat > /tmp/elasticsearch-deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: $NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      securityContext:
        fsGroup: 1000
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:$ELASTICSEARCH_VERSION
        ports:
        - containerPort: 9200
        - containerPort: 9300
        env:
        - name: discovery.type
          value: single-node
        - name: ES_JAVA_OPTS
          value: "-Xms2g -Xmx2g"
        - name: xpack.security.enabled
          value: "false"
        resources:
          requests:
            memory: 2Gi
            cpu: 500m
          limits:
            memory: 4Gi
            cpu: 1000m
        securityContext:
          runAsNonRoot: true
          runAsUser: 1000
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
        volumeMounts:
        - name: elasticsearch-storage
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: elasticsearch-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: $NAMESPACE
spec:
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
    name: http
  - port: 9300
    targetPort: 9300
    name: transport
EOF
chmod 644 /tmp/elasticsearch-deployment.yaml
kubectl apply -f /tmp/elasticsearch-deployment.yaml
print_status "Waiting for Elasticsearch to be ready..."
kubectl wait --for=condition=available deployment/elasticsearch -n "$NAMESPACE" --timeout=300s
print_status "Elasticsearch deployed successfully"
}
# Create Jaeger instance
create_jaeger_instance() {
print_step "5/8" "Creating Jaeger production instance..."
cat > /tmp/jaeger-instance.yaml << EOF
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production
  namespace: $NAMESPACE
spec:
  strategy: production
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch:9200
        index-prefix: jaeger
        username: ""
        password: ""
        tls:
          enabled: false
    esIndexCleaner:
      enabled: true
      numberOfDays: 7
      schedule: "55 23 * * *"
  collector:
    replicas: 2
    resources:
      requests:
        memory: 512Mi
        cpu: 200m
      limits:
        memory: 1Gi
        cpu: 500m
  query:
    replicas: 2
    resources:
      requests:
        memory: 256Mi
        cpu: 100m
      limits:
        memory: 512Mi
        cpu: 200m
  ingress:
    enabled: false
EOF
chmod 644 /tmp/jaeger-instance.yaml
kubectl apply -f /tmp/jaeger-instance.yaml
print_status "Waiting for Jaeger components to be ready..."
sleep 30
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=jaeger -n "$NAMESPACE" --timeout=300s
print_status "Jaeger instance created successfully"
}
# Configure Istio telemetry
configure_istio_telemetry() {
print_step "6/8" "Configuring Istio telemetry for Jaeger..."
# Note: this merge-patch replaces the entire 'mesh' entry of the istio
# configmap; if you have customized mesh settings, fold them in by hand.
kubectl patch configmap istio -n istio-system --type merge -p='{"data":{"mesh":"defaultProviders:\n  tracing:\n  - jaeger\ndefaultConfig:\n  tracing:\n    zipkin:\n      address: jaeger-production-collector.observability:9411"}}'
# Restart Istio components to pick up new configuration
kubectl rollout restart deployment/istiod -n istio-system
kubectl rollout status deployment/istiod -n istio-system --timeout=300s
print_status "Istio telemetry configured for Jaeger"
}
# Create service and ingress
create_access() {
print_step "7/8" "Setting up Jaeger UI access..."
# Create NodePort service for Jaeger UI
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: jaeger-ui-nodeport
  namespace: $NAMESPACE
spec:
  type: NodePort
  selector:
    app.kubernetes.io/instance: jaeger-production
    app.kubernetes.io/component: query
  ports:
  - port: 16686
    targetPort: 16686
    nodePort: 30686
    protocol: TCP
EOF
print_status "Jaeger UI accessible via NodePort 30686"
}
# Verify installation
verify_installation() {
print_step "8/8" "Verifying installation..."
# Check all pods are running
local failed_pods
failed_pods=$(kubectl get pods -n "$NAMESPACE" --field-selector=status.phase!=Running --no-headers | wc -l)
if [ "$failed_pods" -gt 0 ]; then
print_warning "Some pods are not running:"
kubectl get pods -n "$NAMESPACE" --field-selector=status.phase!=Running
fi
# Check Jaeger collector service
if kubectl get service jaeger-production-collector -n "$NAMESPACE" >/dev/null 2>&1; then
print_status "✓ Jaeger collector service ready"
else
print_error "✗ Jaeger collector service not found"
return 1
fi
# Check Jaeger query service
if kubectl get service jaeger-production-query -n "$NAMESPACE" >/dev/null 2>&1; then
print_status "✓ Jaeger query service ready"
else
print_error "✗ Jaeger query service not found"
return 1
fi
print_status "Installation completed successfully!"
echo
print_status "Access Jaeger UI:"
echo " - NodePort: http://<node-ip>:30686"
echo " - Port-forward: kubectl port-forward -n $NAMESPACE service/jaeger-production-query 16686:16686"
echo
print_status "To enable tracing for a namespace, run:"
echo " kubectl label namespace <namespace> istio-injection=enabled"
}
# Main execution
main() {
if [ "$#" -gt 0 ]; then
validate_args "$@"
fi
check_prerequisites
create_namespace
install_jaeger_operator
deploy_elasticsearch
create_jaeger_instance
configure_istio_telemetry
create_access
verify_installation
# Cleanup temp files
rm -f /tmp/elasticsearch-deployment.yaml /tmp/jaeger-instance.yaml
}
main "$@"
Review the script before running. Execute with: bash install.sh