Set up a comprehensive observability stack by integrating OpenTelemetry Collector with Elasticsearch, Logstash, and Kibana for distributed tracing, metrics collection, and unified monitoring across microservices and applications.
Prerequisites
- Root or sudo access
- 8GB RAM minimum
- 20GB disk space
- Network connectivity
What this solves
This tutorial sets up a unified observability stack by integrating OpenTelemetry with the ELK stack (Elasticsearch, Logstash, and Kibana). You'll configure OpenTelemetry Collector to receive traces and metrics from applications, process them through Logstash, store them in Elasticsearch, and visualize them in Kibana dashboards. This setup provides comprehensive monitoring for distributed systems, microservices, and application performance analysis.
Step-by-step installation
Install Elasticsearch 8
Install Elasticsearch to store OpenTelemetry trace data and metrics. We'll configure it for optimal trace storage performance.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt update
sudo apt install -y elasticsearch
Configure Elasticsearch for OpenTelemetry
Configure Elasticsearch with settings optimized for trace data storage and indexing patterns used by OpenTelemetry.
cluster.name: otel-observability
node.name: otel-node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["otel-node-1"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl.enabled: false
xpack.security.transport.ssl.enabled: false
indices.memory.index_buffer_size: 20%
thread_pool.write.queue_size: 1000
bootstrap.memory_lock: true
Configure Elasticsearch memory settings
Set a fixed JVM heap size for Elasticsearch. Keep the heap at or below half of available RAM; 2 GB here leaves headroom for Logstash, Kibana, and the collector on the same host. Create /etc/elasticsearch/jvm.options.d/heap.options with:
-Xms2g
-Xmx2g
sudo systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity
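The sizing rule above can be sketched as a small helper: half of RAM, but capped below the ~32 GB threshold where the JVM loses compressed object pointers. The function name `recommended_heap_gb` is ours, for illustration only.

```python
def recommended_heap_gb(total_ram_gb: float) -> int:
    """Half of RAM, capped at 31 GB so the JVM keeps compressed object pointers."""
    half = total_ram_gb / 2
    return int(min(half, 31)) or 1

# On the 8 GB minimum host from the prerequisites, half of RAM would be 4 GB;
# this tutorial uses 2 GB instead to leave room for Logstash and Kibana.
print(recommended_heap_gb(8))
print(recommended_heap_gb(128))
```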
Start Elasticsearch and configure authentication
Start Elasticsearch and set up authentication for secure access from OpenTelemetry components. For simplicity this tutorial grants the otel_writer user the superuser role; in production, create a role scoped to the otel-* indices instead.
sudo systemctl enable --now elasticsearch
sudo systemctl status elasticsearch
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
sudo /usr/share/elasticsearch/bin/elasticsearch-users useradd otel_writer -p otel_password123 -r superuser
Install Logstash 8
Install Logstash to process and transform OpenTelemetry data before storing it in Elasticsearch.
sudo apt install -y logstash
Configure Logstash for OpenTelemetry data processing
Create /etc/logstash/conf.d/otel-pipeline.conf so Logstash receives OpenTelemetry data via HTTP and processes it for Elasticsearch storage.
input {
http {
port => 8080
codec => json
additional_codecs => {
"application/x-protobuf" => "plain"
}
}
beats {
port => 5044
}
}
filter {
if [resourceSpans] {
# Process OpenTelemetry trace data
ruby {
code => "
spans = event.get('resourceSpans')
if spans
spans.each do |resource_span|
resource = (resource_span.dig('resource', 'attributes') || []).each_with_object({}) { |a, h| h[a['key']] = a['value'].is_a?(Hash) ? a['value'].values.first : a['value'] }
scope_spans = resource_span['scopeSpans'] || []
scope_spans.each do |scope_span|
spans_array = scope_span['spans'] || []
spans_array.each do |span|
new_event = LogStash::Event.new()
new_event.set('[@timestamp]', LogStash::Timestamp.at(span['startTimeUnixNano'].to_f / 1e9))
new_event.set('trace_id', span['traceId'])
new_event.set('span_id', span['spanId'])
new_event.set('parent_span_id', span['parentSpanId'])
new_event.set('operation_name', span['name'])
new_event.set('duration', span['endTimeUnixNano'].to_i - span['startTimeUnixNano'].to_i)
new_event.set('service_name', resource['service.name'])
new_event.set('service_version', resource['service.version'])
new_event.set('span_kind', span['kind'])
new_event.set('status_code', span.dig('status', 'code'))
new_event.set('attributes', span['attributes'])
new_event.set('events', span['events'])
new_event.tag('opentelemetry-span')
yield new_event
end
end
end
event.cancel
end
"
}
}
if [resourceMetrics] {
# Process OpenTelemetry metrics data
ruby {
code => "
metrics = event.get('resourceMetrics')
if metrics
metrics.each do |resource_metric|
resource = (resource_metric.dig('resource', 'attributes') || []).each_with_object({}) { |a, h| h[a['key']] = a['value'].is_a?(Hash) ? a['value'].values.first : a['value'] }
scope_metrics = resource_metric['scopeMetrics'] || []
scope_metrics.each do |scope_metric|
metrics_array = scope_metric['metrics'] || []
metrics_array.each do |metric|
new_event = LogStash::Event.new()
new_event.set('[@timestamp]', LogStash::Timestamp.now)
new_event.set('metric_name', metric['name'])
new_event.set('metric_description', metric['description'])
new_event.set('metric_unit', metric['unit'])
new_event.set('service_name', resource['service.name'])
new_event.set('service_version', resource['service.version'])
new_event.set('metric_data', metric)
new_event.tag('opentelemetry-metric')
yield new_event
end
end
end
event.cancel
end
"
}
}
# Add common fields
mutate {
add_field => { "data_type" => "observability" }
}
# Parse and enhance trace data
if "opentelemetry-span" in [tags] {
mutate {
add_field => { "index_pattern" => "otel-spans" }
}
# Convert duration from nanoseconds to milliseconds
ruby {
code => "
duration_ns = event.get('duration')
if duration_ns
event.set('duration_ms', duration_ns.to_f / 1000000)
end
"
}
}
if "opentelemetry-metric" in [tags] {
mutate {
add_field => { "index_pattern" => "otel-metrics" }
}
}
}
output {
if "opentelemetry-span" in [tags] {
elasticsearch {
hosts => ["localhost:9200"]
user => "otel_writer"
password => "otel_password123"
index => "otel-spans-%{+YYYY.MM.dd}"
manage_template => false
}
}
if "opentelemetry-metric" in [tags] {
elasticsearch {
hosts => ["localhost:9200"]
user => "otel_writer"
password => "otel_password123"
index => "otel-metrics-%{+YYYY.MM.dd}"
manage_template => false
}
}
stdout {
codec => rubydebug
}
}
Note that the elasticsearch output's template option expects a path to a template file and there is no template_pattern option, so define the index templates directly in Elasticsearch instead (run once, after Elasticsearch is up):
curl -X PUT -u "otel_writer:otel_password123" "http://localhost:9200/_index_template/otel-spans" \
-H "Content-Type: application/json" \
-d '{
"index_patterns": ["otel-spans-*"],
"template": {
"settings": { "number_of_shards": 1, "number_of_replicas": 0, "refresh_interval": "5s" },
"mappings": {
"properties": {
"@timestamp": { "type": "date" },
"trace_id": { "type": "keyword" },
"span_id": { "type": "keyword" },
"parent_span_id": { "type": "keyword" },
"operation_name": { "type": "text", "fields": { "keyword": { "type": "keyword" } } },
"service_name": { "type": "keyword" },
"service_version": { "type": "keyword" },
"duration": { "type": "long" },
"duration_ms": { "type": "float" },
"span_kind": { "type": "keyword" },
"status_code": { "type": "keyword" }
}
}
}
}'
curl -X PUT -u "otel_writer:otel_password123" "http://localhost:9200/_index_template/otel-metrics" \
-H "Content-Type: application/json" \
-d '{
"index_patterns": ["otel-metrics-*"],
"template": {
"settings": { "number_of_shards": 1, "number_of_replicas": 0, "refresh_interval": "5s" },
"mappings": {
"properties": {
"@timestamp": { "type": "date" },
"metric_name": { "type": "keyword" },
"service_name": { "type": "keyword" },
"service_version": { "type": "keyword" }
}
}
}
}'
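The Ruby filter above turns one OTLP/JSON envelope into one event per span, converting nanosecond timestamps to milliseconds along the way. The same flattening can be sketched in Python; the `flatten_spans` helper and the sample payload are illustrative, not part of the stack.

```python
import json

def flatten_spans(payload):
    """Yield one flat record per span from an OTLP/JSON trace payload."""
    for rs in payload.get("resourceSpans", []):
        # OTLP encodes attributes as a list of {key, value} pairs, not a map.
        attrs = {a["key"]: list(a["value"].values())[0]
                 for a in rs.get("resource", {}).get("attributes", [])}
        for ss in rs.get("scopeSpans", []):
            for span in ss.get("spans", []):
                start = int(span["startTimeUnixNano"])
                end = int(span["endTimeUnixNano"])
                yield {
                    "trace_id": span["traceId"],
                    "span_id": span["spanId"],
                    "operation_name": span["name"],
                    "service_name": attrs.get("service.name"),
                    "duration_ms": (end - start) / 1_000_000,
                }

payload = {
    "resourceSpans": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "test-service"}}]},
        "scopeSpans": [{"spans": [{
            "traceId": "5b8aa5a2d2c872e8321cf37308d69df2",
            "spanId": "051581bf3cb55c13",
            "name": "test-span",
            "startTimeUnixNano": "1699000000000000000",
            "endTimeUnixNano": "1699000001000000000"}]}],
    }]
}
records = list(flatten_spans(payload))
print(json.dumps(records, indent=2))
```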
Install Kibana 8
Install Kibana for visualizing OpenTelemetry data and creating observability dashboards.
sudo apt install -y kibana
Configure Kibana for observability
Edit /etc/kibana/kibana.yml to connect Kibana to Elasticsearch and prepare it for visualizing trace data.
server.port: 5601
server.host: "0.0.0.0"
server.publicBaseUrl: "http://203.0.113.10:5601"
elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.username: "otel_writer"
elasticsearch.password: "otel_password123"
logging.appenders.file.type: file
logging.appenders.file.fileName: /var/log/kibana/kibana.log
logging.appenders.file.layout.type: json
logging.root.appenders: [default, file]
logging.root.level: info
telemetry.enabled: false
Install OpenTelemetry Collector
Download and install the OpenTelemetry Collector to receive telemetry data from applications.
wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.91.0/otelcol-contrib_0.91.0_linux_amd64.tar.gz
tar -xzf otelcol-contrib_0.91.0_linux_amd64.tar.gz
sudo mv otelcol-contrib /usr/local/bin/
sudo chmod +x /usr/local/bin/otelcol-contrib
sudo mkdir -p /etc/otelcol
sudo mkdir -p /var/log/otelcol
sudo useradd --system --no-create-home --shell /bin/false otelcol
sudo chown -R otelcol:otelcol /var/log/otelcol
Configure OpenTelemetry Collector
Create /etc/otelcol/config.yaml so the collector receives traces and metrics from applications and exports them to Elasticsearch and Logstash.
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
jaeger:
protocols:
grpc:
endpoint: 0.0.0.0:14250
thrift_http:
endpoint: 0.0.0.0:14268
thrift_compact:
endpoint: 0.0.0.0:6831
thrift_binary:
endpoint: 0.0.0.0:6832
zipkin:
endpoint: 0.0.0.0:9411
prometheus:
config:
scrape_configs:
- job_name: 'otel-collector'
scrape_interval: 30s
static_configs:
- targets: ['localhost:8888']
hostmetrics:
collection_interval: 30s
scrapers:
cpu:
memory:
load:
disk:
filesystem:
network:
process:
processors:
batch:
send_batch_size: 1024
send_batch_max_size: 2048
timeout: 5s
memory_limiter:
limit_mib: 512
spike_limit_mib: 128
check_interval: 5s
resource:
attributes:
- key: deployment.environment
value: production
action: upsert
- key: collector.version
value: "0.91.0"
action: upsert
attributes:
actions:
- key: http.user_agent
action: delete
- key: http.request.header.authorization
action: delete
span:
name:
to_attributes:
rules:
- ^/api/v1/(?P<resource>.*?)/(?P<id>.*?)$
from_attributes: ["http.method", "http.route"]
separator: " "
exporters:
logging:
loglevel: info
elasticsearch:
endpoints: ["http://localhost:9200"]
user: otel_writer
password: otel_password123
traces_index: otel-spans
metrics_index: otel-metrics
logs_index: otel-logs
timeout: 30s
retry_on_failure:
enabled: true
initial_interval: 5s
max_interval: 30s
max_elapsed_time: 300s
otlphttp/logstash:
endpoint: "http://localhost:8080"
encoding: json
timeout: 30s
retry_on_failure:
enabled: true
initial_interval: 5s
max_interval: 30s
max_elapsed_time: 300s
prometheus:
endpoint: "0.0.0.0:8889"
const_labels:
environment: production
collector: otelcol-contrib
service:
telemetry:
logs:
level: info
development: false
encoding: json
output_paths: ["/var/log/otelcol/otelcol.log"]
error_output_paths: ["/var/log/otelcol/otelcol-error.log"]
metrics:
address: 0.0.0.0:8888
level: detailed
extensions: [health_check, pprof, zpages]
pipelines:
traces:
receivers: [otlp, jaeger, zipkin]
processors: [memory_limiter, resource, attributes, span, batch]
exporters: [elasticsearch, otlphttp/logstash, logging]
metrics:
receivers: [otlp, prometheus, hostmetrics]
processors: [memory_limiter, resource, batch]
exporters: [elasticsearch, otlphttp/logstash, prometheus, logging]
logs:
receivers: [otlp]
processors: [memory_limiter, resource, batch]
exporters: [elasticsearch, logging]
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: 0.0.0.0:1777
zpages:
endpoint: 0.0.0.0:55679
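The span processor's to_attributes rule extracts URL path segments into span attributes and substitutes them back into the span name. With a rule such as `^/api/v1/(?P<resource>.*?)/(?P<id>.*?)$` (the group names here are illustrative), the behaviour can be previewed in Python, whose `(?P<name>...)` syntax matches the RE2 form the collector uses:

```python
import re

RULE = re.compile(r"^/api/v1/(?P<resource>.*?)/(?P<id>.*?)$")

def apply_to_attributes(span_name):
    """Mimic the span processor: extracted groups become attributes,
    and each matched segment in the name is replaced by {group_name}."""
    m = RULE.match(span_name)
    if not m:
        return span_name, {}
    new_name = span_name
    for group, value in m.groupdict().items():
        new_name = new_name.replace(value, "{%s}" % group, 1)
    return new_name, m.groupdict()

name, attrs = apply_to_attributes("/api/v1/users/1234")
print(name, attrs)
```

High-cardinality span names like /api/v1/users/1234 collapse into a single /api/v1/{resource}/{id} operation, which keeps Kibana aggregations on operation_name useful.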
Create systemd services for OpenTelemetry Collector
Create /etc/systemd/system/otelcol.service so systemd manages the OpenTelemetry Collector as a system service.
[Unit]
Description=OpenTelemetry Collector
Documentation=https://opentelemetry.io/docs/collector/
After=network-online.target
Wants=network-online.target
[Service]
Type=exec
User=otelcol
Group=otelcol
ExecStart=/usr/local/bin/otelcol-contrib --config=/etc/otelcol/config.yaml
ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
KillSignal=SIGTERM
TimeoutStopSec=30
Restart=on-failure
RestartSec=5
NotifyAccess=none
LimitNOFILE=65536
StandardOutput=journal
StandardError=journal
SyslogIdentifier=otelcol
[Install]
WantedBy=multi-user.target
Start all services
Enable and start all components of the observability stack in the correct order.
sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch
sudo systemctl enable --now logstash
sudo systemctl enable --now kibana
sudo systemctl enable --now otelcol
sudo systemctl status elasticsearch logstash kibana otelcol
Configure firewall rules
Open necessary ports for OpenTelemetry data ingestion and dashboard access.
sudo ufw allow 5601/tcp comment "Kibana"
sudo ufw allow 4317/tcp comment "OpenTelemetry gRPC"
sudo ufw allow 4318/tcp comment "OpenTelemetry HTTP"
sudo ufw allow 14268/tcp comment "Jaeger HTTP"
sudo ufw allow 9411/tcp comment "Zipkin"
sudo ufw allow 8888/tcp comment "OTel metrics"
sudo ufw allow 8889/tcp comment "Prometheus export"
sudo ufw reload
Create Kibana index patterns and dashboards
Set up index patterns in Kibana to visualize OpenTelemetry data and create observability dashboards.
curl -X POST "localhost:5601/api/saved_objects/index-pattern/otel-spans" \
-H "Content-Type: application/json" \
-H "kbn-xsrf: true" \
-u "otel_writer:otel_password123" \
-d '{
"attributes": {
"title": "otel-spans-*",
"timeFieldName": "@timestamp"
}
}'
curl -X POST "localhost:5601/api/saved_objects/index-pattern/otel-metrics" \
-H "Content-Type: application/json" \
-H "kbn-xsrf: true" \
-u "otel_writer:otel_password123" \
-d '{
"attributes": {
"title": "otel-metrics-*",
"timeFieldName": "@timestamp"
}
}'
Configure application instrumentation
Example Node.js application instrumentation
Configure a sample Node.js application to send traces to your OpenTelemetry Collector.
{
"name": "otel-demo-app",
"version": "1.0.0",
"dependencies": {
"@opentelemetry/api": "^1.7.0",
"@opentelemetry/sdk-node": "^0.45.0",
"@opentelemetry/instrumentation": "^0.45.0",
"@opentelemetry/exporter-trace-otlp-http": "^0.45.0",
"@opentelemetry/resources": "^1.18.0",
"@opentelemetry/semantic-conventions": "^1.18.0",
"express": "^4.18.0"
}
}
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { Resource } = require('@opentelemetry/resources');
const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');
const express = require('express');
// Initialize OpenTelemetry SDK
const sdk = new NodeSDK({
resource: new Resource({
[SemanticResourceAttributes.SERVICE_NAME]: 'demo-service',
[SemanticResourceAttributes.SERVICE_VERSION]: '1.0.0',
[SemanticResourceAttributes.DEPLOYMENT_ENVIRONMENT]: 'production',
}),
traceExporter: new OTLPTraceExporter({
url: 'http://localhost:4318/v1/traces',
}),
});
sdk.start();
const app = express();
const port = 3000;
app.get('/', (req, res) => {
res.json({ message: 'Hello from instrumented app!', timestamp: new Date().toISOString() });
});
app.get('/api/users/:id', async (req, res) => {
const userId = req.params.id;
// Simulate some work
await new Promise(resolve => setTimeout(resolve, Math.random() * 100));
res.json({ userId, name: `User ${userId}`, timestamp: new Date().toISOString() });
});
app.listen(port, () => {
console.log(`Demo app listening at http://localhost:${port}`);
});
Example Python application instrumentation
Configure a sample Python Flask application with OpenTelemetry instrumentation.
flask==2.3.3
opentelemetry-distro==0.42b0
opentelemetry-exporter-otlp==1.21.0
opentelemetry-instrumentation-flask==0.42b0
opentelemetry-instrumentation-requests==0.42b0
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource
from opentelemetry.semconv.resource import ResourceAttributes
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from flask import Flask, jsonify
import time
import random
# Configure OpenTelemetry
resource = Resource.create({
ResourceAttributes.SERVICE_NAME: "python-demo-service",
ResourceAttributes.SERVICE_VERSION: "1.0.0",
ResourceAttributes.DEPLOYMENT_ENVIRONMENT: "production",
})
trace.set_tracer_provider(TracerProvider(resource=resource))
tracer = trace.get_tracer(__name__)
# Configure OTLP exporter
otlp_exporter = OTLPSpanExporter(
endpoint="http://localhost:4318/v1/traces",
headers={}
)
span_processor = BatchSpanProcessor(otlp_exporter)
trace.get_tracer_provider().add_span_processor(span_processor)
# Create Flask app
app = Flask(__name__)
FlaskInstrumentor().instrument_app(app)
@app.route('/')
def hello():
return jsonify({
"message": "Hello from Python instrumented app!",
"timestamp": time.time()
})
@app.route('/api/process/<task_id>')
def process_task(task_id):
with tracer.start_as_current_span("process_task") as span:
span.set_attribute("task.id", task_id)
span.set_attribute("task.type", "background_job")
# Simulate processing
processing_time = random.uniform(0.1, 0.5)
time.sleep(processing_time)
span.set_attribute("task.duration_seconds", processing_time)
span.set_attribute("task.status", "completed")
return jsonify({
"task_id": task_id,
"status": "completed",
"processing_time": processing_time
})
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000, debug=True)
Verify your setup
# Check all services are running
sudo systemctl status elasticsearch logstash kibana otelcol
Test OpenTelemetry Collector endpoints
curl -f http://localhost:13133/
curl -f http://localhost:8888/metrics
Check Elasticsearch indices
curl -u "otel_writer:otel_password123" "http://localhost:9200/_cat/indices/otel-*?v"
Test trace ingestion
curl -X POST http://localhost:4318/v1/traces \
-H "Content-Type: application/json" \
-d '{
"resourceSpans": [{
"resource": {
"attributes": [{
"key": "service.name",
"value": {"stringValue": "test-service"}
}]
},
"scopeSpans": [{
"spans": [{
"traceId": "5b8aa5a2d2c872e8321cf37308d69df2",
"spanId": "051581bf3cb55c13",
"name": "test-span",
"startTimeUnixNano": "1699000000000000000",
"endTimeUnixNano": "1699000001000000000",
"kind": 1
}]
}]
}]
}'
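For repeated smoke tests, the curl payload above can be generated with fresh IDs and current timestamps. This sketch builds the same OTLP/JSON shape (the `make_test_trace` helper name is ours) and only attempts the POST if the collector is reachable:

```python
import json
import os
import time
import urllib.request

def make_test_trace(service="test-service", name="test-span", duration_ms=250):
    """Build a minimal OTLP/JSON trace payload with valid random IDs."""
    end = time.time_ns()
    start = end - duration_ms * 1_000_000
    return {
        "resourceSpans": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service}}]},
            "scopeSpans": [{"spans": [{
                "traceId": os.urandom(16).hex(),  # 32 hex chars, as OTLP requires
                "spanId": os.urandom(8).hex(),    # 16 hex chars
                "name": name,
                "startTimeUnixNano": str(start),
                "endTimeUnixNano": str(end),
                "kind": 1,
            }]}],
        }]
    }

payload = make_test_trace()
span = payload["resourceSpans"][0]["scopeSpans"][0]["spans"][0]

if __name__ == "__main__":
    req = urllib.request.Request(
        "http://localhost:4318/v1/traces",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("collector answered:", resp.status)
    except OSError as exc:
        print("collector not reachable:", exc)
```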
Access Kibana
echo "Access Kibana at: http://203.0.113.10:5601"
echo "Username: otel_writer"
echo "Password: otel_password123"
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| OpenTelemetry Collector won't start | Configuration syntax error | sudo /usr/local/bin/otelcol-contrib validate --config=/etc/otelcol/config.yaml |
| No traces appearing in Kibana | Elasticsearch authentication failure | Check credentials in collector config and test: curl -u "otel_writer:otel_password123" "http://localhost:9200/_cluster/health" |
| Logstash processing errors | Ruby filter script issues | Check Logstash logs: sudo tail -f /var/log/logstash/logstash-plain.log |
| High memory usage | Insufficient memory limits | Adjust memory_limiter in collector config and Elasticsearch heap size |
| Connection refused errors | Firewall blocking ports | Verify firewall rules: sudo ufw status or sudo firewall-cmd --list-ports |
| Index template conflicts | Existing templates | Delete and recreate: curl -X DELETE -u "otel_writer:otel_password123" "http://localhost:9200/_index_template/otel-*" |
Next steps
- Configure OpenTelemetry sampling strategies for high-traffic applications
- Set up OpenTelemetry metrics collection with Prometheus integration for distributed system monitoring
- Set up Jaeger high availability clustering with load balancing and failover
- Configure Kibana 8 advanced security with field-level restrictions and role-based access control
- Implement custom OpenTelemetry metrics exporters for application-specific monitoring
Automated install script
Run this to automate the entire setup
#!/usr/bin/env bash
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Default configuration
ELASTIC_PASSWORD=""
KIBANA_PASSWORD=""
# Usage function
usage() {
echo "Usage: $0"
echo "Sets up OpenTelemetry integration with ELK stack for unified observability"
exit 1
}
# Cleanup on failure
cleanup() {
echo -e "${RED}Installation failed. Cleaning up...${NC}"
systemctl stop elasticsearch logstash kibana otel-collector 2>/dev/null || true
exit 1
}
trap cleanup ERR
# Check prerequisites
check_prerequisites() {
if [[ $EUID -ne 0 ]]; then
echo -e "${RED}This script must be run as root${NC}"
exit 1
fi
if ! command -v curl &> /dev/null; then
echo -e "${RED}curl is required but not installed${NC}"
exit 1
fi
}
# Detect distribution
detect_distro() {
if [ -f /etc/os-release ]; then
. /etc/os-release
case "$ID" in
ubuntu|debian) PKG_MGR="apt"; PKG_INSTALL="apt install -y"; PKG_UPDATE="apt update" ;;
almalinux|rocky|centos|rhel|ol|fedora) PKG_MGR="dnf"; PKG_INSTALL="dnf install -y"; PKG_UPDATE="dnf update -y" ;;
amzn) PKG_MGR="yum"; PKG_INSTALL="yum install -y"; PKG_UPDATE="yum update -y" ;;
*) echo -e "${RED}Unsupported distro: $ID${NC}"; exit 1 ;;
esac
else
echo -e "${RED}Cannot detect distribution${NC}"
exit 1
fi
}
# Install Elasticsearch
install_elasticsearch() {
echo -e "${GREEN}[1/5] Installing Elasticsearch...${NC}"
if [[ "$PKG_MGR" == "apt" ]]; then
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" > /etc/apt/sources.list.d/elastic-8.x.list
$PKG_UPDATE
$PKG_INSTALL elasticsearch
else
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
cat > /etc/yum.repos.d/elasticsearch.repo << 'EOF'
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
EOF
$PKG_INSTALL --enablerepo=elasticsearch elasticsearch
fi
# Configure Elasticsearch
cat > /etc/elasticsearch/elasticsearch.yml << 'EOF'
cluster.name: otel-cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl.enabled: false
xpack.security.transport.ssl.enabled: false
EOF
chown root:elasticsearch /etc/elasticsearch/elasticsearch.yml
chmod 644 /etc/elasticsearch/elasticsearch.yml
systemctl enable elasticsearch
systemctl start elasticsearch
sleep 30
# Set elastic password
ELASTIC_PASSWORD=$(echo "y" | /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic -b 2>/dev/null | grep "New value:" | awk '{print $3}')
}
# Install Logstash
install_logstash() {
echo -e "${GREEN}[2/5] Installing Logstash...${NC}"
$PKG_INSTALL logstash
# Configure Logstash pipeline
cat > /etc/logstash/conf.d/otel-pipeline.conf << 'EOF'
input {
http {
host => "0.0.0.0"
port => 8080
codec => json
additional_codecs => {
"application/x-protobuf" => "plain"
}
}
beats {
port => 5044
}
}
filter {
if [resourceSpans] {
ruby {
code => '
spans = event.get("resourceSpans")
if spans
spans.each do |resource_span|
resource = (resource_span.dig("resource", "attributes") || []).each_with_object({}) { |a, h| h[a["key"]] = a["value"].is_a?(Hash) ? a["value"].values.first : a["value"] }
scope_spans = resource_span["scopeSpans"] || []
scope_spans.each do |scope_span|
spans_array = scope_span["spans"] || []
spans_array.each do |span|
new_event = LogStash::Event.new()
new_event.set("[@timestamp]", LogStash::Timestamp.at(span["startTimeUnixNano"].to_f / 1e9))
new_event.set("trace_id", span["traceId"])
new_event.set("span_id", span["spanId"])
new_event.set("operation_name", span["name"])
new_event.set("service_name", resource["service.name"])
new_event.tag("opentelemetry-span")
yield new_event
end
end
end
event.cancel
end
'
}
}
}
output {
if "opentelemetry-span" in [tags] {
elasticsearch {
hosts => ["localhost:9200"]
index => "otel-traces-%{+YYYY.MM.dd}"
user => "elastic"
password => "ELASTIC_PASSWORD_PLACEHOLDER"
}
} else {
elasticsearch {
hosts => ["localhost:9200"]
index => "otel-metrics-%{+YYYY.MM.dd}"
user => "elastic"
password => "ELASTIC_PASSWORD_PLACEHOLDER"
}
}
}
EOF
sed -i "s/ELASTIC_PASSWORD_PLACEHOLDER/$ELASTIC_PASSWORD/g" /etc/logstash/conf.d/otel-pipeline.conf
chown root:logstash /etc/logstash/conf.d/otel-pipeline.conf
chmod 644 /etc/logstash/conf.d/otel-pipeline.conf
systemctl enable logstash
systemctl start logstash
}
# Install Kibana
install_kibana() {
echo -e "${GREEN}[3/5] Installing Kibana...${NC}"
$PKG_INSTALL kibana
# Create kibana_system user password
KIBANA_PASSWORD=$(openssl rand -base64 32)
curl -u elastic:$ELASTIC_PASSWORD -X POST "localhost:9200/_security/user/kibana_system/_password" \
-H "Content-Type: application/json" \
-d "{\"password\":\"$KIBANA_PASSWORD\"}"
# Configure Kibana
cat > /etc/kibana/kibana.yml << EOF
server.port: 5601
server.host: "0.0.0.0"
server.name: "otel-kibana"
elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "$KIBANA_PASSWORD"
xpack.security.encryptionKey: "$(openssl rand -base64 32)"
xpack.encryptedSavedObjects.encryptionKey: "$(openssl rand -base64 32)"
xpack.reporting.encryptionKey: "$(openssl rand -base64 32)"
EOF
chown root:kibana /etc/kibana/kibana.yml
chmod 644 /etc/kibana/kibana.yml
systemctl enable kibana
systemctl start kibana
}
# Install OpenTelemetry Collector
install_otel_collector() {
echo -e "${GREEN}[4/5] Installing OpenTelemetry Collector...${NC}"
OTEL_VERSION="0.90.1"
# The contrib distribution is required: the elasticsearch exporter and
# jaeger receiver used below are not included in the core otelcol build.
if [[ "$PKG_MGR" == "apt" ]]; then
wget -O /tmp/otel-collector.deb "https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v${OTEL_VERSION}/otelcol-contrib_${OTEL_VERSION}_linux_amd64.deb"
dpkg -i /tmp/otel-collector.deb
else
wget -O /tmp/otel-collector.rpm "https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v${OTEL_VERSION}/otelcol-contrib_${OTEL_VERSION}_linux_amd64.rpm"
rpm -i /tmp/otel-collector.rpm
fi
# Create config directory
mkdir -p /etc/otelcol
# Configure OpenTelemetry Collector
cat > /etc/otelcol/config.yaml << 'EOF'
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
jaeger:
protocols:
grpc:
endpoint: 0.0.0.0:14250
thrift_http:
endpoint: 0.0.0.0:14268
thrift_compact:
endpoint: 0.0.0.0:6831
processors:
batch:
timeout: 1s
send_batch_size: 1024
memory_limiter:
check_interval: 5s
limit_mib: 512
exporters:
logging:
loglevel: debug
elasticsearch:
endpoints: ["http://localhost:9200"]
user: "elastic"
password: "ELASTIC_PASSWORD_PLACEHOLDER"
traces_index: "otel-traces"
metrics_index: "otel-metrics"
service:
pipelines:
traces:
receivers: [otlp, jaeger]
processors: [memory_limiter, batch]
exporters: [elasticsearch, logging]
metrics:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [elasticsearch, logging]
EOF
sed -i "s/ELASTIC_PASSWORD_PLACEHOLDER/$ELASTIC_PASSWORD/g" /etc/otelcol/config.yaml
chown root:root /etc/otelcol/config.yaml
chmod 644 /etc/otelcol/config.yaml
# Create systemd service
cat > /etc/systemd/system/otel-collector.service << 'EOF'
[Unit]
Description=OpenTelemetry Collector
After=network.target
[Service]
Type=simple
User=otel
Group=otel
ExecStart=/usr/bin/otelcol-contrib --config=/etc/otelcol/config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
useradd -r -s /bin/false otel 2>/dev/null || true
chown otel:otel /etc/otelcol/config.yaml
systemctl daemon-reload
systemctl enable otel-collector
systemctl start otel-collector
}
# Configure firewall
configure_firewall() {
echo -e "${GREEN}[5/5] Configuring firewall...${NC}"
if command -v ufw &> /dev/null; then
ufw allow 9200/tcp
ufw allow 5601/tcp
ufw allow 4317/tcp
ufw allow 4318/tcp
ufw allow 14250/tcp
ufw allow 14268/tcp
ufw --force enable
elif command -v firewall-cmd &> /dev/null; then
firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --permanent --add-port=5601/tcp
firewall-cmd --permanent --add-port=4317/tcp
firewall-cmd --permanent --add-port=4318/tcp
firewall-cmd --permanent --add-port=14250/tcp
firewall-cmd --permanent --add-port=14268/tcp
firewall-cmd --reload
fi
}
# Verify installation
verify_installation() {
echo -e "${GREEN}Verifying installation...${NC}"
sleep 30
if curl -s http://localhost:9200 > /dev/null; then
echo -e "${GREEN}✓ Elasticsearch is running${NC}"
else
echo -e "${RED}✗ Elasticsearch is not responding${NC}"
fi
if curl -s http://localhost:5601 > /dev/null; then
echo -e "${GREEN}✓ Kibana is running${NC}"
else
echo -e "${RED}✗ Kibana is not responding${NC}"
fi
if systemctl is-active --quiet otel-collector; then
echo -e "${GREEN}✓ OpenTelemetry Collector is running${NC}"
else
echo -e "${RED}✗ OpenTelemetry Collector is not running${NC}"
fi
}
# Main execution
main() {
check_prerequisites
detect_distro
install_elasticsearch
install_logstash
install_kibana
install_otel_collector
configure_firewall
verify_installation
echo -e "${GREEN}Installation completed successfully!${NC}"
echo -e "${YELLOW}Elasticsearch user: elastic${NC}"
echo -e "${YELLOW}Elasticsearch password: $ELASTIC_PASSWORD${NC}"
echo -e "${YELLOW}Kibana URL: http://localhost:5601${NC}"
echo -e "${YELLOW}OTLP gRPC endpoint: localhost:4317${NC}"
echo -e "${YELLOW}OTLP HTTP endpoint: localhost:4318${NC}"
}
main "$@"
Review the script before running. It must run as root; execute with: sudo bash install.sh