Set up Grafana Loki with S3-compatible object storage for scalable log aggregation. Configure retention policies, schema management, and monitoring for production-ready centralized logging infrastructure.
Prerequisites
- Linux server with at least 4GB RAM and 20GB free disk space
- S3-compatible storage bucket (AWS S3, MinIO, or similar)
- AWS CLI installed and configured
- Basic familiarity with YAML configuration
What this solves
Grafana Loki with S3 storage provides scalable centralized logging without the complexity of traditional log aggregation systems. This setup separates compute from storage, allowing you to scale ingestion independently while storing logs cost-effectively in object storage like AWS S3 or MinIO.
Step-by-step installation
Update system packages
Start by updating your system packages and installing the tools needed for the download steps.
sudo apt update && sudo apt upgrade -y
sudo apt install -y wget curl unzip
Create Loki user and directories
Create a dedicated system user for Loki and set up the necessary directory structure with correct permissions.
sudo useradd --system --shell /bin/false --home-dir /var/lib/loki loki
sudo mkdir -p /etc/loki /var/lib/loki /var/log/loki
sudo chown -R loki:loki /var/lib/loki /var/log/loki
sudo chmod 755 /etc/loki /var/lib/loki /var/log/loki
Download and install Loki
Download the Loki v2.9.3 binary and install it to the system path.
cd /tmp
wget https://github.com/grafana/loki/releases/download/v2.9.3/loki-linux-amd64.zip
unzip loki-linux-amd64.zip
sudo mv loki-linux-amd64 /usr/local/bin/loki
sudo chmod +x /usr/local/bin/loki
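Before installing, it is worth verifying the archive against a published checksum. This is a sketch of the `sha256sum -c` workflow: Grafana publishes checksum files alongside its release assets, and the stand-in file below only exists so the commands are self-contained.

```shell
# Sketch: verify a download against a published checksum before installing.
# On the server you would fetch the real checksum file from the release page
# instead of generating it locally as done here.
cd "$(mktemp -d)"
printf 'pretend-zip-contents' > loki-linux-amd64.zip   # stand-in for the real download
sha256sum loki-linux-amd64.zip > SHA256SUMS            # normally downloaded, not generated
if sha256sum -c SHA256SUMS >/dev/null 2>&1; then
  echo "checksum OK"
else
  echo "checksum mismatch - do not install" >&2
fi
```

If the check fails, discard the archive and download it again rather than installing.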
Configure S3 storage credentials
Create an AWS credentials file for S3 access at /home/loki/.aws/credentials (the systemd unit created later sets HOME=/home/loki so the AWS SDK finds it). Replace the values with your actual S3 credentials.
sudo mkdir -p /home/loki/.aws
sudo tee /home/loki/.aws/credentials > /dev/null << 'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1
EOF
sudo chown -R loki:loki /home/loki
sudo chmod 700 /home/loki/.aws
sudo chmod 600 /home/loki/.aws/credentials
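A quick way to confirm the permissions actually took effect is `stat -c '%a'`, which prints the octal mode. This sketch recreates the layout in a temp directory so it is self-contained; on the server you would point `stat` at /home/loki/.aws directly.

```shell
# Audit credential file permissions with stat (octal mode via %a).
AWS_DIR="$(mktemp -d)/.aws"          # stand-in for /home/loki/.aws
mkdir -p "$AWS_DIR"
touch "$AWS_DIR/credentials"
chmod 700 "$AWS_DIR"
chmod 600 "$AWS_DIR/credentials"
DIR_MODE="$(stat -c '%a' "$AWS_DIR")"
FILE_MODE="$(stat -c '%a' "$AWS_DIR/credentials")"
echo "dir=$DIR_MODE file=$FILE_MODE"   # dir=700 file=600
```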
Create Loki configuration with S3 backend
Create /etc/loki/config.yml configuring the S3 storage backend, schema, and retention policies.
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: info

ingester:
  wal:
    enabled: true
    dir: /var/lib/loki/wal
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
  max_transfer_retries: 0
  chunk_encoding: snappy

schema_config:
  configs:
    - from: 2023-01-01
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: loki_index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /var/lib/loki/index
    shared_store: s3
    cache_location: /var/lib/loki/boltdb-cache
  aws:
    s3: s3://your-loki-bucket
    region: us-east-1
    s3forcepathstyle: false

compactor:
  working_directory: /var/lib/loki/compactor
  shared_store: s3
  compaction_interval: 5m

limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_rate_mb: 10
  ingestion_burst_size_mb: 20
  per_stream_rate_limit: 5MB
  per_stream_rate_limit_burst: 20MB

table_manager:
  retention_deletes_enabled: true
  retention_period: 2160h  # 90 days

chunk_store_config:
  max_look_back_period: 0s

query_range:
  align_queries_with_step: true
  max_retries: 5
  split_queries_by_interval: 15m
  cache_results: true
  results_cache:
    cache:
      enable_fifocache: true
      fifocache:
        max_size_items: 1024
        validity: 24h

ruler:
  storage:
    type: local
    local:
      directory: /var/lib/loki/rules
  rule_path: /var/lib/loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true
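If you point Loki at MinIO rather than AWS S3, the `aws` block usually needs an explicit endpoint and path-style addressing. A hedged variant of the storage section (the endpoint hostname and bucket name are placeholders; substitute your own):

```yaml
storage_config:
  boltdb_shipper:
    active_index_directory: /var/lib/loki/index
    shared_store: s3
    cache_location: /var/lib/loki/boltdb-cache
  aws:
    endpoint: http://minio.example.internal:9000   # placeholder MinIO endpoint
    bucketnames: your-loki-bucket
    access_key_id: YOUR_ACCESS_KEY_ID
    secret_access_key: YOUR_SECRET_ACCESS_KEY
    s3forcepathstyle: true                         # MinIO requires path-style requests
```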
Set configuration file permissions
Ensure the Loki configuration file has correct ownership and permissions for security.
sudo chown loki:loki /etc/loki/config.yml
sudo chmod 644 /etc/loki/config.yml
Create S3 bucket for Loki storage
Create the S3 bucket referenced in your configuration, replacing the bucket name and region with your own. Versioning is optional: with versioning enabled, chunks deleted by retention are kept as noncurrent versions (and keep incurring storage costs) until a lifecycle rule expires them.
aws s3 mb s3://your-loki-bucket --region us-east-1
aws s3api put-bucket-versioning --bucket your-loki-bucket --versioning-configuration Status=Enabled
Create systemd service file
Create /etc/systemd/system/loki.service to manage Loki as a system service with proper resource limits and restart policies.
[Unit]
Description=Loki log aggregation system
After=network.target

[Service]
Type=simple
User=loki
Group=loki
ExecStart=/usr/local/bin/loki -config.file=/etc/loki/config.yml
Restart=on-failure
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=loki
KillMode=mixed
KillSignal=SIGTERM

# Resource limits
LimitNOFILE=65536
LimitNPROC=4096

# Security settings
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
# read-only rather than "true" so Loki can still read /home/loki/.aws/credentials
ProtectHome=read-only
ReadWritePaths=/var/lib/loki /var/log/loki

# Environment
Environment=HOME=/home/loki

[Install]
WantedBy=multi-user.target
Enable and start Loki service
Reload systemd configuration and start Loki service with automatic startup on boot.
sudo systemctl daemon-reload
sudo systemctl enable loki
sudo systemctl start loki
sudo systemctl status loki
Configure log rotation
Set up log rotation to prevent Loki logs from consuming too much disk space. Create /etc/logrotate.d/loki with the following contents.
/var/log/loki/*.log {
daily
missingok
rotate 30
compress
delaycompress
notifempty
create 644 loki loki
postrotate
systemctl reload loki > /dev/null 2>&1 || true
endscript
}
Configure firewall rules
Open the necessary ports for Loki HTTP API and gRPC communication.
sudo ufw allow 3100/tcp comment 'Loki HTTP API'
sudo ufw allow 9096/tcp comment 'Loki gRPC'
sudo ufw reload
Configure retention and monitoring
Set up retention policies
Configure automatic cleanup of old log data to manage storage costs and compliance requirements.
# Per-stream retention in config.yml; requires the compactor to run with
# retention enabled (compactor.retention_enabled: true)
limits_config:
  retention_period: 2160h  # 90 days default
  retention_stream:
    - selector: '{job="nginx"}'
      priority: 1
      period: 720h   # 30 days for nginx logs
    - selector: '{job="application", level="error"}'
      priority: 2
      period: 4320h  # 180 days for error logs
    - selector: '{job="security"}'
      priority: 3
      period: 8760h  # 1 year for security logs
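The hour values above follow directly from days × 24 (Loki durations such as 720h are plain hours). A tiny helper makes the conversion explicit when tuning these numbers:

```shell
# Convert a retention period in days to the hours string Loki durations use.
days_to_loki_hours() {
  echo "$(( $1 * 24 ))h"
}
days_to_loki_hours 30    # -> 720h
days_to_loki_hours 90    # -> 2160h
days_to_loki_hours 365   # -> 8760h
```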
Create monitoring script
Create a monitoring script at /usr/local/bin/loki-health-check.sh to check Loki health and S3 connectivity. Update the bucket name to match yours.
#!/bin/bash
# Loki health check script
LOKI_URL="http://localhost:3100"
LOG_FILE="/var/log/loki/health-check.log"

# Check Loki API endpoint
if curl -s "$LOKI_URL/ready" > /dev/null; then
  echo "$(date): Loki API healthy" >> "$LOG_FILE"
else
  echo "$(date): Loki API unhealthy" >> "$LOG_FILE"
  exit 1
fi

# Check S3 connectivity
if aws s3 ls s3://your-loki-bucket > /dev/null 2>&1; then
  echo "$(date): S3 connectivity OK" >> "$LOG_FILE"
else
  echo "$(date): S3 connectivity failed" >> "$LOG_FILE"
  exit 1
fi

# Check disk space
DISK_USAGE=$(df /var/lib/loki | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "$DISK_USAGE" -gt 85 ]; then
  echo "$(date): WARNING: Disk usage at ${DISK_USAGE}%" >> "$LOG_FILE"
fi
echo "$(date): Health check completed successfully" >> "$LOG_FILE"
sudo chmod +x /usr/local/bin/loki-health-check.sh
sudo chown loki:loki /usr/local/bin/loki-health-check.sh
Schedule health checks
Add a cron job to run health checks every 5 minutes and alert on failures.
sudo crontab -u loki -e
Add these lines to the crontab (the HOME line points the AWS CLI at the credentials file created earlier, since cron would otherwise use the loki user's /var/lib/loki home directory):
HOME=/home/loki
*/5 * * * * /usr/local/bin/loki-health-check.sh
Configure log shipping with Promtail
Install Promtail
Download and install Promtail to ship logs from your servers to Loki.
cd /tmp
wget https://github.com/grafana/loki/releases/download/v2.9.3/promtail-linux-amd64.zip
unzip promtail-linux-amd64.zip
sudo mv promtail-linux-amd64 /usr/local/bin/promtail
sudo chmod +x /usr/local/bin/promtail
Configure Promtail
Create /etc/loki/promtail.yml to collect and forward logs to Loki.
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /var/lib/loki/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*.log
  - job_name: nginx
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx
          __path__: /var/log/nginx/*.log
  - job_name: systemd
    journal:
      max_age: 12h
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
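Promtail can also parse fields out of log lines before shipping them. A sketch of a `pipeline_stages` addition to the nginx job; the regex is illustrative and assumes the default combined log format, so adapt it to what your server actually writes, and be careful to promote only low-cardinality captures to labels:

```yaml
scrape_configs:
  - job_name: nginx
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx
          __path__: /var/log/nginx/*.log
    pipeline_stages:
      - regex:
          expression: '^(?P<remote_addr>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)'
      - labels:
          method:    # promote only the HTTP method; paths are too high-cardinality
```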
Create Promtail systemd service
Set up Promtail as a systemd service for automatic log shipping. Create /etc/systemd/system/promtail.service with the following contents.
[Unit]
Description=Promtail service
After=network.target
[Service]
Type=simple
User=loki
Group=loki
ExecStart=/usr/local/bin/promtail -config.file=/etc/loki/promtail.yml
Restart=on-failure
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=promtail
[Install]
WantedBy=multi-user.target
sudo chown loki:loki /etc/loki/promtail.yml
sudo systemctl daemon-reload
sudo systemctl enable promtail
sudo systemctl start promtail
sudo systemctl status promtail
Verify your setup
Test Loki API connectivity and verify logs are being stored in S3.
# Check Loki service status
sudo systemctl status loki

# Test Loki API endpoints
curl -s http://localhost:3100/ready
curl -s http://localhost:3100/metrics

# Check if logs are being ingested
curl -G -s "http://localhost:3100/loki/api/v1/query" --data-urlencode 'query={job="varlogs"}' | jq

# Verify S3 bucket contents
aws s3 ls s3://your-loki-bucket/loki/ --recursive

# Check Promtail is shipping logs
sudo systemctl status promtail
curl -s http://localhost:9080/metrics
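If you script these checks, poll /ready instead of assuming Loki is up immediately, since it can take a few seconds after start. A sketch with bounded retries (the defaults of 30 tries and a 1-second delay are arbitrary):

```shell
# Poll a Loki /ready endpoint until it answers, with bounded retries.
# usage: wait_for_ready URL [MAX_TRIES] [SLEEP_SECS]
wait_for_ready() {
  local url="$1" max="${2:-30}" delay="${3:-1}" tries=0
  until curl -fsS "$url/ready" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max" ]; then
      return 1
    fi
    sleep "$delay"
  done
  return 0
}
```

Call it as `wait_for_ready http://localhost:3100 && curl -s http://localhost:3100/metrics`, for example.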
The query example above requires jq: install it with sudo apt install jq (Ubuntu/Debian) or sudo dnf install jq (AlmaLinux/Rocky/CentOS).
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| Loki won't start - permission denied | Incorrect file ownership or permissions | sudo chown -R loki:loki /var/lib/loki /etc/loki |
| S3 access denied errors | Invalid AWS credentials or bucket permissions | Verify credentials in /home/loki/.aws/credentials and S3 bucket policy |
| High memory usage | Ingestion rate too high or retention too long | Adjust ingestion_rate_mb and retention_period in config |
| Query timeouts | Large time range queries or missing indexes | Use smaller time ranges and ensure proper schema configuration |
| Promtail not shipping logs | File permissions or path configuration | Check __path__ patterns and ensure loki user can read log files |
| Compactor failures | S3 connectivity or insufficient permissions | Check S3 bucket policy allows list/delete operations |
Performance optimization
Tune Loki for high-volume logging
Optimize configuration for production workloads with high log volume.
# Add to your existing config.yml
ingester:
  chunk_target_size: 1572864  # 1.5MB
  chunk_encoding: snappy
  max_chunk_age: 2h

limits_config:
  ingestion_rate_mb: 50          # Increase for high volume
  ingestion_burst_size_mb: 100   # Allow traffic spikes
  max_streams_per_user: 0        # Unlimited streams
  max_line_size: 256KB           # Reject oversized log lines

query_scheduler:
  max_outstanding_requests_per_tenant: 256

frontend:
  max_outstanding_per_tenant: 256
  compress_responses: true
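To sanity-check the ingestion limit against your workload, a rough lines-per-second estimate helps; the 512-byte average line size below is an assumption, so measure your own:

```shell
# Back-of-envelope: log lines/sec that fit inside ingestion_rate_mb.
RATE_MB=50           # matches ingestion_rate_mb above
AVG_LINE_BYTES=512   # assumed average log line size
LINES_PER_SEC=$(( RATE_MB * 1024 * 1024 / AVG_LINE_BYTES ))
echo "${LINES_PER_SEC} lines/sec"   # 102400 lines/sec
```

If your producers emit more than this, raise the limit or expect rate-limit errors on the push path.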
Configure S3 lifecycle policies
Set up S3 lifecycle rules to automatically transition older logs to cheaper storage tiers. Save the policy below as s3-lifecycle-policy.json; the 2555-day Expiration is roughly seven years. Note that objects transitioned to GLACIER or DEEP_ARCHIVE must be restored before Loki can query them, so only transition data you rarely need.
{
  "Rules": [
    {
      "ID": "LokiLogLifecycle",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "loki/"
      },
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        },
        {
          "Days": 90,
          "StorageClass": "GLACIER"
        },
        {
          "Days": 365,
          "StorageClass": "DEEP_ARCHIVE"
        }
      ],
      "Expiration": {
        "Days": 2555
      }
    }
  ]
}
aws s3api put-bucket-lifecycle-configuration --bucket your-loki-bucket --lifecycle-configuration file://s3-lifecycle-policy.json
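Because S3 rejects malformed policies with a terse error (and JSON allows no comments), it is worth validating the file locally before uploading. Any strict JSON parser works; `python3 -m json.tool` is a convenient stdlib one. The abbreviated policy below is a stand-in so the commands are self-contained:

```shell
# Write and validate a policy before calling put-bucket-lifecycle-configuration.
cat > /tmp/s3-lifecycle-policy.json << 'EOF'
{"Rules": [{"ID": "LokiLogLifecycle", "Status": "Enabled",
            "Filter": {"Prefix": "loki/"},
            "Expiration": {"Days": 2555}}]}
EOF
python3 -m json.tool /tmp/s3-lifecycle-policy.json > /dev/null && echo "valid JSON"
```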
Next steps
- Set up Grafana dashboards to visualize your Loki logs
- Configure alerting with Prometheus Alertmanager for log-based alerts
- Parse and analyze NGINX logs with custom Loki queries
- Configure multi-tenant Loki for organization-wide logging
- Set up Loki clustering for high availability deployments
Automated install script
The script below automates the entire setup. Review it before running.
#!/usr/bin/env bash
set -Eeuo pipefail  # -E so the ERR trap below also fires inside functions
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Global variables
LOKI_VERSION="2.9.3"
LOKI_USER="loki"
LOKI_HOME="/var/lib/loki"
LOKI_CONFIG="/etc/loki"
LOKI_LOG="/var/log/loki"
# Usage function
usage() {
echo "Usage: $0 <s3_bucket> <aws_region> [aws_access_key_id] [aws_secret_access_key]"
echo "Example: $0 my-loki-bucket us-east-1 AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
echo "Note: If AWS credentials are not provided, you'll be prompted to enter them"
exit 1
}
# Error handling
cleanup() {
echo -e "${RED}[ERROR] Installation failed. Cleaning up...${NC}"
systemctl stop loki 2>/dev/null || true
systemctl disable loki 2>/dev/null || true
rm -f /etc/systemd/system/loki.service
rm -f /usr/local/bin/loki
userdel -r loki 2>/dev/null || true
echo -e "${YELLOW}[INFO] Cleanup completed${NC}"
}
trap cleanup ERR
# Check if running as root
check_root() {
if [[ $EUID -ne 0 ]]; then
echo -e "${RED}[ERROR] This script must be run as root${NC}"
exit 1
fi
}
# Detect distribution
detect_distro() {
if [ -f /etc/os-release ]; then
. /etc/os-release
case "$ID" in
ubuntu|debian)
PKG_MGR="apt"
PKG_UPDATE="apt update && apt upgrade -y"
PKG_INSTALL="apt install -y"
;;
almalinux|rocky|centos|rhel|ol|fedora)
PKG_MGR="dnf"
PKG_UPDATE="dnf update -y"
PKG_INSTALL="dnf install -y"
;;
amzn)
PKG_MGR="yum"
PKG_UPDATE="yum update -y"
PKG_INSTALL="yum install -y"
;;
*)
echo -e "${RED}[ERROR] Unsupported distro: $ID${NC}"
exit 1
;;
esac
else
echo -e "${RED}[ERROR] Cannot detect distribution${NC}"
exit 1
fi
}
# Validate arguments
validate_args() {
if [[ $# -lt 2 ]]; then
usage
fi
S3_BUCKET="$1"
AWS_REGION="$2"
if [[ $# -ge 4 ]]; then
AWS_ACCESS_KEY="$3"
AWS_SECRET_KEY="$4"
else
echo -e "${YELLOW}[INPUT] Enter AWS Access Key ID:${NC}"
read -r AWS_ACCESS_KEY
echo -e "${YELLOW}[INPUT] Enter AWS Secret Access Key:${NC}"
read -rs AWS_SECRET_KEY
fi
}
# Update system packages
update_system() {
echo -e "${GREEN}[1/8] Updating system packages...${NC}"
eval "$PKG_UPDATE"  # eval so the '&&' inside the apt update string is interpreted
$PKG_INSTALL wget curl unzip
}
# Create Loki user and directories
create_user_dirs() {
echo -e "${GREEN}[2/8] Creating Loki user and directories...${NC}"
if ! id "$LOKI_USER" &>/dev/null; then
useradd --system --shell /bin/false --home-dir "$LOKI_HOME" --create-home "$LOKI_USER"
fi
mkdir -p "$LOKI_CONFIG" "$LOKI_HOME"/{wal,index,boltdb-cache,compactor,rules,rules-temp} "$LOKI_LOG"
chown -R "$LOKI_USER:$LOKI_USER" "$LOKI_HOME" "$LOKI_LOG"
chmod 755 "$LOKI_CONFIG" "$LOKI_HOME" "$LOKI_LOG"
}
# Download and install Loki
install_loki() {
echo -e "${GREEN}[3/8] Downloading and installing Loki v${LOKI_VERSION}...${NC}"
cd /tmp
wget -q "https://github.com/grafana/loki/releases/download/v${LOKI_VERSION}/loki-linux-amd64.zip"
unzip -q loki-linux-amd64.zip
mv loki-linux-amd64 /usr/local/bin/loki
chmod 755 /usr/local/bin/loki
rm -f loki-linux-amd64.zip
}
# Configure AWS credentials
configure_aws_credentials() {
echo -e "${GREEN}[4/8] Configuring AWS credentials...${NC}"
mkdir -p "$LOKI_HOME/.aws"
cat > "$LOKI_HOME/.aws/credentials" << EOF
[default]
aws_access_key_id = $AWS_ACCESS_KEY
aws_secret_access_key = $AWS_SECRET_KEY
region = $AWS_REGION
EOF
chown -R "$LOKI_USER:$LOKI_USER" "$LOKI_HOME/.aws"
chmod 700 "$LOKI_HOME/.aws"
chmod 600 "$LOKI_HOME/.aws/credentials"
}
# Create Loki configuration
create_config() {
echo -e "${GREEN}[5/8] Creating Loki configuration...${NC}"
cat > "$LOKI_CONFIG/config.yml" << EOF
auth_enabled: false
server:
http_listen_port: 3100
grpc_listen_port: 9096
log_level: info
ingester:
wal:
enabled: true
dir: $LOKI_HOME/wal
lifecycler:
address: 127.0.0.1
ring:
kvstore:
store: inmemory
replication_factor: 1
final_sleep: 0s
chunk_idle_period: 5m
chunk_retain_period: 30s
max_transfer_retries: 0
chunk_encoding: snappy
schema_config:
configs:
- from: 2023-01-01
store: boltdb-shipper
object_store: s3
schema: v11
index:
prefix: loki_index_
period: 24h
storage_config:
boltdb_shipper:
active_index_directory: $LOKI_HOME/index
shared_store: s3
cache_location: $LOKI_HOME/boltdb-cache
aws:
s3: s3://$S3_BUCKET
region: $AWS_REGION
s3forcepathstyle: false
compactor:
working_directory: $LOKI_HOME/compactor
shared_store: s3
compaction_interval: 5m
limits_config:
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
ingestion_rate_mb: 10
ingestion_burst_size_mb: 20
per_stream_rate_limit: 5MB
per_stream_rate_limit_burst: 20MB
table_manager:
retention_deletes_enabled: true
retention_period: 2160h
chunk_store_config:
max_look_back_period: 0s
query_range:
align_queries_with_step: true
max_retries: 5
split_queries_by_interval: 15m
cache_results: true
ruler:
storage:
type: local
local:
directory: $LOKI_HOME/rules
rule_path: $LOKI_HOME/rules-temp
ring:
kvstore:
store: inmemory
enable_api: true
EOF
chown "$LOKI_USER:$LOKI_USER" "$LOKI_CONFIG/config.yml"
chmod 644 "$LOKI_CONFIG/config.yml"
}
# Create systemd service
create_service() {
echo -e "${GREEN}[6/8] Creating systemd service...${NC}"
cat > /etc/systemd/system/loki.service << EOF
[Unit]
Description=Loki Log Aggregation System
Documentation=https://grafana.com/oss/loki/
After=network.target
[Service]
Type=simple
User=$LOKI_USER
Group=$LOKI_USER
ExecStart=/usr/local/bin/loki -config.file=$LOKI_CONFIG/config.yml
WorkingDirectory=$LOKI_HOME
Restart=always
RestartSec=10
LimitNOFILE=65536
Environment=HOME=$LOKI_HOME
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
}
# Configure firewall
configure_firewall() {
echo -e "${GREEN}[7/8] Configuring firewall...${NC}"
case "$ID" in
ubuntu|debian)
if command -v ufw >/dev/null 2>&1; then
ufw allow 3100/tcp comment "Loki HTTP"
ufw allow 9096/tcp comment "Loki gRPC"
fi
;;
*)
if command -v firewall-cmd >/dev/null 2>&1; then
firewall-cmd --permanent --add-port=3100/tcp
firewall-cmd --permanent --add-port=9096/tcp
firewall-cmd --reload
fi
;;
esac
}
# Start and verify Loki
start_and_verify() {
echo -e "${GREEN}[8/8] Starting and verifying Loki...${NC}"
systemctl enable loki
systemctl start loki
# Wait for Loki to start
sleep 5
# Verify service is running
if systemctl is-active --quiet loki; then
echo -e "${GREEN}[SUCCESS] Loki service is running${NC}"
else
echo -e "${RED}[ERROR] Loki service failed to start${NC}"
journalctl -u loki --no-pager -n 20
exit 1
fi
# Test HTTP endpoint
if curl -s http://localhost:3100/ready >/dev/null 2>&1; then
echo -e "${GREEN}[SUCCESS] Loki HTTP endpoint is responding${NC}"
else
echo -e "${YELLOW}[WARNING] Loki HTTP endpoint not yet ready${NC}"
fi
}
# Main function
main() {
check_root
detect_distro
validate_args "$@"
echo -e "${GREEN}Installing Loki with S3 backend on $PRETTY_NAME${NC}"
echo -e "${YELLOW}S3 Bucket: $S3_BUCKET${NC}"
echo -e "${YELLOW}AWS Region: $AWS_REGION${NC}"
update_system
create_user_dirs
install_loki
configure_aws_credentials
create_config
create_service
configure_firewall
start_and_verify
echo -e "${GREEN}[COMPLETE] Loki installation completed successfully!${NC}"
echo -e "${YELLOW}[INFO] Access Loki at: http://$(hostname -I | awk '{print $1}'):3100${NC}"
echo -e "${YELLOW}[INFO] Check status: systemctl status loki${NC}"
echo -e "${YELLOW}[INFO] View logs: journalctl -u loki -f${NC}"
trap - ERR
}
main "$@"
Review the script before running. Execute it as root with your bucket and region, for example: sudo bash install.sh your-loki-bucket us-east-1