Configure Loki with S3 storage backend for scalable centralized logging

Intermediate · 45 min · Apr 19, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Set up Grafana Loki with S3-compatible object storage for scalable log aggregation. Configure retention policies, schema management, and monitoring for production-ready centralized logging infrastructure.

What this solves

Grafana Loki with S3 storage provides scalable centralized logging without the complexity of traditional log aggregation systems. This setup separates compute from storage, allowing you to scale ingestion independently while storing logs cost-effectively in object storage like AWS S3 or MinIO.

Prerequisites

  • Linux server with 4GB RAM and 20GB disk space
  • S3-compatible storage (AWS S3, MinIO, or similar)
  • AWS CLI installed and configured
  • Basic understanding of YAML configuration

Step-by-step installation

Update system packages

Start by updating your package index and installed packages, then install the tools used below.

# Ubuntu / Debian
sudo apt update && sudo apt upgrade -y
sudo apt install -y wget curl unzip

# AlmaLinux / Rocky Linux
sudo dnf update -y
sudo dnf install -y wget curl unzip

Create Loki user and directories

Create a dedicated system user for Loki and set up the necessary directory structure with correct permissions.

sudo useradd --system --shell /bin/false --home-dir /var/lib/loki loki
sudo mkdir -p /etc/loki /var/lib/loki /var/log/loki
sudo chown -R loki:loki /var/lib/loki /var/log/loki
sudo chmod 755 /etc/loki /var/lib/loki /var/log/loki

Download and install Loki

Download the latest Loki binary and install it to the system path.

cd /tmp
wget https://github.com/grafana/loki/releases/download/v2.9.3/loki-linux-amd64.zip
unzip loki-linux-amd64.zip
sudo mv loki-linux-amd64 /usr/local/bin/loki
sudo chmod +x /usr/local/bin/loki

Configure S3 storage credentials

Create the AWS credentials file for S3 access. Note that the loki user's home directory was set to /var/lib/loki above, so create /home/loki/.aws explicitly; the systemd unit later sets HOME=/home/loki so the AWS SDK can find it. Replace the values with your actual S3 credentials.

sudo mkdir -p /home/loki/.aws

Create /home/loki/.aws/credentials with the following contents:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1

Then lock down ownership and permissions:

sudo chown -R loki:loki /home/loki/.aws
sudo chmod 700 /home/loki/.aws
sudo chmod 600 /home/loki/.aws/credentials
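Before starting Loki, it can be worth a quick sanity check that the credentials resolve for the loki user. This assumes the AWS CLI is installed; sts get-caller-identity is a read-only call that just reports which account the keys belong to.

```shell
# Verify the credentials file is picked up when HOME points at /home/loki
# (read-only STS call; prints the account and ARN the keys belong to)
sudo -u loki env HOME=/home/loki aws sts get-caller-identity
```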

Create Loki configuration with S3 backend

Create /etc/loki/config.yml with the S3 storage backend, including schema configuration and retention policies.

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: info

ingester:
  wal:
    enabled: true
    dir: /var/lib/loki/wal
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
  max_transfer_retries: 0
  chunk_encoding: snappy

schema_config:
  configs:
    - from: 2023-01-01
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: loki_index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /var/lib/loki/index
    shared_store: s3
    cache_location: /var/lib/loki/boltdb-cache
  
  aws:
    s3: s3://your-loki-bucket
    region: us-east-1
    s3forcepathstyle: false

compactor:
  working_directory: /var/lib/loki/compactor
  shared_store: s3
  compaction_interval: 5m
  retention_enabled: true
  retention_delete_delay: 2h

limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_rate_mb: 10
  ingestion_burst_size_mb: 20
  per_stream_rate_limit: 5MB
  per_stream_rate_limit_burst: 20MB

# Note: with boltdb-shipper, retention is enforced by the compactor;
# table_manager retention applies only to legacy table-based stores.
table_manager:
  retention_deletes_enabled: true
  retention_period: 2160h  # 90 days

chunk_store_config:
  max_look_back_period: 0s

query_range:
  align_queries_with_step: true
  max_retries: 5
  split_queries_by_interval: 15m
  cache_results: true
  results_cache:
    cache:
      enable_fifocache: true
      fifocache:
        max_size_items: 1024
        validity: 24h

ruler:
  storage:
    type: local
    local:
      directory: /var/lib/loki/rules
  rule_path: /var/lib/loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true

Set configuration file permissions

Ensure the Loki configuration file has correct ownership and permissions for security.

sudo chown loki:loki /etc/loki/config.yml
sudo chmod 644 /etc/loki/config.yml
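Before starting the service, it is worth validating the configuration file. Recent Loki releases support a -verify-config flag; if your build lacks it, a short foreground run will fail fast on parse errors anyway.

```shell
# Parse and validate the configuration without starting the server
# (-verify-config is available in recent Loki releases)
/usr/local/bin/loki -config.file=/etc/loki/config.yml -verify-config
```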

Create S3 bucket for Loki storage

Create the S3 bucket referenced in your configuration. Replace with your actual bucket name and region.

aws s3 mb s3://your-loki-bucket --region us-east-1
aws s3api put-bucket-versioning --bucket your-loki-bucket --versioning-configuration Status=Enabled
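You can confirm the bucket is reachable with the credentials Loki will use; head-bucket returns non-zero on a missing bucket or an access-denied error, so it makes a cheap pre-flight check.

```shell
# Confirm the bucket exists and the credentials can reach it
aws s3api head-bucket --bucket your-loki-bucket && echo "bucket reachable"
```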

Create systemd service file

Create /etc/systemd/system/loki.service to manage Loki as a system service with resource limits and restart policies.

[Unit]
Description=Loki log aggregation system
After=network.target

[Service]
Type=simple
User=loki
Group=loki
ExecStart=/usr/local/bin/loki -config.file=/etc/loki/config.yml
Restart=on-failure
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=loki
KillMode=mixed
KillSignal=SIGTERM

# Resource limits
LimitNOFILE=65536
LimitNPROC=4096

# Security settings
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
# read-only (not "true") so Loki can read the AWS credentials in /home/loki/.aws
ProtectHome=read-only
ReadWritePaths=/var/lib/loki /var/log/loki

# Environment
Environment=HOME=/home/loki

[Install]
WantedBy=multi-user.target

Enable and start Loki service

Reload systemd configuration and start Loki service with automatic startup on boot.

sudo systemctl daemon-reload
sudo systemctl enable loki
sudo systemctl start loki
sudo systemctl status loki
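Loki can take a few seconds to replay its write-ahead log before /ready answers. A small helper like the following is handy in provisioning scripts; this is a sketch, and the 60-second timeout is an arbitrary default you may want to tune.

```shell
#!/bin/bash
# Poll Loki's /ready endpoint until it answers, or give up after a timeout.
wait_for_loki() {
    local url="${1:-http://localhost:3100}"
    local timeout="${2:-60}"
    local elapsed=0
    until curl -sf "$url/ready" > /dev/null; do
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "Loki not ready after ${timeout}s" >&2
            return 1
        fi
        sleep 2
        elapsed=$((elapsed + 2))
    done
    echo "Loki ready after ${elapsed}s"
}
```

Call wait_for_loki before starting Promtail or running smoke tests against the API.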

Configure log rotation

Create /etc/logrotate.d/loki to prevent Loki's own logs from consuming too much disk space.

/var/log/loki/*.log {
    daily
    missingok
    rotate 30
    compress
    delaycompress
    notifempty
    create 644 loki loki
    postrotate
        systemctl reload loki > /dev/null 2>&1 || true
    endscript
}
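You can dry-run the rotation rules before relying on them; logrotate's debug mode parses the config and prints what it would do without rotating anything.

```shell
# Debug mode: show what logrotate would do, without touching any files
sudo logrotate -d /etc/logrotate.d/loki
```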

Configure firewall rules

Open the necessary ports for Loki HTTP API and gRPC communication.

# Ubuntu / Debian (ufw)
sudo ufw allow 3100/tcp comment 'Loki HTTP API'
sudo ufw allow 9096/tcp comment 'Loki gRPC'
sudo ufw reload

# AlmaLinux / Rocky Linux (firewalld)
sudo firewall-cmd --permanent --add-port=3100/tcp
sudo firewall-cmd --permanent --add-port=9096/tcp
sudo firewall-cmd --reload

Configure retention and monitoring

Set up retention policies

Configure automatic cleanup of old log data to manage storage costs and compliance requirements. With boltdb-shipper, per-stream retention is defined under limits_config (retention_stream) and enforced by the compactor (which must run with retention_enabled: true). Add this to /etc/loki/config.yml:

# Retention configuration for different log streams
limits_config:
  retention_period: 2160h  # 90 days default

  retention_stream:
    - selector: '{job="nginx"}'
      priority: 1
      period: 720h  # 30 days for nginx logs

    - selector: '{job="application", level="error"}'
      priority: 2
      period: 4320h  # 180 days for error logs

    - selector: '{job="security"}'
      priority: 3
      period: 8760h  # 1 year for security logs

Create monitoring script

Set up a monitoring script to check Loki health and S3 connectivity. Save it as /usr/local/bin/loki-health-check.sh.

#!/bin/bash
# Loki health check script

LOKI_URL="http://localhost:3100"
LOG_FILE="/var/log/loki/health-check.log"

# Check Loki API endpoint
if curl -s "$LOKI_URL/ready" > /dev/null; then
    echo "$(date): Loki API healthy" >> "$LOG_FILE"
else
    echo "$(date): Loki API unhealthy" >> "$LOG_FILE"
    exit 1
fi

# Check S3 connectivity
if aws s3 ls s3://your-loki-bucket > /dev/null 2>&1; then
    echo "$(date): S3 connectivity OK" >> "$LOG_FILE"
else
    echo "$(date): S3 connectivity failed" >> "$LOG_FILE"
    exit 1
fi

# Check disk space
DISK_USAGE=$(df /var/lib/loki | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "$DISK_USAGE" -gt 85 ]; then
    echo "$(date): WARNING: Disk usage at ${DISK_USAGE}%" >> "$LOG_FILE"
fi

echo "$(date): Health check completed successfully" >> "$LOG_FILE"

Make the script executable:
sudo chmod +x /usr/local/bin/loki-health-check.sh
sudo chown loki:loki /usr/local/bin/loki-health-check.sh

Schedule health checks

Add a cron job to run health checks every 5 minutes and alert on failures.

sudo crontab -u loki -e

Add this line to the crontab:

*/5 * * * * /usr/local/bin/loki-health-check.sh

Configure log shipping with Promtail

Install Promtail

Download and install Promtail to ship logs from your servers to Loki.

cd /tmp
wget https://github.com/grafana/loki/releases/download/v2.9.3/promtail-linux-amd64.zip
unzip promtail-linux-amd64.zip
sudo mv promtail-linux-amd64 /usr/local/bin/promtail
sudo chmod +x /usr/local/bin/promtail

Configure Promtail

Create /etc/loki/promtail.yml to collect logs and forward them to Loki.

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /var/lib/loki/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*.log
  
  - job_name: nginx
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx
          __path__: /var/log/nginx/*.log
  
  - job_name: systemd
    journal:
      max_age: 12h
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
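Before wiring Promtail into systemd, you can check that the configuration parses. Promtail supports a -dry-run mode that reads targets and prints scraped log lines to stdout instead of pushing them to Loki; stop it with Ctrl-C once you see output.

```shell
# Parse the config and echo scraped lines locally instead of sending them
/usr/local/bin/promtail -config.file=/etc/loki/promtail.yml -dry-run
```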

Create Promtail systemd service

Create /etc/systemd/system/promtail.service so Promtail ships logs automatically.

[Unit]
Description=Promtail service
After=network.target

[Service]
Type=simple
User=loki
Group=loki
ExecStart=/usr/local/bin/promtail -config.file=/etc/loki/promtail.yml
Restart=on-failure
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=promtail

[Install]
WantedBy=multi-user.target

Set ownership on the Promtail configuration, then enable the service:

sudo chown loki:loki /etc/loki/promtail.yml
sudo systemctl daemon-reload
sudo systemctl enable promtail
sudo systemctl start promtail
sudo systemctl status promtail

Verify your setup

Test Loki API connectivity and verify logs are being stored in S3.

# Check Loki service status
sudo systemctl status loki

Test Loki API endpoints

curl -s http://localhost:3100/ready
curl -s http://localhost:3100/metrics

Check if logs are being ingested

curl -G -s "http://localhost:3100/loki/api/v1/query" --data-urlencode 'query={job="varlogs"}' | jq

Verify S3 bucket contents

aws s3 ls s3://your-loki-bucket/loki/ --recursive

Check Promtail is shipping logs

sudo systemctl status promtail
curl -s http://localhost:9080/metrics
Note: If you don't have jq installed, install it with sudo apt install jq (Ubuntu/Debian) or sudo dnf install jq (AlmaLinux/Rocky/CentOS).

Common issues

Symptom: Loki won't start (permission denied)
Cause: Incorrect file ownership or permissions
Fix: sudo chown -R loki:loki /var/lib/loki /etc/loki

Symptom: S3 access denied errors
Cause: Invalid AWS credentials or bucket permissions
Fix: Verify credentials in /home/loki/.aws/credentials and the S3 bucket policy

Symptom: High memory usage
Cause: Ingestion rate too high or retention too long
Fix: Adjust ingestion_rate_mb and retention_period in the config

Symptom: Query timeouts
Cause: Large time-range queries or missing indexes
Fix: Use smaller time ranges and ensure proper schema configuration

Symptom: Promtail not shipping logs
Cause: File permissions or path configuration
Fix: Check __path__ patterns and ensure the loki user can read the log files

Symptom: Compactor failures
Cause: S3 connectivity or insufficient permissions
Fix: Check that the S3 bucket policy allows list/delete operations
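When chasing any of the issues above, these are the first places to look; nothing Loki-specific is assumed beyond the service names used in this guide.

```shell
# Follow Loki's own logs for startup and S3 errors
sudo journalctl -u loki -f

# Recent errors only
sudo journalctl -u loki --since "10 minutes ago" -p err

# Ingestion counter: a rising value means log lines are arriving
curl -s http://localhost:3100/metrics | grep loki_distributor_lines_received_total
```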

Performance optimization

Tune Loki for high-volume logging

Optimize configuration for production workloads with high log volume.

# Add to your existing config.yml
ingester:
  chunk_target_size: 1572864  # 1.5MB
  chunk_encoding: snappy
  max_chunk_age: 2h
  
limits_config:
  ingestion_rate_mb: 50        # Increase for high volume
  ingestion_burst_size_mb: 100 # Allow traffic spikes
  max_streams_per_user: 0      # Unlimited streams
  max_line_size: 256KB         # Prevent oversized log lines
  
query_scheduler:
  max_outstanding_requests_per_tenant: 256
  
frontend:
  max_outstanding_per_tenant: 256
  compress_responses: true

Configure S3 lifecycle policies

Set up S3 lifecycle rules to automatically transition older logs to cheaper storage tiers and expire them after 2,555 days (about 7 years). Save the policy as s3-lifecycle-policy.json, then apply it.

{
  "Rules": [
    {
      "ID": "LokiLogLifecycle",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "loki/"
      },
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        },
        {
          "Days": 90,
          "StorageClass": "GLACIER"
        },
        {
          "Days": 365,
          "StorageClass": "DEEP_ARCHIVE"
        }
      ],
      "Expiration": {
        "Days": 2555
      }
    }
  ]
}

Apply the policy:

aws s3api put-bucket-lifecycle-configuration --bucket your-loki-bucket --lifecycle-configuration file://s3-lifecycle-policy.json

Next steps

Running this in production?

Need help with scale? Setting up Loki with S3 storage is straightforward; managing retention policies, capacity planning, backup strategies, and 24/7 monitoring across environments is the real operational challenge. See how we run infrastructure like this for European SaaS and fintech teams.


Need help?

Don't want to manage this yourself?

We provide managed DevOps services for businesses that depend on uptime, from initial setup to ongoing operations.