Set up Nexus Repository Manager high availability clustering for production scale

Advanced · 90 min · Apr 03, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Deploy a production-ready Nexus Repository Manager cluster with shared storage, load balancing, and automated failover for enterprise artifact management and zero-downtime operations.

Prerequisites

  • Root or sudo access on all nodes
  • Nexus Repository Pro license
  • Minimum 4GB RAM per node
  • Network connectivity between nodes
  • Shared storage infrastructure

What this solves

Nexus Repository Manager clustering provides high availability for enterprise artifact repositories, eliminating single points of failure and enabling zero-downtime deployments. This tutorial sets up a production-grade cluster with shared storage, database clustering, and load balancing to handle enterprise-scale Maven, Docker, and npm repository workloads.

Prerequisites and cluster planning

Plan your cluster architecture

Design a minimum 3-node cluster with shared storage and database backend for production reliability.

Note: Nexus clustering requires Nexus Repository Pro license. Each node needs minimum 4GB RAM and 100GB storage.
Component       | Minimum Requirements      | Recommended
----------------|---------------------------|--------------------------------
Nexus nodes     | 3 nodes, 4GB RAM each     | 5 nodes, 8GB RAM each
Shared storage  | NFS or shared filesystem  | High-performance NAS/SAN
Database        | PostgreSQL 12+ cluster    | PostgreSQL 15+ with replication
Load balancer   | HAProxy or NGINX          | Hardware load balancer

Configure hostname resolution

Set up DNS entries or hosts files for all cluster nodes and services.

203.0.113.10    nexus-node1.example.com
203.0.113.11    nexus-node2.example.com
203.0.113.12    nexus-node3.example.com
203.0.113.20    nexus-db-primary.example.com
203.0.113.21    nexus-db-secondary.example.com
203.0.113.30    nexus-cluster.example.com

Create nexus user on all nodes

Create a dedicated user account for running Nexus services with consistent UID across nodes.

sudo groupadd --gid 2000 nexus
sudo useradd --uid 2000 --gid nexus --home /opt/nexus --shell /bin/bash --create-home nexus
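Because blob files on the NFS share are owned by numeric UID, a mismatched UID on any node silently breaks file ownership. A small illustrative guard (the `check_uid` helper is ours, not part of Nexus) catches drift before it causes trouble:

```shell
# Illustrative helper: verify a user exists with the expected UID so that
# file ownership on shared NFS storage stays consistent across nodes
check_uid() {
  user=$1; expected=$2
  actual=$(id -u "$user" 2>/dev/null) || { echo "user $user missing" >&2; return 1; }
  if [ "$actual" != "$expected" ]; then
    echo "$user has UID $actual, expected $expected" >&2
    return 1
  fi
  echo "$user UID $actual OK"
}

# Run on each cluster node:
check_uid nexus 2000 || echo "fix the nexus account before continuing"
```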

Configure shared storage and database

Set up PostgreSQL cluster for Nexus metadata

Install and configure PostgreSQL primary-secondary replication for high availability database backend.

# Ubuntu 24.04 / Debian 12
sudo apt update
sudo apt install -y postgresql-15 postgresql-contrib-15
sudo systemctl enable --now postgresql

# AlmaLinux 9 / Rocky Linux 9
sudo dnf install -y epel-release
sudo dnf install -y postgresql15-server postgresql15-contrib
sudo /usr/pgsql-15/bin/postgresql-15-setup initdb
sudo systemctl enable --now postgresql-15

Create Nexus database and user

Set up dedicated database and user with appropriate permissions for Nexus clustering.

sudo -u postgres createdb nexus_db
sudo -u postgres psql -c "CREATE USER nexus_user WITH PASSWORD 'secure_nexus_password123!';"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE nexus_db TO nexus_user;"
sudo -u postgres psql -c "ALTER USER nexus_user CREATEDB;"

Configure PostgreSQL for clustering

Enable remote connections and configure authentication for Nexus cluster nodes.

# postgresql.conf (e.g. /etc/postgresql/15/main/postgresql.conf on Debian/Ubuntu)
listen_addresses = '*'
max_connections = 200
shared_buffers = 256MB
effective_cache_size = 1GB
wal_level = replica
max_wal_senders = 3
max_replication_slots = 3
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/15/main/archive/%f'

# pg_hba.conf — add these lines for Nexus cluster access and replication
host    nexus_db        nexus_user      203.0.113.0/24          md5
host    replication     postgres        203.0.113.21/32         md5

# Create the archive directory and apply the changes
sudo mkdir -p /var/lib/postgresql/15/main/archive
sudo chown postgres:postgres /var/lib/postgresql/15/main/archive
sudo systemctl restart postgresql
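The planning table calls for a replicated secondary, so nexus-db-secondary still needs to be seeded from the primary. A minimal sketch using pg_basebackup, run on the secondary; it assumes the Debian/Ubuntu data directory layout used above and the replication entry added to pg_hba.conf:

```shell
# On nexus-db-secondary: stop PostgreSQL and clear the local data directory
sudo systemctl stop postgresql
sudo -u postgres rm -rf /var/lib/postgresql/15/main

# Clone the primary; -R writes standby.signal and primary_conninfo so the
# node starts as a streaming replica
sudo -u postgres pg_basebackup \
  -h nexus-db-primary.example.com -U postgres \
  -D /var/lib/postgresql/15/main -R -X stream -P

sudo systemctl start postgresql
```

Verify replication on the primary with `sudo -u postgres psql -c "SELECT client_addr, state FROM pg_stat_replication;"`.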

Set up shared storage with NFS

Configure NFS server for shared blob storage accessible by all Nexus cluster nodes.

# Ubuntu/Debian
sudo apt install -y nfs-kernel-server
# AlmaLinux/Rocky
sudo dnf install -y nfs-utils

# On the NFS server, create the shared directory
sudo mkdir -p /srv/nexus-shared
sudo chown nexus:nexus /srv/nexus-shared
sudo chmod 755 /srv/nexus-shared

# /etc/exports — export the share to the cluster subnet
/srv/nexus-shared    203.0.113.0/24(rw,sync,no_subtree_check,no_root_squash)

sudo exportfs -ra
sudo systemctl enable --now nfs-server

Mount shared storage on cluster nodes

Configure NFS client and mount shared storage on all Nexus cluster nodes.

# Ubuntu/Debian
sudo apt install -y nfs-common
# AlmaLinux/Rocky
sudo dnf install -y nfs-utils

sudo mkdir -p /opt/nexus/shared-blobs
sudo mount -t nfs 203.0.113.20:/srv/nexus-shared /opt/nexus/shared-blobs

# /etc/fstab — make the mount persistent across reboots
203.0.113.20:/srv/nexus-shared /opt/nexus/shared-blobs nfs defaults,_netdev 0 0

sudo chown nexus:nexus /opt/nexus/shared-blobs
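A mount that succeeds can still be unwritable for the nexus user (root squash, wrong export options). This illustrative helper (`check_writable` is ours, not a system command) confirms write access before Nexus tries to use the share:

```shell
# Illustrative helper: confirm a directory exists and is writable by the
# current user, by creating and removing a temporary probe file
check_writable() {
  d=$1
  [ -d "$d" ] || { echo "$d is not a directory" >&2; return 1; }
  f="$d/.write-test.$$"
  if ( : > "$f" ) 2>/dev/null; then
    rm -f "$f"
    echo "$d is writable"
  else
    echo "$d is NOT writable" >&2
    return 1
  fi
}

# On each node, run as the nexus user, e.g.:
#   sudo -u nexus sh -c '. ./check.sh; check_writable /opt/nexus/shared-blobs'
check_writable /opt/nexus/shared-blobs || true
```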

Set up Nexus cluster nodes

Download and install Nexus Repository Pro

Install Nexus Repository Pro on each cluster node with enterprise clustering features.

cd /tmp
wget https://download.sonatype.com/nexus/professional/nexus-professional-3.45.0-01-unix.tar.gz
sudo tar -xzf nexus-professional-3.45.0-01-unix.tar.gz -C /opt/
sudo mv /opt/nexus-professional-3.45.0-01 /opt/nexus-app
sudo chown -R nexus:nexus /opt/nexus-app
sudo ln -s /opt/nexus-app/bin/nexus /usr/local/bin/nexus

Configure Nexus clustering properties

Set up cluster-specific configuration with shared database and blob storage settings.

# nexus.properties (typically under sonatype-work/nexus3/etc/)

# Clustering configuration
nexus.clustered=true
nexus.loadAsOSS=false
nexus.licenseFile=/opt/nexus-app/etc/nexus.lic

# Database configuration
nexus.datastore.enabled=true
nexus.datastore.nexus.jdbcUrl=jdbc:postgresql://nexus-db-primary.example.com:5432/nexus_db
nexus.datastore.nexus.username=nexus_user
nexus.datastore.nexus.password=secure_nexus_password123!

# Shared blob storage
nexus.blobstore.provisionDefaults=false
nexus.cleanup.retainDays=30

Configure cluster node identification

Set unique node identifiers and cluster membership configuration for each node.

# Node 1 configuration (adjust nexus.node.id for each node)
nexus.node.id=nexus-node1
nexus.cluster.id=nexus-production-cluster
nexus.hazelcast.discovery.isEnabled=true
nexus.hazelcast.discovery.type=multicast

# Network configuration
nexus.hazelcast.network.join.multicast.group=224.2.2.3
nexus.hazelcast.network.join.multicast.port=54327
nexus.hazelcast.network.port=5701

Configure JVM settings for clustering

Optimize JVM settings for cluster performance and memory management.

-Xms4G
-Xmx4G
-XX:MaxDirectMemorySize=2G
-XX:+UseG1GC
-XX:+UseStringDeduplication
-Djava.net.preferIPv4Stack=true
-Dkaraf.home=.
-Dkaraf.base=.
-Dkaraf.etc=etc/karaf
-Djava.util.logging.config.file=etc/karaf/java.util.logging.properties
-Dkaraf.data=../sonatype-work/nexus3
-Dkaraf.log=../sonatype-work/nexus3/log
-Djava.io.tmpdir=../sonatype-work/nexus3/tmp

Create Nexus systemd service

Configure systemd service for automatic startup and process management.

# /etc/systemd/system/nexus.service
[Unit]
Description=Nexus Repository Manager
After=network.target postgresql.service

[Service]
Type=forking
LimitNOFILE=65536
ExecStart=/opt/nexus-app/bin/nexus start
ExecStop=/opt/nexus-app/bin/nexus stop
User=nexus
Restart=on-abort
TimeoutSec=600

[Install]
WantedBy=multi-user.target

# Reload systemd and enable the service on every node
sudo systemctl daemon-reload
sudo systemctl enable nexus

Configure firewall for cluster communication

Open required ports for Nexus clustering and client access.

# Ubuntu/Debian (ufw)
sudo ufw allow 8081/tcp comment 'Nexus web interface'
sudo ufw allow 5701:5710/tcp comment 'Hazelcast clustering'
sudo ufw allow 54327/udp comment 'Multicast discovery'
sudo ufw reload

# AlmaLinux/Rocky (firewalld)
sudo firewall-cmd --permanent --add-port=8081/tcp
sudo firewall-cmd --permanent --add-port=5701-5710/tcp
sudo firewall-cmd --permanent --add-port=54327/udp
sudo firewall-cmd --reload

Configure load balancer and health checks

Install and configure HAProxy load balancer

Set up HAProxy for load balancing across Nexus cluster nodes with health checking.

# Ubuntu/Debian
sudo apt install -y haproxy
# AlmaLinux/Rocky
sudo dnf install -y haproxy

# /etc/haproxy/haproxy.cfg
global
    daemon
    log stdout local0
    maxconn 4096

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    option httplog
    option dontlognull
    option redispatch
    retries 3

frontend nexus_frontend
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/nexus-cluster.pem
    redirect scheme https if !{ ssl_fc }
    default_backend nexus_backend

backend nexus_backend
    balance roundrobin
    option httpchk GET /service/rest/v1/status
    http-check expect status 200
    server nexus-node1 203.0.113.10:8081 check inter 10s fall 3 rise 2
    server nexus-node2 203.0.113.11:8081 check inter 10s fall 3 rise 2
    server nexus-node3 203.0.113.12:8081 check inter 10s fall 3 rise 2
    # Stats directives must live inside a proxy section to be valid
    stats enable
    stats uri /haproxy-stats
    stats refresh 30s
    stats admin if TRUE
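Before (re)loading HAProxy it pays to validate the file; the `-c` flag parses the configuration without starting the proxy, so a typo never takes the load balancer down:

```shell
# Syntax-check the configuration, then reload only if it passes
sudo haproxy -c -f /etc/haproxy/haproxy.cfg && sudo systemctl reload haproxy
```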

Configure SSL certificates

Set up SSL certificates for secure access to the Nexus cluster. For more advanced certificate management, see our HAProxy SSL configuration tutorial.

sudo mkdir -p /etc/ssl/private
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/nexus-cluster.key \
  -out /etc/ssl/certs/nexus-cluster.crt \
  -subj "/C=US/ST=State/L=City/O=Organization/CN=nexus-cluster.example.com"
# The redirection must run as root, so wrap it in a root shell
sudo sh -c 'cat /etc/ssl/certs/nexus-cluster.crt /etc/ssl/private/nexus-cluster.key > /etc/ssl/certs/nexus-cluster.pem'
sudo chmod 600 /etc/ssl/certs/nexus-cluster.pem

Start cluster services

Start all services in the correct order and verify cluster formation.

# Start HAProxy on the load balancer host
sudo systemctl enable --now haproxy

# Start Nexus on each cluster node
sudo systemctl start nexus

# Wait ~2 minutes for startup, then check status
sudo systemctl status nexus

Configure cluster blob stores

Set up shared blob storage configuration through Nexus web interface on any cluster node.

Note: Access https://nexus-cluster.example.com and log in as admin. Recent Nexus 3.x releases write a generated initial admin password to sonatype-work/nexus3/admin.password; change it on first login.

Navigate to Administration → Repository → Blob Stores and create a File blob store with path /opt/nexus/shared-blobs.
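The same blob store can be created from the command line. This is a sketch against the Nexus REST API (a file blob store endpoint exists in recent 3.x releases, but verify the path and payload against your version's API reference; `<admin-password>` is a placeholder):

```shell
# Hypothetical sketch: create the shared File blob store via the REST API
curl -u admin:'<admin-password>' -X POST \
  -H 'Content-Type: application/json' \
  -d '{"name": "shared-blobs", "path": "/opt/nexus/shared-blobs"}' \
  https://nexus-cluster.example.com/service/rest/v1/blobstores/file
```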

Verify your setup

# Check Nexus cluster status (replace <admin-password> with your admin password)
curl -u admin:'<admin-password>' http://nexus-cluster.example.com/service/rest/v1/status

# Verify cluster nodes
curl -u admin:'<admin-password>' http://nexus-cluster.example.com/service/rest/v1/nodes

# Check HAProxy stats
curl http://nexus-cluster.example.com/haproxy-stats

# Test database connectivity
sudo -u nexus psql -h nexus-db-primary.example.com -U nexus_user -d nexus_db -c "SELECT version();"

# Verify shared storage
ls -la /opt/nexus/shared-blobs
df -h /opt/nexus/shared-blobs

Note: All cluster nodes should report as healthy in the HAProxy stats page and the Nexus cluster status endpoint.
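A cluster that has never failed over is untested. A minimal failover drill: stop Nexus on one node and confirm the virtual hostname still answers through the remaining nodes (`<admin-password>` is a placeholder):

```shell
# On nexus-node1: take the node down
sudo systemctl stop nexus

# From any client: the load-balanced hostname should still return 200
curl -fsS -o /dev/null -w '%{http_code}\n' -u admin:'<admin-password>' \
  https://nexus-cluster.example.com/service/rest/v1/status

# Restore the node; the HAProxy stats page should show it rise again
sudo systemctl start nexus
```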

Common issues

Symptom                      | Cause                       | Fix
-----------------------------|-----------------------------|----------------------------------------
Cluster nodes not joining    | Firewall blocking multicast | Open ports 5701-5710/tcp and 54327/udp
Database connection failures | PostgreSQL authentication   | Check pg_hba.conf and user permissions
Shared storage mount fails   | NFS service not running     | sudo systemctl restart nfs-server
HAProxy shows nodes down     | Health check failing        | Verify Nexus is listening on port 8081
License activation fails     | Missing license file        | Place license in /opt/nexus-app/etc/nexus.lic
High memory usage            | Insufficient JVM heap       | Increase -Xmx setting in nexus.vmoptions


Need help?

Don't want to manage this yourself?

We provide managed DevOps services for businesses that depend on uptime, from initial setup to ongoing operations.