Set up Ansible dynamic inventory plugins for AWS EC2, Azure, and Google Cloud Platform to automatically discover and manage cloud resources. This tutorial covers authentication, filtering, and unified inventory management across multiple cloud providers.
Prerequisites
- Active AWS, Azure, or GCP account with API access
- Python 3.8 or later installed
- SSH key pairs configured for cloud instances
- Basic understanding of Ansible playbooks and inventory
What this solves
Static inventory files become unmanageable when you have hundreds of cloud instances that scale up and down automatically. Ansible dynamic inventory plugins solve this by automatically discovering your AWS EC2 instances, Azure VMs, and Google Cloud Compute instances at runtime. You can group resources by tags, regions, or instance types without manually updating inventory files.
Step-by-step installation
Update system packages
Start by refreshing your package index so you install current versions of Python and pip.
sudo apt update && sudo apt upgrade -y
sudo apt install -y python3 python3-pip python3-venv
Install Ansible and cloud SDK dependencies
Install Ansible and the required cloud provider SDKs for inventory plugins to authenticate and query resources.
python3 -m venv ansible-env
source ansible-env/bin/activate
pip install ansible boto3 botocore google-auth requests
ansible-galaxy collection install amazon.aws azure.azcollection google.cloud
# The Azure collection ships a pinned requirements file (path may vary by collection version)
pip install -r ~/.ansible/collections/ansible_collections/azure/azcollection/requirements.txt
Create Ansible configuration directory
Set up a dedicated directory structure for your dynamic inventory configuration and scripts.
mkdir -p ~/ansible-dynamic/{inventories,group_vars,host_vars}
cd ~/ansible-dynamic
Configure AWS EC2 dynamic inventory
Create the AWS EC2 inventory plugin configuration with authentication and filtering options.
# inventories/aws_ec2.yml
---
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
  - us-west-2
  - eu-west-1
keyed_groups:
  - prefix: tag
    key: tags
  - prefix: instance_type
    key: instance_type
  - prefix: placement
    key: placement.region
compose:
  ansible_host: public_ip_address | default(private_ip_address)
  ec2_state: state.name
  ec2_architecture: architecture
filters:
  instance-state-name:
    - running
    - stopped
strict: false
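The `keyed_groups` entries above turn instance attributes into Ansible group names by joining the prefix, key, and value with a separator and replacing characters that are invalid in group names. A rough Python emulation of that derivation (the exact sanitization rules live in Ansible's constructed-inventory code; treat this as an approximation):

```python
import re

def sanitize(name):
    # Ansible replaces characters that are invalid in group names with underscores
    return re.sub(r"[^A-Za-z0-9_]", "_", name)

def keyed_group_names(prefix, value, separator="_"):
    """Approximate how keyed_groups derives group names from a host variable."""
    if isinstance(value, dict):
        # a dict value (e.g. tags) yields one group per key/value pair
        return [sanitize(f"{prefix}{separator}{k}{separator}{v}")
                for k, v in value.items()]
    return [sanitize(f"{prefix}{separator}{value}")]

print(keyed_group_names("tag", {"Environment": "prod", "Role": "web"}))
# → ['tag_Environment_prod', 'tag_Role_web']
print(keyed_group_names("instance_type", "t3.micro"))
# → ['instance_type_t3_micro']
```

This is why a `t3.micro` instance shows up in a group named `instance_type_t3_micro` rather than `instance_type_t3.micro`.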
Set up AWS authentication
Configure AWS credentials using environment variables or AWS CLI profiles for secure authentication.
# Option 1: Environment variables
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"
# Option 2: AWS CLI profile (recommended)
aws configure --profile ansible
echo 'export AWS_PROFILE=ansible' >> ~/.bashrc
Configure Azure dynamic inventory
Set up the Azure inventory plugin to discover virtual machines across subscriptions and resource groups.
# inventories/azure_rm.yml
---
plugin: azure.azcollection.azure_rm
include_vm_resource_groups:
  - production
  - staging
  - development
auth_source: auto
keyed_groups:
  - prefix: tag
    key: tags
  - prefix: azure_location
    key: location
  - prefix: azure_resource_group
    key: resource_group
  - prefix: azure_vm_size
    key: vm_size
compose:
  ansible_host: public_ipv4_addresses[0] | default(private_ipv4_addresses[0])
  azure_powerstate: powerstate
  azure_provisioning_state: provisioning_state
conditional_groups:
  azure_running: azure_powerstate == "running"
  azure_stopped: azure_powerstate == "stopped"
Set up Azure authentication
Authenticate with Azure using service principal credentials or Azure CLI login.
# Option 1: Azure CLI login (interactive)
az login
# Option 2: Service principal (automated)
export AZURE_CLIENT_ID="your-client-id"
export AZURE_SECRET="your-client-secret"
export AZURE_SUBSCRIPTION_ID="your-subscription-id"
export AZURE_TENANT="your-tenant-id"
Configure Google Cloud Platform inventory
Create the GCP inventory configuration to discover Compute Engine instances across projects and zones.
# inventories/gcp_compute.yml
---
plugin: google.cloud.gcp_compute
projects:
  - my-production-project
  - my-staging-project
auth_kind: serviceaccount
service_account_file: ~/gcp-ansible-key.json
zones:
  - us-central1-a
  - us-central1-b
  - europe-west1-b
keyed_groups:
  - prefix: gcp_status
    key: status
  - prefix: gcp_zone
    key: zone
  - prefix: gcp_machine_type
    key: machineType
  - prefix: gcp_labels
    key: labels
compose:
  ansible_host: networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP)
  gcp_machine_type_name: machineType.split('/')[-1]
  gcp_zone_name: zone.split('/')[-1]
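The `gcp_machine_type_name` and `gcp_zone_name` expressions work because Jinja2 lets compose expressions call Python string methods, and the GCP API returns `machineType` and `zone` as full resource URLs. The same extraction in plain Python, using a made-up URL of the documented shape (project and zone names here are hypothetical):

```python
# Hypothetical resource URL of the shape the Compute Engine API returns
machine_type = ("https://www.googleapis.com/compute/v1/projects/my-prod"
                "/zones/us-central1-a/machineTypes/e2-medium")

# split('/')[-1] keeps the last path segment: the short name
print(machine_type.split('/')[-1])  # → e2-medium

# [-3] walks two segments back, which for a zone URL is the project ID
zone = "https://www.googleapis.com/compute/v1/projects/my-prod/zones/us-central1-a"
print(zone.split('/')[-3])  # → my-prod
```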
Set up Google Cloud authentication
Create a service account and download the JSON key file for GCP API authentication.
# Create service account (run in Google Cloud Shell or with gcloud CLI)
gcloud iam service-accounts create ansible-inventory \
--display-name="Ansible Dynamic Inventory"
# Grant necessary permissions
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:ansible-inventory@PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/compute.viewer"
# Download key file
gcloud iam service-accounts keys create ~/gcp-ansible-key.json \
--iam-account=ansible-inventory@PROJECT_ID.iam.gserviceaccount.com
chmod 600 ~/gcp-ansible-key.json
Create unified inventory script
Build a wrapper script that combines all cloud provider inventories and applies common filtering rules.
#!/usr/bin/env python3
# unified-inventory.py
import json
import subprocess
import sys
from pathlib import Path


def get_inventory(plugin_file):
    """Get inventory from a specific plugin file."""
    try:
        result = subprocess.run(
            ['ansible-inventory', '-i', str(plugin_file), '--list'],
            capture_output=True, text=True, check=True
        )
        return json.loads(result.stdout)
    except subprocess.CalledProcessError as e:
        print(f"Error getting inventory from {plugin_file}: {e}", file=sys.stderr)
        return {}


def merge_inventories(*inventories):
    """Merge multiple inventory dictionaries."""
    merged = {'_meta': {'hostvars': {}}}
    for inv in inventories:
        if not inv:
            continue
        # Merge hostvars
        merged['_meta']['hostvars'].update(inv.get('_meta', {}).get('hostvars', {}))
        # Merge groups, concatenating host lists on name collisions
        for group, value in inv.items():
            if group == '_meta':
                continue
            hosts = value.get('hosts', []) if isinstance(value, dict) else value
            merged.setdefault(group, {'hosts': []})['hosts'].extend(hosts)
    return merged


def main():
    inventory_dir = Path(__file__).parent / 'inventories'

    # Get inventories from all cloud providers
    aws_inv = get_inventory(inventory_dir / 'aws_ec2.yml')
    azure_inv = get_inventory(inventory_dir / 'azure_rm.yml')
    gcp_inv = get_inventory(inventory_dir / 'gcp_compute.yml')

    # Merge all inventories
    unified = merge_inventories(aws_inv, azure_inv, gcp_inv)

    # Add custom groups
    all_hosts = list(unified['_meta']['hostvars'].keys())
    unified['all_clouds'] = {'hosts': all_hosts}

    # Group by cloud provider: a host belongs to whichever
    # provider's inventory reported hostvars for it
    for name, inv in (('aws_instances', aws_inv),
                      ('azure_instances', azure_inv),
                      ('gcp_instances', gcp_inv)):
        provider_hosts = set(inv.get('_meta', {}).get('hostvars', {}))
        unified[name] = {'hosts': [h for h in all_hosts if h in provider_hosts]}

    print(json.dumps(unified, indent=2))


if __name__ == '__main__':
    main()
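As a quick sanity check, the merge rule the wrapper is meant to implement (union the hostvars, concatenate group host lists on name collisions) can be exercised standalone with two toy inventories:

```python
def merge(*invs):
    """Union hostvars; concatenate host lists when group names collide."""
    out = {"_meta": {"hostvars": {}}}
    for inv in invs:
        out["_meta"]["hostvars"].update(inv.get("_meta", {}).get("hostvars", {}))
        for group, value in inv.items():
            if group == "_meta":
                continue
            hosts = value["hosts"] if isinstance(value, dict) else value
            out.setdefault(group, {"hosts": []})["hosts"].extend(hosts)
    return out

# Toy inventories standing in for real plugin output
aws = {"_meta": {"hostvars": {"web1": {}}}, "webservers": {"hosts": ["web1"]}}
azure = {"_meta": {"hostvars": {"web2": {}}}, "webservers": {"hosts": ["web2"]}}

merged = merge(aws, azure)
print(merged["webservers"]["hosts"])        # → ['web1', 'web2']
print(sorted(merged["_meta"]["hostvars"]))  # → ['web1', 'web2']
```

Note that a shared group name like `webservers` ends up with hosts from both clouds rather than one cloud's list overwriting the other's.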
Make the script executable and test
Set proper permissions on the inventory script and verify it can collect data from all cloud providers.
chmod +x ~/ansible-dynamic/unified-inventory.py
# Test individual cloud inventories
ansible-inventory -i inventories/aws_ec2.yml --list
ansible-inventory -i inventories/azure_rm.yml --list
ansible-inventory -i inventories/gcp_compute.yml --list
Configure Ansible to use dynamic inventory
Create an Ansible configuration file that uses your dynamic inventory by default.
[defaults]
inventory = ./unified-inventory.py
host_key_checking = False
timeout = 30
gathering = smart
fact_caching = jsonfile
fact_caching_connection = ./facts_cache
fact_caching_timeout = 3600
[inventory]
enable_plugins = amazon.aws.aws_ec2, azure.azcollection.azure_rm, google.cloud.gcp_compute
cache = True
cache_plugin = jsonfile
cache_connection = ./inventory_cache
cache_timeout = 300
Create group variables for cloud-specific settings
Set up group variables to handle different SSH keys and connection methods for each cloud provider.
# group_vars/aws_instances.yml
---
ansible_ssh_private_key_file: ~/.ssh/aws-key.pem
ansible_user: ec2-user
cloud_provider: aws

# group_vars/azure_instances.yml
---
ansible_ssh_private_key_file: ~/.ssh/azure-key
ansible_user: azureuser
cloud_provider: azure

# group_vars/gcp_instances.yml
---
ansible_ssh_private_key_file: ~/.ssh/gcp-key
ansible_user: ansible
cloud_provider: gcp
Create filtering playbook example
Build a sample playbook that demonstrates how to target specific groups discovered by the dynamic inventory.
# site.yml
---
- name: Update all running instances across clouds
  hosts: all_clouds
  gather_facts: yes
  tasks:
    - name: Update package cache (Ubuntu/Debian)
      apt:
        update_cache: yes
      when: ansible_os_family == "Debian"
      become: yes

    - name: Update package cache (RHEL/CentOS)
      yum:
        update_cache: yes
      when: ansible_os_family == "RedHat"
      become: yes

- name: Configure web servers by tag
  hosts: tag_role_webserver
  become: yes
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present

    - name: Start nginx service
      systemd:
        name: nginx
        state: started
        enabled: yes

- name: Configure database servers by environment
  hosts: tag_environment_production
  become: yes
  tasks:
    - name: Configure production database settings
      debug:
        msg: "Configuring {{ inventory_hostname }} in {{ cloud_provider }}"
Verify your setup
Test your dynamic inventory configuration by listing discovered hosts and running a simple playbook.
cd ~/ansible-dynamic
source ~/ansible-env/bin/activate

# List all discovered hosts
ansible-inventory --list

# Show the group hierarchy
ansible-inventory --graph

# Test connectivity to all hosts
ansible all -m ping

# Run the example playbook in check mode
ansible-playbook site.yml --check
Advanced filtering and grouping
Create custom inventory filters
Add advanced filtering logic to exclude certain instances or create conditional groups based on multiple criteria.
---
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
  - us-west-2
filters:
  instance-state-name:
    - running
  tag:Environment:
    - production
    - staging
# Exclude instances tagged for maintenance
exclude_filters:
  - tag:Status:
      - maintenance
      - deprecated
keyed_groups:
  - prefix: env
    key: tags.Environment
  - prefix: role
    key: tags.Role
  - prefix: team
    key: tags.Team
compose:
  ansible_host: public_ip_address | default(private_ip_address)
  instance_name: tags.Name | default('unnamed')
  backup_required: tags.Backup | default(false)
# The aws_ec2 plugin uses the constructed-inventory "groups" option
# for conditional group membership
groups:
  web_servers: "'webserver' in (tags.Role | default(''))"
  database_servers: "'database' in (tags.Role | default(''))"
  production_servers: "tags.Environment == 'production'"
  backup_enabled: "tags.Backup == 'true'"
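Conceptually, these constructed groups are just predicates evaluated against each host's variables: the plugin evaluates the Jinja2 expressions above per host and adds matching hosts to the group. The mechanics can be sketched in plain Python with predicates in place of Jinja2 (hostnames and tags here are hypothetical):

```python
def assign_conditional_groups(hostvars, conditions):
    """Rough emulation: each condition is a predicate over one host's vars."""
    groups = {name: [] for name in conditions}
    for host, vars_ in hostvars.items():
        for name, pred in conditions.items():
            if pred(vars_):
                groups[name].append(host)
    return groups

hosts = {
    "web1": {"tags": {"Role": "webserver", "Environment": "production"}},
    "db1": {"tags": {"Role": "database", "Environment": "staging"}},
}
conditions = {
    "web_servers": lambda v: "webserver" in v["tags"].get("Role", ""),
    "production_servers": lambda v: v["tags"].get("Environment") == "production",
}
print(assign_conditional_groups(hosts, conditions))
# → {'web_servers': ['web1'], 'production_servers': ['web1']}
```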
Set up inventory caching
Enable inventory caching to speed up repeated playbook runs and reduce API calls to cloud providers.
mkdir -p ~/ansible-dynamic/{inventory_cache,facts_cache}
# Test cache functionality
time ansible-inventory --list # First run (slow)
time ansible-inventory --list # Second run (fast, from cache)
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| "No inventory was parsed from the source" | Missing collection or cloud SDK | Install the required collections (ansible-galaxy collection install ...) and Python SDKs in the active venv |
| AWS authentication failed | Missing or invalid credentials | Check aws configure list and verify IAM permissions |
| Azure login required | Azure CLI not authenticated | az login or set service principal environment variables |
| GCP permission denied | Service account lacks compute.viewer role | Grant roles/compute.viewer to service account |
| Hosts show as unreachable | Wrong SSH key or username | Set correct ansible_user and ansible_ssh_private_key_file in group_vars |
| Inventory script fails | Python dependencies missing | Activate venv: source ansible-env/bin/activate |
| Slow inventory loading | Cache disabled or expired | Enable caching in ansible.cfg and check cache_timeout settings |
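Several of the failures in the table come down to a missing Python dependency in the active virtual environment. A small helper to check importability without triggering side effects (module names shown are the ones the inventory plugins rely on):

```python
import importlib.util

def is_importable(name):
    """True if the module can be found on the current sys.path."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # the parent package of a dotted name isn't installed at all
        return False

# Report which cloud SDK modules are available in this venv
for mod in ("boto3", "azure.identity", "google.auth"):
    print(f"{mod}: {'ok' if is_importable(mod) else 'MISSING'}")
```

Run it inside the venv (`source ansible-env/bin/activate`); a MISSING line points you straight at the package to install.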
Performance optimization
Optimize inventory queries
Reduce API calls and speed up inventory discovery by limiting regions and using specific filters.
---
plugin: amazon.aws.aws_ec2
# Only query regions you actually use
regions:
  - us-east-1
# Use specific filters to reduce results
filters:
  instance-state-name:
    - running
  tag:Managed:
    - ansible
# Cache results for 5 minutes
cache: true
cache_timeout: 300
Parallel inventory execution
Create a script to query multiple cloud providers in parallel for faster inventory collection.
#!/usr/bin/env python3
# parallel-inventory.py
import json
import subprocess
import concurrent.futures


def get_cloud_inventory(inventory_file):
    """Query one cloud provider's inventory; return an empty dict on failure."""
    try:
        result = subprocess.run(
            ['ansible-inventory', '-i', inventory_file, '--list'],
            capture_output=True, text=True, check=True, timeout=60
        )
        return json.loads(result.stdout)
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return {}


def main():
    inventory_files = [
        'inventories/aws_ec2.yml',
        'inventories/azure_rm.yml',
        'inventories/gcp_compute.yml'
    ]

    # Execute inventory queries in parallel
    with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
        results = executor.map(get_cloud_inventory, inventory_files)
        inventories = [inv for inv in results if inv]

    # Merge results, concatenating host lists on group-name collisions
    merged = {'_meta': {'hostvars': {}}}
    for inv in inventories:
        merged['_meta']['hostvars'].update(inv.get('_meta', {}).get('hostvars', {}))
        for group, value in inv.items():
            if group == '_meta':
                continue
            hosts = value.get('hosts', []) if isinstance(value, dict) else value
            merged.setdefault(group, {'hosts': []})['hosts'].extend(hosts)

    print(json.dumps(merged, indent=2))


if __name__ == '__main__':
    main()
chmod +x ~/ansible-dynamic/parallel-inventory.py
Integration with CI/CD
Create environment-specific inventories
Set up different inventory configurations for development, staging, and production environments.
# inventories/production.yml
---
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
  - us-west-2
filters:
  instance-state-name:
    - running
  tag:Environment:
    - production
keyed_groups:
  - prefix: prod
    key: tags.Role
compose:
  # Use private IPs in production for security
  ansible_host: private_ip_address
# Use specific inventory in playbooks
ansible-playbook -i inventories/production.yml deploy.yml
ansible-playbook -i inventories/staging.yml test.yml
Next steps
- Implement Ansible testing with Molecule and TestInfra for infrastructure automation validation
- Configure Ansible AWX for enterprise automation with RBAC and inventory management
- Configure Ansible Vault integration with HashiCorp Vault for secure secrets management
- Implement Ansible custom modules and plugins for specialized automation tasks
- Set up Ansible playbook performance monitoring with callback plugins and metrics collection
Automated install script
Run this to automate the entire setup
#!/usr/bin/env bash
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Default values
ANSIBLE_USER="${1:-$USER}"
INSTALL_DIR="$HOME/ansible-dynamic"
VENV_DIR="$HOME/ansible-env"
# Usage function
usage() {
echo "Usage: $0 [ansible_user]"
echo " ansible_user: User to configure Ansible for (default: current user)"
echo ""
echo "Example: $0 ansible"
exit 1
}
# Logging functions
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Cleanup function for rollback
cleanup() {
log_error "Installation failed. Cleaning up..."
if [ -d "$VENV_DIR" ]; then
rm -rf "$VENV_DIR"
fi
if [ -d "$INSTALL_DIR" ]; then
rm -rf "$INSTALL_DIR"
fi
exit 1
}
trap cleanup ERR
# Detect distribution
detect_distro() {
if [ -f /etc/os-release ]; then
. /etc/os-release
case "$ID" in
ubuntu|debian)
PKG_MGR="apt"
PKG_UPDATE="apt update"
PKG_INSTALL="apt install -y"
PYTHON_PKG="python3 python3-pip python3-venv python3-dev build-essential"
;;
almalinux|rocky|centos|rhel|ol|fedora)
PKG_MGR="dnf"
PKG_UPDATE="dnf update -y"
PKG_INSTALL="dnf install -y"
PYTHON_PKG="python3 python3-pip python3-devel gcc"
;;
amzn)
PKG_MGR="yum"
PKG_UPDATE="yum update -y"
PKG_INSTALL="yum install -y"
PYTHON_PKG="python3 python3-pip python3-devel gcc"
;;
*)
log_error "Unsupported distribution: $ID"
exit 1
;;
esac
else
log_error "Cannot detect distribution. /etc/os-release not found."
exit 1
fi
log_info "Detected distribution: $ID"
}
# Check prerequisites
check_prerequisites() {
if [ "$EUID" -eq 0 ]; then
log_error "Please run this script as a regular user, not root"
exit 1
fi
if ! command -v sudo &> /dev/null; then
log_error "sudo is required but not installed"
exit 1
fi
log_success "Prerequisites check passed"
}
# Update system packages
update_system() {
log_info "[1/8] Updating system packages..."
sudo $PKG_UPDATE
log_success "System packages updated"
}
# Install Python and dependencies
install_python() {
log_info "[2/8] Installing Python and dependencies..."
sudo $PKG_INSTALL $PYTHON_PKG
log_success "Python and dependencies installed"
}
# Create Python virtual environment
create_venv() {
log_info "[3/8] Creating Python virtual environment..."
if [ -d "$VENV_DIR" ]; then
log_warning "Virtual environment already exists. Removing..."
rm -rf "$VENV_DIR"
fi
python3 -m venv "$VENV_DIR"
source "$VENV_DIR/bin/activate"
pip install --upgrade pip
log_success "Virtual environment created at $VENV_DIR"
}
# Install Ansible and cloud SDKs
install_ansible() {
log_info "[4/8] Installing Ansible and cloud provider SDKs..."
source "$VENV_DIR/bin/activate"
pip install ansible boto3 botocore azure-identity google-auth requests google-cloud-compute
ansible-galaxy collection install amazon.aws azure.azcollection google.cloud
# The Azure collection ships a pinned requirements file (path may vary by collection version)
pip install -r "$HOME/.ansible/collections/ansible_collections/azure/azcollection/requirements.txt"
log_success "Ansible and cloud SDKs installed"
}
# Create directory structure
create_directories() {
log_info "[5/8] Creating Ansible directory structure..."
mkdir -p "$INSTALL_DIR"/{inventories,group_vars,host_vars,playbooks}
chmod 755 "$INSTALL_DIR"
chmod 755 "$INSTALL_DIR"/{inventories,group_vars,host_vars,playbooks}
log_success "Directory structure created at $INSTALL_DIR"
}
# Create AWS inventory configuration
create_aws_inventory() {
log_info "[6/8] Creating AWS EC2 dynamic inventory configuration..."
cat > "$INSTALL_DIR/inventories/aws_ec2.yml" << 'EOF'
---
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
  - us-west-2
  - eu-west-1
keyed_groups:
  - prefix: tag
    key: tags
  - prefix: instance_type
    key: instance_type
  - prefix: placement
    key: placement.region
compose:
  ansible_host: public_ip_address | default(private_ip_address)
  ec2_state: state.name
  ec2_architecture: architecture
filters:
  instance-state-name:
    - running
strict: false
EOF
chmod 644 "$INSTALL_DIR/inventories/aws_ec2.yml"
log_success "AWS inventory configuration created"
}
# Create Azure inventory configuration
create_azure_inventory() {
log_info "[7/8] Creating Azure dynamic inventory configuration..."
cat > "$INSTALL_DIR/inventories/azure_rm.yml" << 'EOF'
---
plugin: azure.azcollection.azure_rm
auth_source: auto
keyed_groups:
  - prefix: tag
    key: tags
  - prefix: azure_location
    key: location
  - prefix: azure_resource_group
    key: resource_group
  - prefix: azure_vm_size
    key: vm_size
compose:
  ansible_host: public_ipv4_addresses[0] | default(private_ipv4_addresses[0])
  azure_powerstate: powerstate
  azure_provisioning_state: provisioning_state
conditional_groups:
  azure_running: azure_powerstate == "running"
  azure_stopped: azure_powerstate == "stopped"
EOF
chmod 644 "$INSTALL_DIR/inventories/azure_rm.yml"
cat > "$INSTALL_DIR/inventories/gcp_compute.yml" << 'EOF'
---
plugin: google.cloud.gcp_compute
# projects is required; replace with your own project ID(s)
projects:
  - my-project-id
auth_kind: serviceaccount
keyed_groups:
  - prefix: gcp_status
    key: status
  - prefix: gcp_zone
    key: zone
  - prefix: gcp_machine_type
    key: machineType
  - prefix: gcp_network
    key: networkInterfaces[0].network.name
compose:
  ansible_host: networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP)
  gcp_project: zone.split('/')[-3]
filters:
  - status = "RUNNING"
EOF
chmod 644 "$INSTALL_DIR/inventories/gcp_compute.yml"
log_success "Azure and GCP inventory configurations created"
}
# Create example scripts and documentation
create_examples() {
log_info "[8/8] Creating example scripts and documentation..."
# Create activation script
cat > "$INSTALL_DIR/activate.sh" << EOF
#!/bin/bash
source "$VENV_DIR/bin/activate"
export ANSIBLE_INVENTORY="$INSTALL_DIR/inventories/"
cd "$INSTALL_DIR"
echo "Ansible dynamic inventory environment activated"
echo "Available inventory files:"
ls -la inventories/
EOF
chmod 755 "$INSTALL_DIR/activate.sh"
# Create example ansible.cfg
cat > "$INSTALL_DIR/ansible.cfg" << 'EOF'
[defaults]
inventory = ./inventories/
host_key_checking = False
timeout = 30
gathering = smart
fact_caching = memory
stdout_callback = yaml
[inventory]
enable_plugins = amazon.aws.aws_ec2, azure.azcollection.azure_rm, google.cloud.gcp_compute
EOF
chmod 644 "$INSTALL_DIR/ansible.cfg"
# Create README with instructions
cat > "$INSTALL_DIR/README.md" << 'EOF'
# Ansible Dynamic Inventory Setup
## Quick Start
1. Activate environment: `source ./activate.sh`
2. Configure cloud credentials (see sections below)
3. Test inventory: `ansible-inventory --list`
## AWS Configuration
Export credentials or use AWS CLI profile:
```bash
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"
```
## Azure Configuration
Login with Azure CLI or set service principal:
```bash
az login
# OR
export AZURE_CLIENT_ID="your-client-id"
export AZURE_SECRET="your-client-secret"
export AZURE_SUBSCRIPTION_ID="your-subscription-id"
export AZURE_TENANT="your-tenant-id"
```
## GCP Configuration
Set up service account key:
```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
```
## Testing
```bash
ansible-inventory -i inventories/aws_ec2.yml --list
ansible-inventory -i inventories/azure_rm.yml --list
ansible-inventory -i inventories/gcp_compute.yml --list
```
EOF
chmod 644 "$INSTALL_DIR/README.md"
log_success "Example scripts and documentation created"
}
# Verify installation
verify_installation() {
log_info "Verifying installation..."
source "$VENV_DIR/bin/activate"
if ! ansible --version > /dev/null 2>&1; then
log_error "Ansible installation verification failed"
exit 1
fi
if ! python -c "import boto3, azure.identity, google.cloud" > /dev/null 2>&1; then
log_error "Cloud SDK verification failed"
exit 1
fi
if [ ! -f "$INSTALL_DIR/inventories/aws_ec2.yml" ]; then
log_error "Configuration files verification failed"
exit 1
fi
log_success "Installation verification completed successfully"
}
# Main execution
main() {
if [ "$#" -gt 1 ]; then
usage
fi
echo -e "${GREEN}Ansible Dynamic Inventory Installation Script${NC}"
echo "=============================================="
detect_distro
check_prerequisites
update_system
install_python
create_venv
install_ansible
create_directories
create_aws_inventory
create_azure_inventory
create_examples
verify_installation
echo ""
log_success "Ansible dynamic inventory setup completed successfully!"
echo ""
echo -e "${BLUE}Next steps:${NC}"
echo "1. cd $INSTALL_DIR"
echo "2. source ./activate.sh"
echo "3. Configure your cloud credentials (see README.md)"
echo "4. Test with: ansible-inventory --list"
echo ""
echo -e "${YELLOW}Documentation: $INSTALL_DIR/README.md${NC}"
}
main "$@"
Review the script before running. Execute with: bash install.sh