Set up Ansible dynamic inventory plugins for AWS EC2, Azure VMs, and Google Cloud instances with automated discovery, credential management, and performance optimization across multiple cloud providers.
Prerequisites
- Active AWS account with EC2 instances
- Azure subscription with virtual machines
- Google Cloud project with compute instances
- Basic understanding of Ansible concepts
- Cloud CLI tools installed (aws, az, gcloud)
What this solves
Dynamic inventory in Ansible automatically discovers and manages infrastructure across cloud providers without maintaining static inventory files. This tutorial configures Ansible to dynamically discover AWS EC2 instances, Azure VMs, and GCP compute instances using native cloud plugins. You'll set up automated credential management, implement inventory caching for performance, and create unified multi-cloud inventory configurations that scale with your infrastructure.
Step-by-step installation
Install Ansible and required dependencies
Install Ansible and Python libraries needed for cloud provider plugins.
sudo apt update
sudo apt install -y python3-pip python3-venv ansible
python3 -m pip install --user pipx
pipx ensurepath
Install cloud provider libraries
Install Python libraries for AWS, Azure, and GCP integration, then the Ansible collections that provide the inventory plugins. The azure.azcollection collection ships its own requirements file; install it after the collection so the full Azure SDK dependency set is present.
pip3 install --user boto3 botocore azure-identity azure-mgmt-compute google-auth google-cloud-compute
ansible-galaxy collection install amazon.aws azure.azcollection google.cloud
pip3 install --user -r ~/.ansible/collections/ansible_collections/azure/azcollection/requirements.txt
Create Ansible project structure
Set up directory structure for dynamic inventory configurations.
mkdir -p ~/ansible-multi-cloud/{inventories,group_vars,host_vars,playbooks,scripts}
cd ~/ansible-multi-cloud
mkdir -p inventories/{aws,azure,gcp,unified}
Configure AWS dynamic inventory
Create the AWS EC2 plugin configuration with automated discovery and filtering. Save it as inventories/aws/aws_ec2.yml; the plugin only parses files whose names end in aws_ec2.yml or aws_ec2.yaml.
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
  - us-west-2
  - eu-west-1
strict: false
keyed_groups:
  - key: tags
    prefix: tag
  - key: instance_type
    prefix: instance_type
  - key: placement.region
    prefix: aws_region
  - key: state.name
    prefix: state
groups:
  web_servers: "'web' in (tags.Role | default(''))"
  database_servers: "'database' in (tags.Role | default(''))"
  production: "tags.Environment | default('') == 'production'"
  staging: "tags.Environment | default('') == 'staging'"
compose:
  ansible_host: public_ip_address | default(private_ip_address)
  ec2_state: state.name
  ec2_architecture: architecture
  ec2_instance_type: instance_type
  ec2_region: placement.region
  ec2_availability_zone: placement.availability_zone
cache: true
cache_plugin: jsonfile
cache_timeout: 3600
cache_connection: /tmp/ansible-aws-cache
use_contrib_script_compatible_sanitization: true
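Keyed groups sanitize tag keys and values before they become group names: anything that is not a letter, digit, or underscore is replaced with an underscore. A quick shell sketch of that rule (illustrative only, not the plugin's actual code):

```shell
# Illustrative sketch of Ansible's default group-name sanitization:
# every character outside [A-Za-z0-9_] becomes an underscore.
sanitize() {
  printf '%s' "$1" | sed 's/[^A-Za-z0-9_]/_/g'
}

# A host tagged Role=web-frontend lands in a group like:
echo "tag_Role_$(sanitize 'web-frontend')"
```

So an instance tagged `Role: web-frontend` shows up under `tag_Role_web_frontend` in `ansible-inventory --graph`, which is worth knowing before you write group_vars file names.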
Configure Azure dynamic inventory
Set up the Azure plugin with resource group filtering and custom grouping. Save it as inventories/azure/azure_rm.yml; the plugin requires a filename ending in azure_rm.yml or azure_rm.yaml.
plugin: azure.azcollection.azure_rm
include_vm_resource_groups:
  - "*"
authentication_source: auto
strict: false
keyed_groups:
  - key: tags
    prefix: tag
  - key: location
    prefix: location
  - key: resource_group
    prefix: rg
  - key: os_profile.admin_username
    prefix: user
groups:
  web_servers: "'web' in tags.get('role', '')"
  database_servers: "'database' in tags.get('role', '')"
  production: "tags.get('environment') == 'production'"
  staging: "tags.get('environment') == 'staging'"
  linux: "os_disk.os_type == 'Linux'"
  windows: "os_disk.os_type == 'Windows'"
compose:
  ansible_host: public_ips[0] | default(private_ips[0])
  azure_location: location
  azure_resource_group: resource_group
  azure_vm_size: vm_size
  azure_os_type: os_disk.os_type
  azure_power_state: powerstate
cache: true
cache_plugin: jsonfile
cache_timeout: 3600
cache_connection: /tmp/ansible-azure-cache
use_contrib_script_compatible_sanitization: true
Configure GCP dynamic inventory
Create the Google Cloud Compute plugin configuration with zone-based grouping. Save it as inventories/gcp/gcp_compute.yml.
plugin: google.cloud.gcp_compute
projects:
  - your-gcp-project-id
auth_kind: serviceaccount
service_account_file: ~/.config/gcloud/ansible-sa-key.json  # key created in the GCP auth step below
strict: false
keyed_groups:
  - key: labels
    prefix: label
  - key: machineType
    prefix: machine_type
  - key: zone
    prefix: zone
  - key: status
    prefix: status
groups:
  web_servers: "labels.get('role') == 'web'"
  database_servers: "labels.get('role') == 'database'"
  production: "labels.get('environment') == 'production'"
  staging: "labels.get('environment') == 'staging'"
  preemptible: "scheduling.preemptible == true"
compose:
  ansible_host: networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP)
  gcp_machine_type: machineType.split('/')[-1]
  gcp_zone: zone.split('/')[-1]
  gcp_project: selfLink.split('/')[6]
  gcp_status: status
  gcp_preemptible: scheduling.preemptible
cache: true
cache_plugin: jsonfile
cache_timeout: 3600
cache_connection: /tmp/ansible-gcp-cache
use_contrib_script_compatible_sanitization: true
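GCP returns many fields as full resource URLs, which is why the compose section applies `split('/')[-1]` to keep only the trailing segment. The shell equivalent of that trim (the project name below is a placeholder):

```shell
# Shell analogue of the Jinja2 split('/')[-1] used in the compose section:
# drop everything up to and including the last slash of a resource URL.
machine_type_url="https://www.googleapis.com/compute/v1/projects/demo-project/zones/us-central1-a/machineTypes/e2-medium"
echo "${machine_type_url##*/}"
```

Without the trim, the raw URL would be stored in `gcp_machine_type` and the derived group names would be unusable.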
Create unified multi-cloud inventory
Configure a constructed inventory that layers cross-cloud groups on top of the per-cloud sources. The constructed plugin only sees hosts that earlier inventory sources have already loaded, so this file must parse last; the inventories/unified/ directory sorts after aws/, azure/, and gcp/ when the whole inventories/ directory is used as the inventory.
plugin: constructed
strict: false
compose:
  cloud_provider: "'aws' if ec2_instance_type is defined else ('azure' if azure_vm_size is defined else ('gcp' if gcp_machine_type is defined else 'unknown'))"
  normalized_hostname: inventory_hostname.split('.')[0]
  instance_size: ec2_instance_type | default(azure_vm_size) | default(gcp_machine_type) | default('unknown')
keyed_groups:
  - key: cloud_provider
    prefix: cloud
  - key: instance_size
    prefix: size
groups:
  all_web_servers: "'web_servers' in group_names"
  all_database_servers: "'database_servers' in group_names"
  all_production: "'production' in group_names"
  all_staging: "'staging' in group_names"
  multi_cloud_web: "'web_servers' in group_names and cloud_provider | default('unknown') != 'unknown'"
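The size keyed group falls back across providers: EC2 instance type first, then Azure VM size, then GCP machine type, then a literal "unknown". The same precedence expressed with shell parameter expansion (the values are illustrative):

```shell
# Illustrative fallback chain mirroring the size keyed group:
#   ec2_instance_type, else azure_vm_size, else gcp_machine_type, else 'unknown'
EC2_TYPE=""                 # empty on a non-AWS host
AZ_SIZE="Standard_B2s"      # set on an Azure host
GCP_TYPE=""
size="${EC2_TYPE:-${AZ_SIZE:-${GCP_TYPE:-unknown}}}"
echo "size_${size}"
```

The first non-empty value wins, so an Azure host ends up in a group like `size_Standard_B2s` while hosts with no recognizable size land in `size_unknown`.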
Configure AWS credentials
Set up AWS credentials using IAM roles or access keys with minimal required permissions.
mkdir -p ~/.aws
aws configure
# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY

# ~/.aws/config
[default]
region = us-east-1
output = json

[profile ansible]
region = us-east-1
role_arn = arn:aws:iam::123456789012:role/AnsibleInventoryRole
source_profile = default
Configure Azure authentication
Set up Azure service principal authentication for automated inventory discovery.
az login
az account set --subscription "your-subscription-id"
az ad sp create-for-rbac --name "ansible-inventory" --role "Reader" --scopes "/subscriptions/your-subscription-id"
# ~/.azure/credentials (restrict access with: chmod 600 ~/.azure/credentials)
[default]
subscription_id = your-subscription-id
client_id = your-client-id
secret = your-client-secret
tenant = your-tenant-id
Configure GCP authentication
Set up Google Cloud service account for inventory access with minimal permissions.
gcloud auth application-default login
gcloud config set project your-gcp-project-id
gcloud iam service-accounts create ansible-inventory \
--display-name="Ansible Inventory Service Account"
gcloud projects add-iam-policy-binding your-gcp-project-id \
--member="serviceAccount:ansible-inventory@your-gcp-project-id.iam.gserviceaccount.com" \
--role="roles/compute.viewer"
gcloud iam service-accounts keys create ~/.config/gcloud/ansible-sa-key.json \
--iam-account=ansible-inventory@your-gcp-project-id.iam.gserviceaccount.com
Create Ansible configuration
Configure Ansible with optimized settings for dynamic inventory performance.
[defaults]
inventory = inventories/
host_key_checking = False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible-facts-cache
fact_caching_timeout = 86400
retry_files_enabled = False
stdout_callback = yaml
bin_ansible_callbacks = True
interpreter_python = auto_silent
[inventory]
enable_plugins = amazon.aws.aws_ec2, azure.azcollection.azure_rm, google.cloud.gcp_compute, constructed, auto
cache = True
cache_plugin = jsonfile
cache_timeout = 3600
cache_connection = ~/.ansible/tmp/inventory_cache
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
pipelining = True
control_path_dir = ~/.ansible/tmp/cp
control_path = %(directory)s/%%h-%%p-%%r
Configure inventory caching optimization
Set up connection tuning that applies to every discovered host. Save these variables as group_vars/all.yml; note that inventory and fact caching are configured in ansible.cfg, not here, because cache plugin settings are not host variables.
---
# Connection and performance tuning applied to all hosts
ansible_python_interpreter: /usr/bin/python3
ansible_ssh_pipelining: true
ansible_timeout: 30
ansible_ssh_retries: 3
# Cloud-specific optimizations
ansible_host_key_checking: false
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
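Provider-level defaults can hang off the `cloud_*` groups the unified inventory creates. A sketch of one such file (the username and key path are assumptions; match them to your images):

```yaml
# group_vars/cloud_aws.yml - hypothetical per-provider connection defaults
ansible_user: ec2-user
ansible_ssh_private_key_file: ~/.ssh/aws-ansible.pem
```

Equivalent files for `cloud_azure` and `cloud_gcp` let one playbook target `all_web_servers` without hard-coding per-cloud login details.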
Create inventory refresh script
Automate inventory cache refresh with parallel per-cloud discovery and basic error handling. Save the script as ~/ansible-multi-cloud/refresh-inventory.sh.
#!/bin/bash
set -euo pipefail

INVENTORY_DIR="$(dirname "$0")/inventories"
CACHE_DIR="$HOME/.ansible/tmp/inventory_cache"
LOG_FILE="$HOME/ansible-inventory-refresh.log"

# Function to log messages
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
}

# Function to refresh a specific cloud inventory
refresh_cloud_inventory() {
    local cloud="$1"
    local inventory_file="$2"
    log_message "Refreshing $cloud inventory..."
    if ansible-inventory -i "$inventory_file" --list > /dev/null 2>&1; then
        log_message "$cloud inventory refresh successful"
        return 0
    else
        log_message "ERROR: $cloud inventory refresh failed"
        return 1
    fi
}

# Clear old caches (the ansible.cfg cache plus the per-cloud plugin caches)
log_message "Starting inventory refresh process"
rm -rf "$CACHE_DIR" /tmp/ansible-aws-cache /tmp/ansible-azure-cache /tmp/ansible-gcp-cache
mkdir -p "$CACHE_DIR"

# Refresh each cloud provider inventory in parallel
refresh_cloud_inventory "AWS" "$INVENTORY_DIR/aws/aws_ec2.yml" &
refresh_cloud_inventory "Azure" "$INVENTORY_DIR/azure/azure_rm.yml" &
refresh_cloud_inventory "GCP" "$INVENTORY_DIR/gcp/gcp_compute.yml" &

# Wait for all background jobs
wait

log_message "Inventory refresh process completed"

# Test unified inventory
log_message "Testing unified inventory..."
if ansible-inventory -i "$INVENTORY_DIR" --list > /dev/null 2>&1; then
    log_message "Unified inventory test successful"
else
    log_message "ERROR: Unified inventory test failed"
    exit 1
fi
chmod +x ~/ansible-multi-cloud/refresh-inventory.sh
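The script clears the cache unconditionally; a gentler variant refreshes only when a cache file has outlived its timeout. A sketch of that check (assumes GNU coreutils for `stat -c`; the cache path in the example is hypothetical):

```shell
# Return success (0) when the cache file is missing or older than the timeout.
cache_is_stale() {
  local file="$1" timeout_secs="$2"
  [ -f "$file" ] || return 0      # missing cache counts as stale
  local age=$(( $(date +%s) - $(stat -c %Y "$file") ))
  [ "$age" -gt "$timeout_secs" ]
}

# Example: only trigger a refresh when the cache entry is older than an hour.
if cache_is_stale /tmp/ansible-aws-cache/ansible_inventory_aws 3600; then
  echo "cache stale - refresh needed"
fi
```

Using mtime this way keeps cron or systemd runs cheap when nothing has expired, at the cost of occasionally serving slightly older data than a forced wipe would.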
Set up automated credential rotation
Configure credential management with automated rotation and secure storage. Save the script as ~/ansible-multi-cloud/scripts/credential-manager.sh.
#!/bin/bash
set -euo pipefail

# Credential rotation script
CRED_DIR="$HOME/.ansible-credentials"
BACKUP_DIR="$CRED_DIR/backup"
LOG_FILE="$CRED_DIR/rotation.log"

# Create secure credential directories
mkdir -p "$CRED_DIR" "$BACKUP_DIR"
chmod 700 "$CRED_DIR" "$BACKUP_DIR"

# Function to rotate AWS credentials
rotate_aws_credentials() {
    echo "$(date): Rotating AWS credentials" >> "$LOG_FILE"
    # Backup current credentials
    cp ~/.aws/credentials "$BACKUP_DIR/aws-credentials-$(date +%Y%m%d-%H%M%S)"
    # Create new access key (implement your rotation logic)
    # aws iam create-access-key --user-name ansible-user
    echo "AWS credential rotation completed" >> "$LOG_FILE"
}

# Function to rotate Azure credentials
rotate_azure_credentials() {
    echo "$(date): Rotating Azure credentials" >> "$LOG_FILE"
    # Backup current credentials
    cp ~/.azure/credentials "$BACKUP_DIR/azure-credentials-$(date +%Y%m%d-%H%M%S)"
    # Rotate service principal secret
    # az ad sp credential reset --name ansible-inventory
    echo "Azure credential rotation completed" >> "$LOG_FILE"
}

# Function to rotate GCP credentials
rotate_gcp_credentials() {
    echo "$(date): Rotating GCP credentials" >> "$LOG_FILE"
    # Backup current service account key
    cp ~/.config/gcloud/ansible-sa-key.json "$BACKUP_DIR/gcp-key-$(date +%Y%m%d-%H%M%S).json"
    # Create new service account key
    # gcloud iam service-accounts keys create ~/.config/gcloud/ansible-sa-key-new.json
    echo "GCP credential rotation completed" >> "$LOG_FILE"
}

# Select the rotation target via environment variable
case "${ROTATE_CLOUD:-}" in
    aws) rotate_aws_credentials ;;
    azure) rotate_azure_credentials ;;
    gcp) rotate_gcp_credentials ;;
    all)
        rotate_aws_credentials
        rotate_azure_credentials
        rotate_gcp_credentials
        ;;
    *) echo "Usage: ROTATE_CLOUD=[aws|azure|gcp|all] $0" ;;
esac
chmod 700 ~/ansible-multi-cloud/scripts/credential-manager.sh
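Backups accumulate under BACKUP_DIR on every rotation, so a small retention helper keeps the directory bounded. A sketch, assuming GNU find (the 30-day window is an arbitrary choice):

```shell
# Delete credential backups older than the retention window (in days).
prune_backups() {
  local dir="$1" keep_days="$2"
  find "$dir" -type f -mtime +"$keep_days" -delete
}

# Example: keep only the last 30 days of backups.
mkdir -p "$HOME/.ansible-credentials/backup"
prune_backups "$HOME/.ansible-credentials/backup" 30
```

Calling this at the end of the rotation script (or from the same timer) stops old secrets from lingering indefinitely on disk.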
Configure systemd timer for automated refresh
Set up systemd service and timer for automated inventory refresh.
# /etc/systemd/system/ansible-inventory-refresh.service
[Unit]
Description=Ansible Dynamic Inventory Refresh
After=network.target
[Service]
Type=oneshot
User=ansible
ExecStart=/home/ansible/ansible-multi-cloud/refresh-inventory.sh
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/ansible-inventory-refresh.timer
[Unit]
Description=Run Ansible inventory refresh every 30 minutes
Requires=ansible-inventory-refresh.service
[Timer]
OnCalendar=*:0/30
Persistent=true
[Install]
WantedBy=timers.target
sudo systemctl daemon-reload
sudo systemctl enable ansible-inventory-refresh.timer
sudo systemctl start ansible-inventory-refresh.timer
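If several control nodes run the same timer, a randomized start delay avoids synchronized bursts of cloud API calls. An optional drop-in using the standard systemd timer option:

```ini
# /etc/systemd/system/ansible-inventory-refresh.timer.d/override.conf
[Timer]
RandomizedDelaySec=120
```

Run `sudo systemctl daemon-reload` after adding the drop-in; each activation then starts at a random point within the 120-second window.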
Test dynamic inventory discovery
Test individual cloud inventories
Verify each cloud provider inventory is working correctly.
# Test AWS inventory
ansible-inventory -i inventories/aws/aws_ec2.yml --list
ansible-inventory -i inventories/aws/aws_ec2.yml --graph

# Test Azure inventory
ansible-inventory -i inventories/azure/azure_rm.yml --list
ansible-inventory -i inventories/azure/azure_rm.yml --graph

# Test GCP inventory
ansible-inventory -i inventories/gcp/gcp_compute.yml --list
ansible-inventory -i inventories/gcp/gcp_compute.yml --graph
Test unified multi-cloud inventory
Verify the unified inventory combines all cloud providers correctly.
# List all discovered hosts
ansible-inventory --list | jq '.'

# Show inventory graph
ansible-inventory --graph

# Test specific groups
ansible all_web_servers --list-hosts
ansible production --list-hosts
ansible cloud_aws --list-hosts
Test inventory filtering and grouping
Verify custom groups and filtering are working as expected.
# Test environment-based grouping (slice patterns keep the runs small)
ansible 'production[0:4]' -m ping
ansible 'staging[0:2]' -m ping

# Test role-based grouping
ansible 'web_servers[0:2]' -m shell -a "uptime"
ansible 'database_servers[0:1]' -m shell -a "df -h"

# Test cloud-specific groups
ansible 'cloud_aws[0:1]' -m setup -a "filter=ansible_distribution*"
ansible 'cloud_azure[0:1]' -m setup -a "filter=ansible_distribution*"
Verify your setup
# Check Ansible version and inventory plugins
ansible --version
ansible-doc -t inventory -l | grep -E '(aws_ec2|azure_rm|gcp_compute)'

# Verify cloud CLI tools
aws --version
az --version
gcloud --version

# Test inventory cache
ls -la ~/.ansible/tmp/inventory_cache/
ansible-inventory --list | jq 'keys' | head -20

# Check systemd timer status
sudo systemctl status ansible-inventory-refresh.timer
sudo journalctl -u ansible-inventory-refresh.service --since "1 hour ago"
Performance optimization
| Optimization | Configuration | Performance Impact |
|---|---|---|
| Inventory caching | cache_timeout: 3600 | 90% faster subsequent runs |
| Parallel discovery | Background refresh script | 60% faster initial discovery |
| Regional filtering | Limit regions in config | 50% fewer API calls |
| SSH connection reuse | ControlMaster=auto | 30% faster task execution |
| Fact caching | fact_caching = jsonfile | 80% faster fact gathering |
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| AWS inventory empty | Insufficient IAM permissions | Add ec2:DescribeInstances permission to IAM user/role |
| Azure authentication fails | Service principal expired | Run az ad sp credential reset and update credentials |
| GCP inventory timeout | Large project with many instances | Add zones: ["us-central1-a"] to limit scope |
| Inventory cache stale | Cache timeout too long | Reduce cache_timeout or run refresh script |
| SSH connection failures | Wrong key or security groups | Verify ansible_ssh_private_key_file and firewall rules |
| Memory usage high | Large inventory without caching | Enable inventory and fact caching with reasonable timeouts |
| Group variables not applied | Wrong group naming in keyed_groups | Check group name matches in group_vars/ directory |
Next steps
- Implement Ansible testing with Molecule and TestInfra for infrastructure automation validation
- Configure Ansible AWX 24.6 for enterprise automation with RBAC and inventory management
- Set up Ansible Vault encryption for secrets management with automated key rotation
- Implement Ansible playbook optimization with performance tuning and parallel execution
Automated install script
Run this to automate the entire setup
#!/usr/bin/env bash
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration
ANSIBLE_DIR="$HOME/ansible-multi-cloud"
VENV_DIR="$HOME/.local/share/ansible-venv"
# Usage function
usage() {
echo "Usage: $0 [OPTIONS]"
echo "Options:"
echo " --gcp-project-id PROJECT_ID Set GCP project ID (optional)"
echo " --aws-regions REGIONS Comma-separated AWS regions (default: us-east-1,us-west-2)"
echo " --help Show this help message"
exit 1
}
# Parse arguments
GCP_PROJECT_ID=""
AWS_REGIONS="us-east-1,us-west-2,eu-west-1"
while [[ $# -gt 0 ]]; do
case $1 in
--gcp-project-id)
GCP_PROJECT_ID="$2"
shift 2
;;
--aws-regions)
AWS_REGIONS="$2"
shift 2
;;
--help)
usage
;;
*)
echo -e "${RED}Unknown option: $1${NC}"
usage
;;
esac
done
# Cleanup function
cleanup() {
echo -e "${RED}Installation failed. Cleaning up...${NC}"
if [[ -d "$ANSIBLE_DIR" ]]; then
rm -rf "$ANSIBLE_DIR"
fi
if [[ -d "$VENV_DIR" ]]; then
rm -rf "$VENV_DIR"
fi
}
# Set trap for cleanup on error
trap cleanup ERR
# Detect distribution
if [[ ! -f /etc/os-release ]]; then
echo -e "${RED}Cannot detect OS distribution${NC}"
exit 1
fi
. /etc/os-release
case "$ID" in
ubuntu|debian)
PKG_MGR="apt"
PKG_UPDATE="apt update"
PKG_INSTALL="apt install -y"
PYTHON_PKG="python3-pip python3-venv"
;;
almalinux|rocky|centos|rhel|ol|fedora)
PKG_MGR="dnf"
PKG_UPDATE="dnf update -y"
PKG_INSTALL="dnf install -y"
PYTHON_PKG="python3-pip python3-virtualenv"
;;
amzn)
PKG_MGR="yum"
PKG_UPDATE="yum update -y"
PKG_INSTALL="yum install -y"
PYTHON_PKG="python3-pip python3-virtualenv"
;;
*)
echo -e "${RED}Unsupported distribution: $ID${NC}"
exit 1
;;
esac
echo -e "${BLUE}Installing Ansible Dynamic Inventory for Multi-Cloud...${NC}"
# Check if running as regular user
if [[ $EUID -eq 0 ]]; then
echo -e "${RED}This script should not be run as root${NC}"
exit 1
fi
echo -e "${GREEN}[1/7] Updating package manager...${NC}"
sudo $PKG_UPDATE
echo -e "${GREEN}[2/7] Installing system dependencies...${NC}"
sudo $PKG_INSTALL $PYTHON_PKG curl git
# Install ansible if not present
if ! command -v ansible &> /dev/null; then
if [[ "$PKG_MGR" == "apt" ]]; then
sudo $PKG_INSTALL ansible
else
python3 -m pip install --user ansible
fi
fi
echo -e "${GREEN}[3/7] Setting up Python virtual environment...${NC}"
mkdir -p "$VENV_DIR"
python3 -m venv "$VENV_DIR"
source "$VENV_DIR/bin/activate"
echo -e "${GREEN}[4/7] Installing cloud provider Python libraries...${NC}"
pip install --upgrade pip
# Install ansible inside the venv so the SDKs below are importable by its Python
pip install ansible boto3 botocore azure-identity azure-mgmt-compute google-auth google-cloud-compute
echo -e "${GREEN}[5/7] Installing Ansible collections...${NC}"
ansible-galaxy collection install amazon.aws azure.azcollection google.cloud
echo -e "${GREEN}[6/7] Creating Ansible project structure...${NC}"
mkdir -p "$ANSIBLE_DIR"/{inventories,group_vars,host_vars,playbooks}
mkdir -p "$ANSIBLE_DIR/inventories"/{aws,azure,gcp,unified}
# Create AWS inventory configuration
cat > "$ANSIBLE_DIR/inventories/aws/aws_ec2.yml" << 'EOF'
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
  - us-west-2
  - eu-west-1
strict: false
keyed_groups:
  - key: tags
    prefix: tag
  - key: instance_type
    prefix: instance_type
  - key: placement.region
    prefix: aws_region
  - key: state.name
    prefix: state
groups:
  web_servers: "'web' in (tags.Role | default(''))"
  database_servers: "'database' in (tags.Role | default(''))"
  production: "tags.Environment | default('') == 'production'"
  staging: "tags.Environment | default('') == 'staging'"
compose:
  ansible_host: public_ip_address | default(private_ip_address)
  ec2_state: state.name
  ec2_architecture: architecture
  ec2_instance_type: instance_type
  ec2_region: placement.region
  ec2_availability_zone: placement.availability_zone
cache: true
cache_plugin: jsonfile
cache_timeout: 3600
cache_connection: /tmp/ansible-aws-cache
use_contrib_script_compatible_sanitization: true
EOF
# Create Azure inventory configuration
cat > "$ANSIBLE_DIR/inventories/azure/azure_rm.yml" << 'EOF'
plugin: azure.azcollection.azure_rm
include_vm_resource_groups:
  - "*"
authentication_source: auto
strict: false
keyed_groups:
  - key: tags
    prefix: tag
  - key: location
    prefix: location
  - key: resource_group
    prefix: rg
groups:
  web_servers: "'web' in tags.get('role', '')"
  database_servers: "'database' in tags.get('role', '')"
  production: "tags.get('environment') == 'production'"
  staging: "tags.get('environment') == 'staging'"
  linux: "os_disk.os_type == 'Linux'"
  windows: "os_disk.os_type == 'Windows'"
compose:
  ansible_host: public_ips[0] | default(private_ips[0])
  azure_location: location
  azure_resource_group: resource_group
  azure_vm_size: vm_size
  azure_os_type: os_disk.os_type
  azure_power_state: powerstate
cache: true
cache_plugin: jsonfile
cache_timeout: 3600
cache_connection: /tmp/ansible-azure-cache
use_contrib_script_compatible_sanitization: true
EOF
# Create GCP inventory configuration
cat > "$ANSIBLE_DIR/inventories/gcp/gcp_compute.yml" << EOF
plugin: google.cloud.gcp_compute
projects:
  - ${GCP_PROJECT_ID:-your-gcp-project-id}
auth_kind: serviceaccount
service_account_file: ~/.config/gcloud/ansible-sa-key.json
strict: false
keyed_groups:
  - key: labels
    prefix: label
  - key: machineType
    prefix: machine_type
  - key: zone
    prefix: zone
  - key: status
    prefix: status
groups:
  web_servers: "labels.get('role') == 'web'"
  database_servers: "labels.get('role') == 'database'"
  production: "labels.get('environment') == 'production'"
  staging: "labels.get('environment') == 'staging'"
compose:
  ansible_host: networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP)
  gcp_machine_type: machineType.split('/')[-1]
  gcp_zone: zone.split('/')[-1]
  gcp_status: status
  gcp_preemptible: scheduling.preemptible
cache: true
cache_plugin: jsonfile
cache_timeout: 3600
cache_connection: /tmp/ansible-gcp-cache
use_contrib_script_compatible_sanitization: true
EOF
# Create unified inventory configuration
cat > "$ANSIBLE_DIR/inventories/unified/all_clouds.yml" << 'EOF'
plugin: constructed
strict: false
# This file must parse after the per-cloud sources; the constructed plugin
# only sees hosts that earlier inventory sources have already loaded.
compose:
  cloud_provider: "'aws' if ec2_instance_type is defined else ('azure' if azure_vm_size is defined else ('gcp' if gcp_machine_type is defined else 'unknown'))"
keyed_groups:
  - key: cloud_provider
    prefix: cloud
groups:
  all_web_servers: "'web_servers' in group_names"
  all_database_servers: "'database_servers' in group_names"
  all_production: "'production' in group_names"
  all_staging: "'staging' in group_names"
EOF
# Create ansible configuration
cat > "$ANSIBLE_DIR/ansible.cfg" << 'EOF'
[defaults]
host_key_checking = False
# Load the whole tree so per-cloud sources parse before the unified constructed file
inventory = inventories/
remote_user = ec2-user
private_key_file = ~/.ssh/id_rsa
timeout = 30
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible-facts-cache
fact_caching_timeout = 3600
[inventory]
enable_plugins = amazon.aws.aws_ec2, azure.azcollection.azure_rm, google.cloud.gcp_compute, constructed
cache = True
cache_plugin = jsonfile
cache_timeout = 3600
cache_connection = /tmp/ansible-inventory-cache
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
pipelining = True
EOF
# Create activation script
cat > "$ANSIBLE_DIR/activate.sh" << EOF
#!/usr/bin/env bash
source "$VENV_DIR/bin/activate"
export ANSIBLE_CONFIG="$ANSIBLE_DIR/ansible.cfg"
cd "$ANSIBLE_DIR"
echo "Ansible multi-cloud environment activated!"
echo "Current directory: \$(pwd)"
echo "Available commands:"
echo " ansible-inventory --list"
echo " ansible-inventory --graph"
echo " ansible all -m ping"
EOF
# Set proper permissions (directories traversable, files read-only, scripts executable)
find "$ANSIBLE_DIR" -type d -exec chmod 755 {} \;
find "$ANSIBLE_DIR" -type f -exec chmod 644 {} \;
chmod 755 "$ANSIBLE_DIR/activate.sh"
echo -e "${GREEN}[7/7] Verifying installation...${NC}"
# Test Ansible installation
if ! { source "$VENV_DIR/bin/activate" && ansible --version &> /dev/null; }; then
echo -e "${RED}Ansible installation verification failed${NC}"
exit 1
fi
# Test collections
source "$VENV_DIR/bin/activate"
ansible-galaxy collection list | grep -E "(amazon.aws|azure.azcollection|google.cloud)" > /dev/null || {
echo -e "${RED}Required collections not installed${NC}"
exit 1
}
echo -e "${GREEN}Installation completed successfully!${NC}"
echo -e "${BLUE}Next steps:${NC}"
echo "1. Configure cloud credentials:"
echo " - AWS: aws configure or set AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY"
echo " - Azure: az login or set service principal credentials"
echo " - GCP: gcloud auth application-default login"
echo ""
echo "2. Activate the environment:"
echo " source $ANSIBLE_DIR/activate.sh"
echo ""
echo "3. Test inventory:"
echo " ansible-inventory --list"
echo ""
if [[ -n "$GCP_PROJECT_ID" ]]; then
echo -e "${YELLOW}GCP project ID set to: $GCP_PROJECT_ID${NC}"
else
echo -e "${YELLOW}Remember to update GCP project ID in: $ANSIBLE_DIR/inventories/gcp/gcp_compute.yml${NC}"
fi
Review the script before running. Execute with: bash install.sh