Azure VM Deployment Guide
Step-by-step instructions for deploying the MOP Automation Platform on an Azure Virtual Machine within the Hub VNet.
One-Click Installer Package
Download everything you need in a single file. The installer package includes all application files, playbooks, templates, configs, documentation, and a setup script that does the entire deployment for you.
Quick Install (4 Commands)
# 1. Upload the installer to your Azure VM
scp mop-platform-installer-*.tar.gz azureuser@<vm-ip>:~/
# 2. SSH into the VM
ssh azureuser@<vm-ip>
# 3. Extract the package
tar -xzf mop-platform-installer-*.tar.gz
# 4. Run the installer (does everything)
cd mop-platform-installer && sudo bash setup.sh
The installer script handles:
- Installs the application to /opt/mop-platform/
- Creates a Python venv with all dependencies
- Generates a secure session secret
- Creates the mount point at /mnt/mopsets
- Installs a systemd service (auto-start on boot)
- Configures the Nginx reverse proxy with route-based forwarding
- Opens firewall rules (HTTP/HTTPS)
- Installs shell aliases (mopstatus, mophealth, moplive, etc.)
When it finishes, run mophealth to verify everything is running.
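The guide references the installer's shell aliases without showing their definitions. A hypothetical sketch of what setup.sh might install (all definitions are assumptions, written to a local file for review rather than straight into ~/.bashrc):

```shell
# Hypothetical sketch of the installer's aliases (names from this guide,
# definitions assumed). Review the file, then append it to ~/.bashrc.
cat > mop-aliases.sh <<'EOF'
alias mopstatus='sudo systemctl status mop-platform --no-pager'
alias mophealth='curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:5000/'
alias moplive='sudo journalctl -u mop-platform -f'
EOF
echo "wrote $(wc -l < mop-aliases.sh) alias definitions to mop-aliases.sh"
```

The real setup.sh may define these differently; treat this as a template for adding your own shortcuts.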
Deployment Architecture
The MOP Automation Platform runs on a single Azure VM inside the Hub VNet with direct VNet peering to all regional spoke networks.
┌──────────────────────────────────────────────────────┐
│ Hub VNet (Azure) │
│ │
│ ┌────────────────────────────────────────────┐ │
│ │ MOP Platform VM (RHEL 9) │ │
│ │ │ │
│ │ ┌───────────┐ ┌───────────────────┐ │ │
│ │ │ Nginx │──▶│ Next.js (3000) │ │ │
│ │ │ (80/443) │ │ Frontend │ │ │
│ │ │ │──▶│ Gunicorn (5000) │ │ │
│ │ │ │ │ Flask Backend │ │ │
│ │ └───────────┘ └───────────────────┘ │ │
│ │ │ │ │
│ │ ┌───────────┴───────────┐ │ │
│ │ │ │ │ │
│ │ ┌──────────▼──┐ ┌────────────────▼┐ │ │
│ │ │ Ansible CLI │ │ /mnt/mopsets │ │ │
│ │ │ (playbooks) │ │ (Azure Files │ │ │
│ │ └──────┬──────┘ │ or NFS share) │ │ │
│ │ │ └─────────────────┘ │ │
│ └──────────┼────────────────────────────────┘ │
│ │ │
└──────────────┼─────────────────────────────────────┘
│ VNet Peering / SSH
┌─────────────────┼──────────────────────────┐
│ │ │
┌──────▼──────┐ ┌───────▼──────┐ ┌───────────────▼───┐
│ eus2 Spoke │ │ wus2 Spoke │ │ wus3/scus/LEA │
│ (Targets) │ │ (Targets) │ │ Spokes │
└─────────────┘ └──────────────┘ └───────────────────┘
Recommended VM Specs
- Standard_D2s_v3 or larger
- RHEL 9
- 2 vCPU, 8 GB RAM, 64 GB disk
- Hub VNet with peering to all spokes
1. Prerequisites
Access Requirements
- Azure subscription with Contributor role on the target resource group
- SSH key pair for VM authentication
- Azure DevOps PAT tokens for each regional organization (eus2, wus2, wus3, scus, eus2lea, wus2lea)
- Network team approval for Hub VNet subnet allocation
Tools Required (on your workstation)
- az — Azure CLI (2.50+)
- ssh — SSH client
- scp — for file transfer (or use Azure Files)
- git — to clone the application repository
2. Azure VM Provisioning
Option A: Azure CLI
# Set variables
RESOURCE_GROUP="rg-mop-platform"
VM_NAME="vm-mop-platform"
LOCATION="eastus2"
VNET_NAME="vnet-hub"
SUBNET_NAME="snet-mop-platform"
IMAGE="RedHat:RHEL:9-lvm-gen2:latest"
# Create resource group (if needed)
az group create --name $RESOURCE_GROUP --location $LOCATION
# Create subnet in the Hub VNet (coordinate with network team)
az network vnet subnet create \
--resource-group $RESOURCE_GROUP \
--vnet-name $VNET_NAME \
--name $SUBNET_NAME \
--address-prefix 10.0.10.0/24
# Create the VM
az vm create \
--resource-group $RESOURCE_GROUP \
--name $VM_NAME \
--image $IMAGE \
--size Standard_D2s_v3 \
--vnet-name $VNET_NAME \
--subnet $SUBNET_NAME \
--admin-username azureuser \
--ssh-key-values ~/.ssh/id_rsa.pub \
--os-disk-size-gb 64 \
--public-ip-address "" \
--nsg ""
# Note: No public IP assigned - access via VPN/Bastion/jumpbox
Option B: Azure Portal
- Navigate to Virtual Machines → Create
- Select Red Hat Enterprise Linux 9 image
- Size: Standard_D2s_v3 (2 vCPU, 8 GB RAM)
- Authentication: SSH public key
- Networking: Select Hub VNet and the designated subnet
- Public IP: None (internal access only)
- OS Disk: 64 GB Premium SSD
- Review + Create
Sizing: Standard_D2s_v3 (2 vCPU, 8 GB RAM) is sufficient for most deployments.
If running concurrent Ansible playbooks across many hosts, consider Standard_D4s_v3 (4 vCPU, 16 GB RAM).
3. Networking & VNet Peering
The MOP Platform VM must have network connectivity to all target hosts in the six regional spoke VNets. This is achieved through Hub-Spoke VNet peering.
Required VNet Peerings
| Region | Spoke VNet | ADO Organization | Purpose |
|---|---|---|---|
| East US 2 | vnet-eus2-spoke | eus2 | Production |
| West US 2 | vnet-wus2-spoke | wus2 | Production |
| West US 3 | vnet-wus3-spoke | wus3 | Production |
| South Central US | vnet-scus-spoke | scus | Production |
| East US 2 LEA | vnet-eus2lea-spoke | eus2lea | Early Access |
| West US 2 LEA | vnet-wus2lea-spoke | wus2lea | Early Access |
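The peerings in the table are assumed to already exist. If any are missing, they can be created with the Azure CLI; the sketch below only prints the commands for review. The hub resource group and VNet names are assumptions, spokes in a different resource group or subscription need the full resource ID for --remote-vnet, and each spoke also needs a matching spoke-to-hub peering:

```shell
# Print (not run) the hub-to-spoke peering commands for each spoke in the table.
# HUB_RG and HUB_VNET are assumed names - substitute your own.
HUB_RG="rg-hub-network"
HUB_VNET="vnet-hub"
COUNT=0
for SPOKE in vnet-eus2-spoke vnet-wus2-spoke vnet-wus3-spoke \
             vnet-scus-spoke vnet-eus2lea-spoke vnet-wus2lea-spoke; do
  echo "az network vnet peering create --name hub-to-${SPOKE}" \
       "--resource-group ${HUB_RG} --vnet-name ${HUB_VNET}" \
       "--remote-vnet ${SPOKE} --allow-vnet-access"
  COUNT=$((COUNT+1))
done
echo "# ${COUNT} peering commands generated (plus ${COUNT} reverse peerings needed)"
```

Review the output, fix the names, then run each command (and its spoke-to-hub counterpart) explicitly.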
Verify Peering Connectivity
# From the MOP Platform VM, verify connectivity to each spoke
# Replace with actual target host IPs in each region
# Test SSH connectivity to regional targets
ssh -o ConnectTimeout=5 azureuser@10.1.0.10 hostname # eus2
ssh -o ConnectTimeout=5 azureuser@10.2.0.10 hostname # wus2
ssh -o ConnectTimeout=5 azureuser@10.3.0.10 hostname # wus3
ssh -o ConnectTimeout=5 azureuser@10.4.0.10 hostname # scus
ssh -o ConnectTimeout=5 azureuser@10.5.0.10 hostname # eus2lea
ssh -o ConnectTimeout=5 azureuser@10.6.0.10 hostname # wus2lea
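The per-region checks above can be wrapped in a loop that summarizes reachability (the IPs are the sample addresses from this guide; BatchMode prevents a password prompt from hanging the loop):

```shell
# Summarize SSH reachability of one test host per spoke.
# IPs are samples from this guide - replace with real target hosts.
TOTAL=0
FAILED=0
for ENTRY in eus2:10.1.0.10 wus2:10.2.0.10 wus3:10.3.0.10 \
             scus:10.4.0.10 eus2lea:10.5.0.10 wus2lea:10.6.0.10; do
  REGION=${ENTRY%%:*}
  IP=${ENTRY#*:}
  TOTAL=$((TOTAL+1))
  if timeout 5 ssh -o ConnectTimeout=3 -o BatchMode=yes "azureuser@${IP}" hostname 2>/dev/null; then
    echo "${REGION}: ok"
  else
    echo "${REGION}: UNREACHABLE (${IP})"
    FAILED=$((FAILED+1))
  fi
done
echo "${FAILED} of ${TOTAL} regions unreachable"
```

Any UNREACHABLE region points to a missing peering, an NSG rule, or an undistributed SSH key (see sections 3 and 6).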
NSG Rules Required
| Direction | Port | Protocol | Source/Dest | Purpose |
|---|---|---|---|---|
| Inbound | 443 | TCP | Corporate VPN / Bastion | HTTPS access to web UI |
| Inbound | 22 | TCP | Bastion / Jumpbox | SSH administration |
| Outbound | 22 | TCP | All spoke VNets | Ansible SSH to targets |
| Outbound | 443 | TCP | dev.azure.com | Azure DevOps API access |
| Outbound | 445 | TCP | Azure Files | SMB mount for MOP source files |
4. Operating System Setup
SSH into the VM and run the following (RHEL 9):
# Update system packages
sudo dnf update -y
# Enable EPEL repository (for extra packages)
sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
# Install required system packages
sudo dnf install -y \
python3.11 \
python3.11-pip \
python3.11-devel \
git \
nginx \
ansible-core \
sshpass \
jq \
cifs-utils \
nfs-utils \
unzip \
tar \
openssl
# Set Python 3.11 as default (if not already)
sudo alternatives --set python3 /usr/bin/python3.11
# Verify Python version (3.11+ required)
python3 --version
# Verify Ansible is installed
ansible --version
# Enable and start firewalld (RHEL default firewall)
sudo systemctl enable --now firewalld
# Set SELinux to allow Nginx to proxy to backend ports
sudo setsebool -P httpd_can_network_connect 1
# Create application user (optional, recommended for production)
sudo useradd -m -s /bin/bash mopuser
sudo usermod -aG wheel mopuser
RHEL 9 uses dnf as the package manager and firewalld instead of UFW, and SELinux is enabled by default. The setsebool command above is required for Nginx to proxy to Gunicorn.
5. Application Installation
# Clone or copy the application to the server
cd /opt
sudo mkdir mop-platform
sudo chown azureuser:azureuser mop-platform
cd mop-platform
# Option 1: Clone from git repository
git clone <your-repo-url> .
# Option 2: SCP the files from your workstation
# scp -r ./mop-platform/* azureuser@<vm-ip>:/opt/mop-platform/
# Create Python virtual environment
python3 -m venv venv
source venv/bin/activate
# Install Python dependencies
pip install --upgrade pip
pip install flask flask-sqlalchemy gunicorn pyyaml jinja2 requests \
psycopg2-binary email-validator
# Or if a requirements.txt or pyproject.toml is provided:
# pip install -r requirements.txt
# pip install .
# Create required directories
mkdir -p logs vars playbooks uploads/gz mops rendered prerendered configs archive
# Generate a secret key and save it
SECRET_KEY=$(python3 -c "import secrets; print(secrets.token_hex(32))")
echo "SESSION_SECRET=\"${SECRET_KEY}\"" | sudo tee -a /etc/environment
# Load into current session
export SESSION_SECRET="${SECRET_KEY}"
# Test the application starts (Flask backend on port 5000)
gunicorn --bind 127.0.0.1:5000 --workers 3 main:app
# Press Ctrl+C to stop after verifying
# Note: Next.js frontend runs on port 3000
# Nginx reverse-proxies both: /api/* → 5000, / → 3000
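The flags passed to Gunicorn above (and repeated later in the systemd unit) can also live in a config file. A sketch, with the filename assumed and values mirroring this guide; all keys are standard Gunicorn settings:

```shell
# Optional: keep Gunicorn settings in a config file instead of CLI flags.
# Filename is an assumption; keys are standard Gunicorn settings.
cat > gunicorn.conf.py <<'EOF'
bind = "127.0.0.1:5000"
workers = 3
timeout = 300
accesslog = "logs/access.log"
errorlog = "logs/error.log"
capture_output = True
EOF
echo "created gunicorn.conf.py"
# Then start with:  gunicorn -c gunicorn.conf.py main:app
```

This keeps the systemd ExecStart line short and makes the settings diffable under version control.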
Application Directory Structure
/opt/mop-platform/
├── main.py # Application entry point
├── app.py # Flask application
├── renderer.py # MOP rendering engine
├── executor.py # Ansible execution engine
├── config_manager.py # Configuration management
├── gz_processor.py # GZ file processing pipeline
├── mop_classifier.py # MOP auto-classification
├── configs/ # JSON configuration files
│ ├── system_config.json # System settings
│ └── prerender_map.json # Prerender text map
├── static/ # Locally bundled front-end assets
│ ├── css/ # Bootstrap, Font Awesome, CodeMirror CSS
│ ├── js/ # Bootstrap, CodeMirror JS bundles
│ └── webfonts/ # Font Awesome icon fonts
├── templates/ # Flask HTML templates + vendor J2 MOPs
│ └── vendor/ # Extracted vendor MOP versions
├── playbooks/ # Ansible playbooks
├── vars/ # Regional variable files (YAML)
├── logs/ # Execution and system logs
├── uploads/gz/ # Uploaded GZ files
├── rendered/ # Final rendered MOPs (markdown)
├── mops/ # Vendor MOP sets (J2 templates modified in-place by pre-render)
└── inventory/ # Ansible inventory files
└── azure_regions.yml # Regional host inventory
6. Ansible Configuration
SSH Key Distribution
# Generate SSH key for Ansible (if not using existing key)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/azure_rsa -N "" -C "mop-platform-ansible"
# Distribute the public key to all target hosts in each region
# This must be done for every host Ansible will manage
for HOST in 10.1.0.10 10.2.0.10 10.3.0.10 10.4.0.10 10.5.0.10 10.6.0.10; do
ssh-copy-id -i ~/.ssh/azure_rsa.pub azureuser@$HOST
done
# Test connectivity with Ansible (requires the inventory file created below)
ansible all -i inventory/azure_regions.yml -m ping
Ansible Configuration File
# Create /opt/mop-platform/ansible.cfg
cat <<'EOF' > /opt/mop-platform/ansible.cfg
[defaults]
inventory = inventory/azure_regions.yml
remote_user = azureuser
private_key_file = ~/.ssh/azure_rsa
host_key_checking = False
timeout = 300
forks = 5
stdout_callback = json
retry_files_enabled = False
log_path = logs/ansible/ansible.log
[privilege_escalation]
become = True
become_method = sudo
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no
EOF
Regional Inventory File
# Create inventory/azure_regions.yml
# Update host IPs and connection details for your environment
cat <<'EOF' > inventory/azure_regions.yml
all:
children:
eus2:
hosts:
eus2-host-01:
ansible_host: 10.1.0.10
eus2-host-02:
ansible_host: 10.1.0.11
vars:
azure_region: eastus2
ado_org: eus2
wus2:
hosts:
wus2-host-01:
ansible_host: 10.2.0.10
vars:
azure_region: westus2
ado_org: wus2
wus3:
hosts:
wus3-host-01:
ansible_host: 10.3.0.10
vars:
azure_region: westus3
ado_org: wus3
scus:
hosts:
scus-host-01:
ansible_host: 10.4.0.10
vars:
azure_region: southcentralus
ado_org: scus
eus2lea:
hosts:
eus2lea-host-01:
ansible_host: 10.5.0.10
vars:
azure_region: eastus2
ado_org: eus2lea
is_lea: true
wus2lea:
hosts:
wus2lea-host-01:
ansible_host: 10.6.0.10
vars:
azure_region: westus2
ado_org: wus2lea
is_lea: true
EOF
7. MOP Source Drive / Mount Point
The MOP source drive is where vendor GZ files are stored. This is typically an Azure Files share or NFS mount that the vendor or team uploads MOP sets to.
Option A: Azure Files (SMB) Mount
# Create the mount point directory
sudo mkdir -p /mnt/mopsets
# Create credentials file for Azure Files
sudo mkdir -p /etc/smbcredentials
cat <<EOF | sudo tee /etc/smbcredentials/mopfiles.cred
username=<storage-account-name>
password=<storage-account-key>
EOF
sudo chmod 600 /etc/smbcredentials/mopfiles.cred
# Add to /etc/fstab for persistent mount
echo "//<storage-account>.file.core.windows.net/<share-name> /mnt/mopsets cifs nofail,credentials=/etc/smbcredentials/mopfiles.cred,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30" | sudo tee -a /etc/fstab
# Mount immediately
sudo mount -a
# Verify mount
df -h /mnt/mopsets
ls -la /mnt/mopsets
Option B: NFS Mount
# Create mount point
sudo mkdir -p /mnt/mopsets
# Add NFS mount to fstab
echo "<nfs-server-ip>:/exported/path /mnt/mopsets nfs defaults,nofail 0 0" | sudo tee -a /etc/fstab
# Mount
sudo mount -a
df -h /mnt/mopsets
Configure in the Application
After the mount is set up, configure the path in the web UI:
- Go to Admin → MOP Source tab
- Set Mount Point / Source Path to /mnt/mopsets (or your mount path)
- Enable/disable PC File Upload as needed
- Set Max Upload Size (default 500 MB)
- Click Save
8. Systemd Service (Auto-Start)
Create a systemd service so the application starts automatically on boot and restarts on failure.
# Create the systemd service file
sudo tee /etc/systemd/system/mop-platform.service > /dev/null <<'EOF'
[Unit]
Description=MOP Automation Platform
After=network.target
Wants=network-online.target
[Service]
Type=exec
User=azureuser
Group=azureuser
WorkingDirectory=/opt/mop-platform
Environment="PATH=/opt/mop-platform/venv/bin:/usr/local/bin:/usr/bin:/bin"
Environment="SESSION_SECRET=<your-generated-secret-key>"
ExecStart=/opt/mop-platform/venv/bin/gunicorn \
--bind 127.0.0.1:5000 \
--workers 3 \
--timeout 300 \
--access-logfile logs/access.log \
--error-logfile logs/error.log \
--capture-output \
main:app
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
# Reload systemd, enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable mop-platform
sudo systemctl start mop-platform
# Check status
sudo systemctl status mop-platform
# View logs
sudo journalctl -u mop-platform -f
9. Nginx Reverse Proxy
Nginx sits in front of both services to handle TLS termination, routing, static files, and connection buffering. The frontend (Next.js) runs on port 3000 and the backend (Flask/Gunicorn) runs on port 5000.
# RHEL 9 uses /etc/nginx/conf.d/ (not sites-available/sites-enabled)
# Remove the default config if present; note the stock default server
# block may also live directly in /etc/nginx/nginx.conf on RHEL
sudo rm -f /etc/nginx/conf.d/default.conf
# Create Nginx site configuration
sudo tee /etc/nginx/conf.d/mop-platform.conf > /dev/null <<'EOF'
upstream frontend {
server 127.0.0.1:3000;
}
upstream backend {
server 127.0.0.1:5000;
}
server {
listen 80;
server_name mop-platform.internal.yourcompany.com;
# Redirect HTTP to HTTPS (uncomment when SSL is configured)
# return 301 https://$server_name$request_uri;
# Support large GZ file uploads
client_max_body_size 500M;
# Next.js frontend routes (port 3000)
# Add specific Next.js page paths here as the frontend grows
location /_next/ {
proxy_pass http://frontend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /nextjs/ {
proxy_pass http://frontend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Static files served directly by Nginx
location /static {
alias /opt/mop-platform/static;
expires 7d;
add_header Cache-Control "public, immutable";
}
# Flask backend - catch-all (port 5000)
# All routes go to Flask by default: /admin, /mops, /logs,
# /execute, /api, /releases, /scheduler, /deployment-guide, etc.
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;
}
}
EOF
# Test nginx configuration
sudo nginx -t
# Enable and restart nginx
sudo systemctl enable --now nginx
sudo systemctl restart nginx
10. SSL/TLS Certificate Setup
Option A: Internal CA / Corporate Certificate
# Place your certificate files
sudo mkdir -p /etc/nginx/ssl
sudo cp your-cert.pem /etc/nginx/ssl/mop-platform.crt
sudo cp your-key.pem /etc/nginx/ssl/mop-platform.key
sudo chmod 600 /etc/nginx/ssl/mop-platform.key
# Update Nginx config - add SSL server block
# Edit /etc/nginx/conf.d/mop-platform.conf
# Replace the listen 80 server block with:
sudo tee /etc/nginx/conf.d/mop-platform-ssl.conf > /dev/null <<'EOF'
server {
listen 80;
server_name mop-platform.internal.yourcompany.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name mop-platform.internal.yourcompany.com;
ssl_certificate /etc/nginx/ssl/mop-platform.crt;
ssl_certificate_key /etc/nginx/ssl/mop-platform.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
client_max_body_size 500M;
# Next.js frontend routes (port 3000)
location /_next/ {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /nextjs/ {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /static {
alias /opt/mop-platform/static;
expires 7d;
}
# Flask backend - catch-all (port 5000)
location / {
proxy_pass http://127.0.0.1:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 300;
}
}
EOF
# Remove the HTTP-only config if using SSL
sudo rm -f /etc/nginx/conf.d/mop-platform.conf
sudo nginx -t && sudo systemctl reload nginx
Option B: Self-Signed (Development/Testing Only)
# Generate self-signed certificate (create the ssl directory if Option A was skipped)
sudo mkdir -p /etc/nginx/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/nginx/ssl/mop-platform.key \
-out /etc/nginx/ssl/mop-platform.crt \
-subj "/C=US/ST=State/L=City/O=Company/CN=mop-platform.internal"
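Whichever option you use, inspect the certificate before pointing Nginx at it. The sketch below demonstrates on a throwaway self-signed cert generated in the current directory; run the same openssl x509 commands against /etc/nginx/ssl/mop-platform.crt:

```shell
# Generate a throwaway cert locally, then inspect subject and expiry.
# Run the two x509 commands against the real cert path before reloading Nginx.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout demo.key -out demo.crt \
  -subj "/CN=mop-platform.internal" 2>/dev/null
SUBJECT=$(openssl x509 -in demo.crt -noout -subject)
ENDDATE=$(openssl x509 -in demo.crt -noout -enddate)
echo "$SUBJECT"
echo "$ENDDATE"
```

A subject that does not match the server_name in the Nginx config will trigger browser certificate warnings even with a valid CA chain.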
11. Host Firewall (firewalld)
RHEL 9 uses firewalld as the default firewall manager.
# Verify firewalld is running
sudo systemctl status firewalld
# Allow SSH (should already be enabled by default)
sudo firewall-cmd --permanent --add-service=ssh
# Allow HTTP and HTTPS
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
# Restrict SSH to specific source IPs (optional, recommended)
# Remove the general SSH rule and add a rich rule for bastion/VPN only
# sudo firewall-cmd --permanent --remove-service=ssh
# sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.0.0/16" service name="ssh" accept'
# Reload to apply changes
sudo firewall-cmd --reload
# Verify active rules
sudo firewall-cmd --list-all
# Optional: open ports 3000/5000 directly ONLY for temporary debugging.
# Normal access should go through Nginx on 80/443 (Gunicorn binds 127.0.0.1 anyway).
# sudo firewall-cmd --permanent --add-port=3000/tcp --add-port=5000/tcp && sudo firewall-cmd --reload
12. Post-Deployment Validation Checklist
# 1. Verify the backend is running
sudo systemctl status mop-platform
curl -s http://127.0.0.1:5000/ | head -20
# 2. Verify the frontend is running
curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:3000/
# Should return 200
# 3. Verify Nginx is proxying correctly
curl -s -o /dev/null -w "%{http_code}" http://localhost/
# Should return 200
# 4. Verify mount point is accessible
ls -la /mnt/mopsets
# Should list GZ files if any are present
# 5. Verify Ansible can reach targets
source /opt/mop-platform/venv/bin/activate
cd /opt/mop-platform
ansible all -i inventory/azure_regions.yml -m ping
# 6. Check application logs
tail -f /opt/mop-platform/logs/error.log
sudo journalctl -u mop-platform --since "5 minutes ago"
# 7. Verify from browser
# Navigate to https://mop-platform.internal.yourcompany.com
# Check: Dashboard loads, Admin page accessible, Mount status green
Validation Checklist
| Check | How to Verify | Expected Result |
|---|---|---|
| App running | systemctl status mop-platform | Active (running) |
| Web UI loads | Browser: https://<hostname>/ | Dashboard page |
| Admin page | Browser: https://<hostname>/admin | Admin tabs visible |
| Mount accessible | Admin → System Status | Mount Status: green |
| Ansible ping | ansible all -m ping | pong from all hosts |
| GZ import | Admin → Vendor GZ Files → Import from mount | Templates extracted |
| Pre-render | Admin → Vendor GZ Files → Pre-Render button | Templates processed |
| Logs accessible | Browser: https://<hostname>/logs | Log dashboard loads |
13. Ongoing Maintenance
Application Updates
# Pull latest code
cd /opt/mop-platform
git pull origin main
# Update dependencies
source venv/bin/activate
pip install -r requirements.txt
# Restart service
sudo systemctl restart mop-platform
Log Rotation
# Create logrotate config
sudo tee /etc/logrotate.d/mop-platform > /dev/null <<'EOF'
/opt/mop-platform/logs/*.log {
    daily
    missingok
    rotate 30
    compress
    delaycompress
    notifempty
    # copytruncate lets Gunicorn keep writing to the same file descriptors;
    # the unit file defines no ExecReload, so "systemctl reload" would fail here
    copytruncate
}
EOF
Security Maintenance
# Regular OS updates (schedule monthly)
sudo dnf update -y
# Rotate the SESSION_SECRET periodically
export NEW_SECRET=$(python3 -c "import secrets; print(secrets.token_hex(32))")
# Update in /etc/systemd/system/mop-platform.service
sudo systemctl daemon-reload && sudo systemctl restart mop-platform
# Rotate SSH keys annually
ssh-keygen -t rsa -b 4096 -f ~/.ssh/azure_rsa_new -N ""
# Distribute new key to all target hosts, then remove old key
# Review Ansible Vault passwords
# Rotate Azure DevOps PAT tokens before expiry
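The manual SESSION_SECRET edit above can be scripted with sed. A sketch, demonstrated on a scratch copy so nothing is touched until you point the same command (with sudo) at /etc/systemd/system/mop-platform.service:

```shell
# Rotate SESSION_SECRET in a unit file, shown on a scratch copy.
# Against the real unit: run the sed with sudo, then daemon-reload + restart.
NEW_SECRET=$(python3 -c "import secrets; print(secrets.token_hex(32))")
printf 'Environment="SESSION_SECRET=old-secret-value"\n' > unit.scratch
sed -i "s|SESSION_SECRET=[^\"]*|SESSION_SECRET=${NEW_SECRET}|" unit.scratch
grep SESSION_SECRET unit.scratch
```

After editing the real unit, apply with: sudo systemctl daemon-reload && sudo systemctl restart mop-platform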
Built-in Security Hardening
The platform includes the following security measures. These are already applied in the codebase and require no additional configuration:
| Security Measure | What It Does | Files Affected |
|---|---|---|
| XSS Prevention | All user-controlled and API-returned data is sanitized with escapeHtml() before rendering in HTML to prevent cross-site scripting attacks. | scheduler.html, archive_dashboard.html, mop_detail.html, releases.html, documentation.html, logs_dashboard.html, api_demo.html |
| Playbook Path Validation | The Ansible executor validates all playbook paths before execution, ensuring they are within the allowed playbooks/ directory and have valid file extensions. This prevents path traversal and arbitrary file execution. | executor.py |
| Vault-Managed Credentials | All sensitive values (API tokens, PAT tokens, Slack webhooks, service principal secrets) use Ansible Vault variable references ({{ vault_* }}) instead of hardcoded values. | playbooks/*.yml, vars/*.yml, docs/releases/**/*.md |
| Locally Bundled Assets | CSS, JavaScript, and font files (Bootstrap, Font Awesome, CodeMirror) are bundled locally in static/ instead of loaded from external CDNs. This ensures the UI works on restricted corporate networks and eliminates external dependency risks. | static/css/, static/js/, static/webfonts/, all template files |
| Server-Side Token Handling | Authentication tokens for external services (e.g., wiki API) are handled server-side through configuration, never exposed in client-side JavaScript. | docs_renderer.py, documentation.html |
Disk Space Monitoring
# Check disk usage
df -h /opt/mop-platform
# Clean old rendered MOPs (if disk is getting full)
# Review and remove old versions from rendered/ and mops/
ls -la /opt/mop-platform/rendered/
ls -la /opt/mop-platform/mops/
# Clean old GZ uploads
ls -la /opt/mop-platform/uploads/gz/
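The manual review above can start with a quick count of stale uploads. A dry-run sketch (the path is from this guide; errors are suppressed so it is safe to run even before the directory exists):

```shell
# Count GZ uploads older than 30 days before deciding what to prune.
UPLOAD_DIR="/opt/mop-platform/uploads/gz"
OLD_COUNT=$(find "$UPLOAD_DIR" -name '*.gz' -mtime +30 -type f 2>/dev/null | wc -l)
echo "GZ files older than 30 days: $OLD_COUNT"
# After reviewing the matching files (-print), delete with:
#   find "$UPLOAD_DIR" -name '*.gz' -mtime +30 -type f -delete
```

Keep the delete step commented until you have confirmed the list; imported GZ files may still be referenced by rendered MOPs.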
14. Troubleshooting
Service Won't Start
# Check service logs
sudo journalctl -u mop-platform -n 50 --no-pager
# Test manually
cd /opt/mop-platform
source venv/bin/activate
python main.py
# Look for import errors or missing dependencies
# Common fix: missing SESSION_SECRET
export SESSION_SECRET=$(python3 -c "import secrets; print(secrets.token_hex(32))")
Mount Point Not Accessible
# Check if mount is active
mount | grep mopsets
df -h /mnt/mopsets
# Re-mount
sudo mount -a
# Check credentials (Azure Files) - the file is root-owned with mode 600
sudo cat /etc/smbcredentials/mopfiles.cred
# Check NSG allows port 445 outbound (Azure Files SMB)
# Check file permissions
ls -la /mnt/mopsets
Ansible Cannot Reach Targets
# Test basic connectivity
ping -c 3 <target-host-ip>
# Test SSH directly
ssh -v -i ~/.ssh/azure_rsa azureuser@<target-ip>
# Check VNet peering is active
az network vnet peering list \
--resource-group $RESOURCE_GROUP \
--vnet-name $VNET_NAME \
-o table
# Check NSG rules allow SSH outbound
az network nsg rule list --nsg-name <nsg-name> -g $RESOURCE_GROUP -o table
# Verify Ansible inventory
ansible-inventory -i inventory/azure_regions.yml --list
Web UI Returns 502 Bad Gateway
# Gunicorn may not be running
sudo systemctl status mop-platform
# Check if ports 3000 and 5000 are listening
ss -tlnp | grep -E '3000|5000'
# Check Nginx error log
sudo tail -20 /var/log/nginx/error.log
# Restart both services
sudo systemctl restart mop-platform
sudo systemctl restart nginx
GZ Upload Fails or Times Out
# Check Nginx client_max_body_size
grep client_max_body_size /etc/nginx/conf.d/mop-platform.conf
# Should be: client_max_body_size 500M;
# Check Gunicorn timeout
grep timeout /etc/systemd/system/mop-platform.service
# Should be: --timeout 300
# Check disk space
df -h /opt/mop-platform/uploads/gz/
# Check upload directory permissions
ls -la /opt/mop-platform/uploads/gz/
Support Contacts
- Application issues, configuration, MOP processing
- VNet peering, NSG rules, connectivity
- VM provisioning, Azure Files, DNS
- Certificates, PAT tokens, SSH keys