Zero-Cost Website Monitoring: Creating Your Own Bash Script Solution

engineering November 20, 2024

Rez Moss

@rezmos1

Website monitoring doesn't have to be expensive or complex. With basic Linux tools and bash scripting, you can create a robust monitoring solution that costs nothing beyond your existing infrastructure. This guide will walk you through creating a comprehensive website monitoring system using bash scripts.

Prerequisites

  • Linux server with bash shell
  • Basic command-line tools such as curl, mailx, openssl, bc, and ping (a quick availability check follows this list)
  • Basic understanding of cron jobs
  • SSH access to your server
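
Before writing any scripts, it helps to confirm that the tools used throughout this guide are installed. Package names vary by distribution, so treat this as a quick sanity check rather than a definitive setup step:

#!/bin/bash

# Report any command used in this guide that is missing from the system
for cmd in curl mail openssl bc ping awk; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
        echo "Missing required command: $cmd" >&2
    fi
done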

Part 1: Basic HTTP Status Monitoring

Setting Up the Base Script

#!/bin/bash

# Configuration
WEBSITES=(
    "https://example.com"
    "https://example.org"
    "https://example.net"
)
LOG_FILE="/var/log/website_monitor.log"
ALERT_EMAIL="admin@example.com"

# Create log directory if it doesn't exist
mkdir -p "$(dirname "$LOG_FILE")"

# Timestamp function
timestamp() {
    date '+%Y-%m-%d %H:%M:%S'
}

# Check website function
check_website() {
    local site=$1
    # Single curl call (with a timeout) so the status code and timing come from the same request
    local result=$(curl -s -o /dev/null --max-time 10 -w "%{http_code} %{time_total}" "$site")
    local response_code=${result%% *}
    local response_time=${result##* }

    echo "$(timestamp) | $site | Status: $response_code | Response Time: ${response_time}s" >> "$LOG_FILE"

    if [ "$response_code" != "200" ]; then
        send_alert "$site" "$response_code" "$response_time"
    fi
}

# Alert function
send_alert() {
    local site=$1
    local code=$2
    local time=$3
    echo "Alert: $site is down (Status Code: $code, Response Time: ${time}s)" | \
        mail -s "Website Down Alert - $site" "$ALERT_EMAIL"
}

# Main loop
for site in "${WEBSITES[@]}"; do
    check_website "$site"
done
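
With the script saved somewhere like /usr/local/bin/monitor.sh (any path works; substitute your own in the cron examples later), a quick manual run confirms that the checks and logging behave before you automate anything:

chmod +x /usr/local/bin/monitor.sh
/usr/local/bin/monitor.sh
tail -n 5 /var/log/website_monitor.log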

Part 2: Advanced Features Implementation

Adding SSL Certificate Monitoring

check_ssl_expiry() {
    local domain=$(echo "$1" | sed -e 's|^https\?://||' -e 's|/.*$||')
    local expiry_date=$(openssl s_client -connect "$domain":443 -servername "$domain" \
        </dev/null 2>/dev/null | openssl x509 -noout -enddate | cut -d= -f2)

    # Bail out if the certificate could not be retrieved
    if [ -z "$expiry_date" ]; then
        echo "$(timestamp) | $domain | SSL check failed: could not retrieve certificate" >> "$LOG_FILE"
        return 1
    fi

    local expiry_epoch=$(date -d "$expiry_date" +%s)
    local current_epoch=$(date +%s)
    local days_until_expiry=$(( (expiry_epoch - current_epoch) / 86400 ))

    if [ "$days_until_expiry" -lt 30 ]; then
        echo "$(timestamp) | $domain | SSL Certificate expires in $days_until_expiry days" >> "$LOG_FILE"
        send_ssl_alert "$domain" "$days_until_expiry"
    fi
}
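
check_ssl_expiry calls send_ssl_alert, which the base script does not define. A minimal sketch, reusing the same mail-based alerting as before:

send_ssl_alert() {
    local domain=$1
    local days=$2
    echo "Alert: SSL certificate for $domain expires in $days days" | \
        mail -s "SSL Expiry Warning - $domain" "$ALERT_EMAIL"
}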

Adding Response Time Monitoring

check_response_time() {
    local site=$1
    local threshold=2.0  # Maximum acceptable response time in seconds

    local response_time=$(curl -s -o /dev/null --max-time 10 -w "%{time_total}" "$site")

    if (( $(echo "$response_time > $threshold" | bc -l) )); then
        echo "$(timestamp) | $site | Slow Response: ${response_time}s" >> "$LOG_FILE"
        send_performance_alert "$site" "$response_time"
    fi
}
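
As with the SSL check, send_performance_alert needs a definition; a minimal mail-based version might look like this:

send_performance_alert() {
    local site=$1
    local time=$2
    echo "Alert: $site responded slowly (${time}s)" | \
        mail -s "Slow Response Alert - $site" "$ALERT_EMAIL"
}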

Adding Content Verification

check_content() {
    local site=$1
    local expected_content="Welcome"  # Change this to your expected content

    local content=$(curl -s "$site")
    if ! echo "$content" | grep -q "$expected_content"; then
        echo "$(timestamp) | $site | Content check failed" >> "$LOG_FILE"
        send_content_alert "$site"
    fi
}
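
Likewise, a simple send_content_alert to pair with the check above:

send_content_alert() {
    local site=$1
    echo "Alert: expected content not found on $site" | \
        mail -s "Content Check Failed - $site" "$ALERT_EMAIL"
}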

Part 3: Data Collection and Reporting

Adding Performance Metrics Storage

store_metrics() {
    local site=$1
    local response_time=$2
    local status_code=$3

    # Store in CSV format for easy processing
    echo "$(timestamp),$site,$status_code,$response_time" >> "/var/log/website_metrics.csv"
}

# Generate daily report
generate_report() {
    local report_file="/var/log/website_report.txt"
    local metrics_file="/var/log/website_metrics.csv"
    local date=$(date '+%Y-%m-%d')

    {
        echo "Website Monitoring Report - $date"
        echo "================================="
        echo
        echo "Availability Summary:"
        # Restrict the summary to today's rows so the report covers a single day
        grep "^$date" "$metrics_file" | awk -F',' '{print $2,$3}' | \
            sort | uniq -c | sort -nr

        echo
        echo "Response Time Summary:"
        grep "^$date" "$metrics_file" | \
            awk -F',' '{sum+=$4; count++} END {if (count) print "Average:", sum/count, "seconds"}'
    } > "$report_file"

    mail -s "Daily Website Monitoring Report - $date" "$ALERT_EMAIL" < "$report_file"
}
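
store_metrics only does something if the checks feed it data. One way to wire it in, shown as a sketch against the check_website function from Part 1:

# Inside check_website, after the status code and response time are captured:
store_metrics "$site" "$response_time" "$response_code"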

Part 4: Setting Up Automation

Cron Job Configuration

# Add to /etc/crontab:
# Check every 5 minutes
*/5 * * * * root /path/to/monitor.sh
# Generate the daily report at midnight
0 0 * * * root /path/to/generate_report.sh

Log Rotation Configuration

# Add to /etc/logrotate.d/website-monitor
/var/log/website_monitor.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
    create 644 root root
}
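
The metrics CSV from Part 3 grows just as steadily as the main log, so it is worth rotating as well. The stanza below is a sketch that can sit alongside the one above:

# Add alongside the block above in /etc/logrotate.d/website-monitor
/var/log/website_metrics.csv {
    weekly
    rotate 12
    compress
    delaycompress
    missingok
    notifempty
    create 644 root root
}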

Part 5: Advanced Features

Adding Webhook Notifications

send_webhook_alert() {
    local site=$1
    local status=$2
    local webhook_url="https://your-webhook-url"

    curl -s -o /dev/null --max-time 10 \
         -H "Content-Type: application/json" \
         -d "{\"site\":\"$site\",\"status\":\"$status\",\"timestamp\":\"$(timestamp)\"}" \
         "$webhook_url"
}
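
To put the webhook to use, call it wherever an email alert already goes out, for example next to send_alert in check_website. The JSON body above is generic; services such as Slack or Discord expect their own payload fields, so adjust it to whatever your endpoint requires.

# Inside check_website, next to the existing email alert:
if [ "$response_code" != "200" ]; then
    send_alert "$site" "$response_code" "$response_time"
    send_webhook_alert "$site" "$response_code"
fi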

Adding Network Latency Monitoring

check_latency() {
    local domain=$(echo "$1" | sed -e 's|^https\?://||' -e 's|/.*$||')
    # Average round-trip time taken from ping's min/avg/max/mdev summary line
    local ping_result=$(ping -c 3 "$domain" | tail -1 | awk '{print $4}' | cut -d '/' -f 2)

    # Skip the check if the host did not return a usable average (e.g. 100% packet loss)
    if ! [[ "$ping_result" =~ ^[0-9.]+$ ]]; then
        echo "$(timestamp) | $domain | Latency check failed: no ping response" >> "$LOG_FILE"
        return 1
    fi

    if (( $(echo "$ping_result > 100" | bc -l) )); then  # Alert if average ping > 100ms
        echo "$(timestamp) | $domain | High latency: ${ping_result}ms" >> "$LOG_FILE"
        send_latency_alert "$domain" "$ping_result"
    fi
}
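
And a matching send_latency_alert, again assuming mail delivery to ALERT_EMAIL:

send_latency_alert() {
    local domain=$1
    local latency=$2
    echo "Alert: high latency to $domain (average ${latency}ms)" | \
        mail -s "High Latency Alert - $domain" "$ALERT_EMAIL"
}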

Implementation Best Practices

  1. Error Handling
       • Always validate input parameters
       • Use proper exit codes
       • Implement timeout mechanisms
       • Handle edge cases gracefully

  2. Security Considerations
       • Run scripts with minimal required permissions
       • Sanitize all inputs
       • Protect sensitive information
       • Use secure communication channels

  3. Maintenance
       • Regularly review and update scripts
       • Monitor log file sizes
       • Update SSL certificate checks
       • Review alert thresholds

  4. Scalability
       • Use configuration files for easy updates (see the sketch after this list)
       • Implement parallel processing for multiple sites
       • Consider using temporary files for large datasets
       • Optimize database operations
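
As a sketch of the first two scalability points, the site list can move into a config file and the checks can run concurrently. The /etc/website-monitor.conf path is an assumption; use whatever location fits your setup.

# /etc/website-monitor.conf (assumed path), sourced by monitor.sh:
# WEBSITES=("https://example.com" "https://example.org")
# ALERT_EMAIL="admin@example.com"

# In monitor.sh, replace the inline configuration and main loop with:
source /etc/website-monitor.conf

for site in "${WEBSITES[@]}"; do
    check_website "$site" &   # run each check in the background
done
wait                          # block until every background check has finished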

This zero-cost monitoring solution provides robust website monitoring capabilities without requiring expensive third-party services. By implementing these scripts and following the best practices, you can create a reliable monitoring system that meets your needs while maintaining complete control over your monitoring infrastructure.

Remember to:

  • Regularly test all components
  • Keep scripts updated
  • Monitor log file growth
  • Review and adjust thresholds as needed
  • Document any customizations
  • Maintain backup copies of configurations
