🚀 Python for DevOps – API Monitoring with requests

Practiced using Python's requests library to check API health, a common real-world DevOps task.

📂 Use Case: In production, services depend on APIs. We need to continuously verify that APIs are reachable and healthy.

💻 Python Script:

```python
import requests

url = "https://api.github.com"

try:
    res = requests.get(url, timeout=5)
    if res.status_code == 200:
        print("✅ GitHub API is UP")
    else:
        print("⚠️ GitHub API issue:", res.status_code)
except requests.exceptions.RequestException as e:
    print("🚨 API call failed:", e)
```

Output:

```
✅ GitHub API is UP
```

🔍 What this does:
- Sends an HTTP request to the API
- Uses a timeout to avoid hanging
- Checks the response status
- Handles failures gracefully

🔥 Why this matters in DevOps:
- Monitor service availability
- Validate endpoints in CI/CD pipelines
- Detect outages early
- Automate health checks

💡 Key Learning: APIs are everywhere in DevOps, and Python makes it easy to integrate, monitor, and automate systems.

📈 Next Steps:
- Send alerts (Slack/Email) if the API fails
- Combine with log monitoring scripts
- Build a full monitoring + alerting system

#Python #DevOps #API #Automation #Monitoring #Scripting #Cloud #Learning #100DaysOfCode
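A minimal sketch of the "send alerts if the API fails" next step. The webhook URL is a placeholder, and the `build_alert`/`notify_slack` helpers are illustrative names, not from the original post; Slack incoming webhooks accept a JSON body with a `text` field.

```python
import requests

# Placeholder: replace with your own Slack incoming-webhook URL.
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def build_alert(service, detail):
    """Build a Slack incoming-webhook payload for a failed health check."""
    return {"text": f":rotating_light: {service} health check failed: {detail}"}

def notify_slack(payload):
    # Slack replies with HTTP 200 and the body "ok" on success.
    requests.post(SLACK_WEBHOOK, json=payload, timeout=5)

if __name__ == "__main__":
    try:
        res = requests.get("https://api.github.com", timeout=5)
        if res.status_code != 200:
            notify_slack(build_alert("GitHub API", f"HTTP {res.status_code}"))
    except requests.exceptions.RequestException as e:
        notify_slack(build_alert("GitHub API", str(e)))
```

Keeping the payload construction in its own function makes the alert text easy to test without any network access.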
🚀 Python for DevOps – Log Level Automation Project

Today I built a practical DevOps script using Python to analyze logs and separate them by log level.

📂 Problem: Manually checking logs is time-consuming. Needed a way to automatically filter and organize logs.

💻 Solution (Python Script):

```python
with open("app.log") as f, \
     open("error.log", "w") as err, \
     open("warning.log", "w") as warn, \
     open("info.log", "w") as info:
    for line in f:
        if "ERROR" in line:
            err.write(line)
        elif "WARNING" in line:
            warn.write(line)
        elif "INFO" in line:
            info.write(line)
```

Output:

```
ubuntu@satheesha:~/python$ python3 multiple-log_level.py
ubuntu@satheesha:~/python$ ls -ltr error.log warning.log info.log
-rw-r--r-- 1 ubuntu ubuntu 18 Apr 21 07:45 warning.log
-rw-r--r-- 1 ubuntu ubuntu 44 Apr 21 07:45 info.log
-rw-r--r-- 1 ubuntu ubuntu 17 Apr 21 07:45 error.log
ubuntu@satheesha:~/python$ cat app.log
INFO: Service startes
WARNING: High CPU
INFO: Service startes
ERROR: Disk full
ubuntu@satheesha:~/python$ cat error.log
ERROR: Disk full
ubuntu@satheesha:~/python$ cat warning.log
WARNING: High CPU
ubuntu@satheesha:~/python$ cat info.log
INFO: Service startes
INFO: Service startes
```

🔍 What this does:
- Reads app.log
- Filters logs into error.log, warning.log, and info.log

📊 Outcome:
- Faster troubleshooting
- Organized logs for better monitoring
- Reduced manual effort

🔥 Real DevOps Use Cases:
- Production log monitoring
- CI/CD pipeline validation
- Incident detection and alerting

💡 Key Learning: Python is a powerful tool for automation in DevOps, especially for handling logs and system data.

📈 Next Step: Enhancing this script to:
- Count log levels
- Trigger alerts (email/Slack)
- Monitor logs in real-time (tail -f style)

#Python #DevOps #Automation #Scripting #Cloud #Learning #100DaysOfCode
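One possible refactor of the splitter above (not from the original post): route levels through a dict of file handles so adding a new level is a one-line change. The sample app.log mirrors the post's data, typo included, so the sketch is self-contained.

```python
# Sample input, matching the post's app.log (its "startes" typo kept verbatim)
with open("app.log", "w") as f:
    f.write("INFO: Service startes\nWARNING: High CPU\n"
            "INFO: Service startes\nERROR: Disk full\n")

LEVELS = ("ERROR", "WARNING", "INFO")

# One output file per level; the dict keeps the routing logic in one place.
outputs = {level: open(f"{level.lower()}.log", "w") for level in LEVELS}
try:
    with open("app.log") as f:
        for line in f:
            for level in LEVELS:
                if level in line:
                    outputs[level].write(line)
                    break  # each line goes to exactly one file
finally:
    for handle in outputs.values():
        handle.close()
```

To support a new DEBUG level, only the `LEVELS` tuple changes; the loop and file handling stay untouched.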
🚀 Python for DevOps – Log Analysis with Metrics

Today I enhanced my log automation script to not only separate logs by level but also count them for quick insights.

📂 Problem: Manually analyzing logs is slow and inefficient.

💻 Solution (Python Script):

```python
error = warning = info_count = 0

with open("app.log") as f, \
     open("error.log", "w") as err, \
     open("warning.log", "w") as warn, \
     open("info.log", "w") as info:
    for line in f:
        if "ERROR" in line:
            err.write(line)
            error += 1
        elif "WARNING" in line:
            warn.write(line)
            warning += 1
        elif "INFO" in line:
            info.write(line)
            info_count += 1

print("ERROR:", error)
print("WARNING:", warning)
print("INFO:", info_count)
```

🔍 What this does:
- Reads app.log
- Splits logs into separate files
- Counts each log level

📊 Output:

```
ubuntu@satheesha:~/python$ python3 multiple-log_level.py
ERROR: 1
WARNING: 1
INFO: 2
ubuntu@satheesha:~/python$ ls -ltr error.log warning.log info.log
-rw-r--r-- 1 ubuntu ubuntu 18 Apr 21 08:20 warning.log
-rw-r--r-- 1 ubuntu ubuntu 44 Apr 21 08:20 info.log
-rw-r--r-- 1 ubuntu ubuntu 17 Apr 21 08:20 error.log
ubuntu@satheesha:~/python$ cat error.log
ERROR: Disk full
ubuntu@satheesha:~/python$ cat warning.log
WARNING: High CPU
ubuntu@satheesha:~/python$ cat info.log
INFO: Service startes
INFO: Service startes
```

🔥 Why this matters:
- Quick visibility into system health
- Helps prioritize issues (ERROR > WARNING > INFO)
- Reduces manual troubleshooting time

💡 Key Learning: Python can be used not just for automation, but also for real-time insights and monitoring in DevOps.

📈 Next Step:
- Add alerting when the ERROR count exceeds a threshold
- Integrate with monitoring tools
- Build real-time log monitoring (tail -f in Python)

#Python #DevOps #Automation #Scripting #Monitoring #Cloud #Learning
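A minimal sketch of the "alert when the ERROR count exceeds a threshold" next step. The sample log content and the threshold value of 1 are illustrative, not from the original post.

```python
# Self-contained sample log (illustrative contents)
with open("app.log", "w") as f:
    f.write("ERROR: Disk full\nERROR: OOM killed\nWARNING: High CPU\n")

ERROR_THRESHOLD = 1  # alert once errors exceed this count (example value)

error_count = 0
with open("app.log") as f:
    for line in f:
        if "ERROR" in line:
            error_count += 1

if error_count > ERROR_THRESHOLD:
    message = f"ALERT: {error_count} ERROR lines found (threshold {ERROR_THRESHOLD})"
else:
    message = f"OK: {error_count} ERROR lines"
print(message)
```

The `message` string could later be handed to an email or Slack sender instead of `print`.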
BLOG 03 OF 19 · PYTHON

Python for Cloud & DevOps: The Glue That Holds Everything Together
By Aditya Girish Padhye · ~5 min read
Topic: Python | Series: Cloud & DevOps Learning Journey

Python isn't just a programming language in the DevOps world — it's the universal glue. It connects tools, automates workflows, talks to APIs, processes logs, and drives infrastructure. If Linux is the OS of the cloud, Python is its scripting language.

"Every cloud engineer I admire writes Python. Not because it's trendy — because it genuinely solves problems faster than anything else."

Why Python Specifically?

AWS Lambda runs Python natively. Boto3, the AWS SDK, is a Python library. Ansible playbooks extend with Python. The cloud ecosystem has quietly made Python a first-class citizen, and learning it opens all those doors.

Core Language Fundamentals

• Data Types & Collections: Lists, dictionaries, sets, and tuples are the building blocks. In DevOps scripts, you're constantly manipulating JSON responses from APIs, and JSON maps directly to Python dicts.
• Control Flow: Decision making with if/elif and iteration with for/while loops are what make automation scripts intelligent rather than just sequential.
• Functions & OOP: Writing reusable functions and understanding classes makes your scripts maintainable. Modular code is the difference between a tool and a mess.
• File & Exception Handling: Reading config files, writing logs, handling errors gracefully — production scripts must not crash silently.

Advanced Capabilities: Django, Flask & API Handling

Flask is lightweight and perfect for building simple REST APIs or internal tooling dashboards. Django provides a full-featured framework for more complex applications. In a DevOps context, Flask is commonly used to build webhook receivers, monitoring endpoints, or automation trigger APIs.

• API Handling: Using the requests library to call REST APIs, parse JSON responses, and handle auth tokens — a daily DevOps activity.
• Advanced Libraries: boto3 for AWS automation, paramiko for SSH automation, subprocess for running shell commands, schedule for job automation.

A Real Automation Example

One of the most satisfying scripts I wrote used boto3 to automatically tag untagged EC2 instances with their owner and creation date, pulling the data from CloudTrail events. That script ran in Lambda on a schedule and eliminated a manual compliance task entirely. That's the power of Python in the cloud: turn a repetitive human task into a zero-maintenance automated process.

Start with the basics, build a few automation scripts, then connect them to real AWS services via boto3. The compounding effect on your productivity is remarkable.

#Python #DevOps #CloudAutomation #Boto3 #Flask #Django #AWSLambda #CloudEngineer #LearningInPublic #FortuneCloud

Aditya Girish Padhye · AWS Cloud & DevOps Engineer
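A hedged sketch of the kind of tagging script described above. The required tag names (`Owner`, `CreatedDate`), the placeholder value, and the `untagged` helper are illustrative assumptions; the author's real script pulled values from CloudTrail events, which is omitted here.

```python
def untagged(instance_tags, required=("Owner", "CreatedDate")):
    """Return the required tag keys missing from an instance's tag list."""
    present = {t["Key"] for t in (instance_tags or [])}
    return [key for key in required if key not in present]

if __name__ == "__main__":
    import boto3  # AWS SDK; requires configured credentials

    ec2 = boto3.client("ec2")
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                missing = untagged(instance.get("Tags"))
                if missing:
                    # Placeholder value; the real script looked up the
                    # creator from CloudTrail instead of writing "unknown".
                    ec2.create_tags(
                        Resources=[instance["InstanceId"]],
                        Tags=[{"Key": k, "Value": "unknown"} for k in missing],
                    )
```

Separating the pure `untagged` check from the AWS calls keeps the compliance rule unit-testable without credentials.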
🚀 Python Basics for DevOps Engineers (With Practical Examples)

Python is a must-have skill for DevOps. Here are some basic concepts with real-time examples 👇

🔹 1. Variables

```python
name = "server1"
cpu_usage = 75
is_running = True
print(name)
print(cpu_usage)
```

💡 DevOps Example:

```python
server = "web-server"
status = "running"
print(server, status)
```

🔹 2. Data Types

- String → "hello"
- Integer → 10
- Boolean → True / False
- List → ["app1", "app2"]

```python
services = ["nginx", "docker", "jenkins"]
print(services[0])  # nginx
```

🔹 3. Conditions (if-else)

Used for decision-making in automation:

```python
cpu = 85
if cpu > 80:
    print("High CPU usage")
else:
    print("Normal CPU")
```

💡 DevOps Example:

```python
disk = 90
if disk > 80:
    print("Alert: Disk almost full")
else:
    print("Disk is normal")
```

🔹 4. Loops

Used to repeat tasks (like checking multiple servers).

👉 for loop:

```python
servers = ["web1", "web2", "web3"]
for s in servers:
    print(s)
```

👉 while loop:

```python
i = 1
while i <= 5:
    print(i)
    i += 1
```

🔹 5. Functions

Reusable code (very important for scripting):

```python
def check_cpu(cpu):
    if cpu > 80:
        print("Alert: High CPU")
    else:
        print("Normal CPU")

check_cpu(85)
```

🔹 6. Real DevOps Example

```python
servers = ["web1", "web2", "web3"]

def check_status(server):
    print(f"Checking {server}...")

for s in servers:
    check_status(s)
```

🔹 7. Mini Practice

```python
cpu = 70
if cpu > 80:
    print("Alert: scale up server")
else:
    print("Server is stable")
```

💡 Key Takeaway: Python helps automate repetitive tasks like monitoring, alerts, and server management in DevOps.

#DevOps #Python #Automation #Scripting #Learning #AWS #Kubernetes
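The pieces above can be combined into one small sketch: a function, a condition, and a loop over a dict of servers. The server names and CPU numbers are made-up sample data.

```python
def check(server, cpu):
    """Return an alert or OK line for one server's CPU reading."""
    if cpu > 80:
        return f"ALERT: {server} high CPU ({cpu}%)"
    return f"OK: {server} ({cpu}%)"

# Illustrative metrics; in a real script these would come from monitoring
metrics = {"web1": 45, "web2": 85, "web3": 60}

report = [check(server, cpu) for server, cpu in metrics.items()]
for line in report:
    print(line)
```

Returning strings instead of printing inside `check` makes the function reusable, the same lines could be written to a log file or sent as an alert.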
🚀 Python for DevOps – Triggering Jenkins Jobs via API

Explored how to automate CI/CD by triggering Jenkins jobs using Python and APIs — a key real-world DevOps capability.

📂 Use Case: Instead of manually clicking "Build Now" in Jenkins, we can trigger jobs programmatically through the API.

💻 Python Script:

```python
import requests

jenkins_url = "http://your-jenkins-url/job/your-job-name/build"
username = "your-username"
api_token = "your-api-token"

response = requests.post(jenkins_url, auth=(username, api_token))

if response.status_code == 201:
    print("✅ Jenkins job triggered successfully")
else:
    print("❌ Failed to trigger job:", response.status_code)
```

🔐 Handling CSRF (Crumb Token) — note that the crumb issuer lives at the Jenkins root URL, not under the job path:

```python
base_url = "http://your-jenkins-url"
crumb_url = f"{base_url}/crumbIssuer/api/json"
crumb_data = requests.get(crumb_url, auth=(username, api_token)).json()
headers = {crumb_data['crumbRequestField']: crumb_data['crumb']}
# Pass headers=headers in the POST that triggers the build
```

⚙️ Trigger Job with Parameters:

```python
params = {"ENV": "prod", "VERSION": "1.0"}
response = requests.post(
    "http://your-jenkins-url/job/your-job-name/buildWithParameters",
    params=params,
    auth=("user", "token")
)
```

🔍 What this enables:
- Automate CI/CD pipelines
- Trigger builds from scripts or monitoring tools
- Integrate Jenkins with other systems
- Reduce manual intervention

🔥 Why this matters in DevOps: Automation is the backbone of DevOps. Using APIs, we can connect tools and build end-to-end automated workflows.

💡 Key Learning: Jenkins + APIs + Python = a powerful combination for pipeline automation and integration.

📈 Next Steps:
- Trigger Jenkins from the log monitoring script
- Send build status to Slack
- Integrate with cloud deployments (AWS)

#DevOps #Jenkins #Python #Automation #CICD #API #Cloud #Scripting #Learning #100DaysOfCode
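The fragments above can be combined into one sketch. The base URL, job name, and credentials are placeholders, and the `build_endpoint`/`trigger` helpers are illustrative names; the sketch assumes CSRF protection is enabled and relies on Jenkins answering 201 Created when a build is queued.

```python
import requests

# Placeholders: replace with your own Jenkins base URL and API token.
JENKINS = "http://your-jenkins-url"
AUTH = ("your-username", "your-api-token")

def build_endpoint(base, job, params=None):
    """Pick the right trigger endpoint: plain build vs. parameterised build."""
    suffix = "buildWithParameters" if params else "build"
    return f"{base}/job/{job}/{suffix}"

def trigger(job, params=None):
    """Fetch a CSRF crumb from the Jenkins root, then queue the build."""
    crumb = requests.get(f"{JENKINS}/crumbIssuer/api/json",
                         auth=AUTH, timeout=10).json()
    headers = {crumb["crumbRequestField"]: crumb["crumb"]}
    res = requests.post(build_endpoint(JENKINS, job, params),
                        params=params, headers=headers, auth=AUTH, timeout=10)
    return res.status_code  # 201 Created means the build was queued

if __name__ == "__main__":
    status = trigger("your-job-name", {"ENV": "prod", "VERSION": "1.0"})
    print("✅ triggered" if status == 201 else f"❌ failed: {status}")
```

Splitting URL construction from the network call keeps the endpoint logic testable without a live Jenkins instance.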
🚀 Python for DevOps: Real-Time System Monitoring Script (CPU + Memory + Disk)

One of the most practical skills in DevOps is automating system monitoring. Instead of manually checking servers, I built a simple Python script that:
✅ Monitors CPU usage
✅ Tracks Memory consumption
✅ Checks Disk utilization
✅ Triggers alerts when thresholds are exceeded

💻 Full Script

```python
import psutil
import shutil

# Thresholds
CPU_THRESHOLD = 80
MEM_THRESHOLD = 80
DISK_THRESHOLD = 80

def check_cpu():
    cpu = psutil.cpu_percent(interval=1)
    if cpu > CPU_THRESHOLD:
        print(f"ALERT: High CPU usage: {cpu}%")
    else:
        print(f"OK: CPU usage: {cpu}%")

def check_memory():
    mem = psutil.virtual_memory()
    usage = mem.percent
    if usage > MEM_THRESHOLD:
        print(f"ALERT: High Memory usage: {usage}%")
    else:
        print(f"OK: Memory usage: {usage}%")

def check_disk():
    disk = shutil.disk_usage("/")
    usage = (disk.used / disk.total) * 100
    if usage > DISK_THRESHOLD:
        print(f"ALERT: High Disk usage: {usage:.2f}%")
    else:
        print(f"OK: Disk usage: {usage:.2f}%")

def main():
    print("===== System Monitoring =====")
    check_cpu()
    check_memory()
    check_disk()

if __name__ == "__main__":
    main()
```

Output:

```
ubuntu@satheesha:~/python$ python3 full-monitor-script.py
===== System Monitoring =====
OK: CPU usage: 1.0%
OK: Memory usage: 30.5%
OK: Disk usage: 2.74%
```

⚙️ How I Used It
- Installed the dependency: sudo apt install python3-psutil
- Ran the script to get real-time system health
- Can be scheduled with cron for continuous monitoring

🔥 Why This Matters in DevOps
👉 Helps detect issues before outages
👉 Reduces manual effort
👉 Can be extended to send alerts (Email / Slack / SNS)
👉 Foundation for tools like monitoring agents

🎯 Key Learning
"Don't just run commands like top or df -h — automate them with Python and build intelligent monitoring."

🚀 Next Steps
I'm planning to:
- Integrate this with a Jenkins pipeline
- Send alerts to Slack
- Push metrics to monitoring tools

💬 How do you monitor your servers in real-time?
#DevOps #Python #Automation #Monitoring #SRE #Cloud #Linux #Jenkins #Learning #100DaysOfCode
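One possible refactor of the monitoring script above (an assumption on my part, not from the post): the three checks repeat the same threshold if/else, so it can be factored into a single helper. This sketch uses only the stdlib, which makes adding Slack or email later a change in exactly one place.

```python
import shutil

def evaluate(metric, usage, threshold=80.0):
    """One shared threshold check instead of three copies of the same if/else."""
    state = "ALERT" if usage > threshold else "OK"
    return f"{state}: {metric} usage: {usage:.2f}%"

# Disk check via the stdlib; CPU/Memory would still come from psutil.
disk = shutil.disk_usage("/")
print(evaluate("Disk", disk.used / disk.total * 100))
```

Because `evaluate` returns a string rather than printing, the caller decides whether the result goes to stdout, a log file, or an alerting channel.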
🚀 Python for DevOps – Log Monitoring with Timestamp & Alerts (Mini Project)

Built a hands-on Python script to analyze logs, generate alerts, and track system health — a small step toward real-world DevOps automation.

📂 Problem: Manually scanning logs is inefficient and error-prone. Needed a way to automatically filter and track critical issues.

💻 Solution (Python Script):

```python
from datetime import datetime

ERROR_COUNT = 0
WARNING_COUNT = 0
INFO_COUNT = 0

with open("app.log") as f, open("alerts.log", "a") as alert_file:
    for line in f:
        timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        if "ERROR" in line:
            ERROR_COUNT += 1
            alert_file.write(f"{timestamp} - {line.strip()}\n")
        elif "WARNING" in line:
            WARNING_COUNT += 1
            alert_file.write(f"{timestamp} - {line.strip()}\n")
        elif "INFO" in line:
            INFO_COUNT += 1

print("============ LOG SUMMARY ============")
print("ERROR:", ERROR_COUNT)
print("WARNING:", WARNING_COUNT)
print("INFO:", INFO_COUNT)
```

Output:

```
ubuntu@satheesha:~/python$ python3 log-mon_alert-time.py
============ LOG SUMMARY ============
ERROR: 1
WARNING: 1
INFO: 2
ubuntu@satheesha:~/python$ cat alerts.log
2026-04-21 11:45:59 - WARNING: High CPU
2026-04-21 11:45:59 - ERROR: Disk full
```

🔍 What this script does:
- Reads application logs (app.log)
- Counts log levels (ERROR / WARNING / INFO)
- Appends ERROR and WARNING alerts to alerts.log
- Adds timestamps for better traceability
- Prints summary metrics for quick insights

📊 Why this matters:
- Faster troubleshooting in production
- Clear visibility into system health
- Reduces manual effort in log analysis

🔥 Key Learning: Python is a powerful tool in DevOps — not just for scripting, but for automation, monitoring, and observability.

📈 Next Steps:
- Add alerting (Email / Slack integration)
- Convert logs to a structured format (JSON for the ELK stack)
- Build real-time log monitoring (tail -f style)

#Python #DevOps #Automation #Logging #Monitoring #Cloud #Scripting #Learning #100DaysOfCode
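A minimal sketch of the "JSON for the ELK stack" next step, using only the stdlib. The `@timestamp` field name follows the common Elasticsearch convention; the `to_json_line` helper and the sample log content are illustrative.

```python
import json
from datetime import datetime, timezone

def to_json_line(raw):
    """Convert a 'LEVEL: message' log line into one JSON document per line."""
    level, _, message = raw.partition(":")
    record = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level.strip(),
        "message": message.strip(),
    }
    return json.dumps(record)

# Self-contained sample input mirroring the post's log format
with open("app.log", "w") as f:
    f.write("ERROR: Disk full\nWARNING: High CPU\n")

with open("app.log") as f, open("alerts.json", "w") as out:
    for line in f:
        out.write(to_json_line(line.strip()) + "\n")
```

One JSON object per line (NDJSON) is the shape that log shippers like Filebeat and Logstash ingest most easily.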
🚀 Why YAML Validation with Python Matters in the DevOps World

YAML has become the backbone of modern DevOps workflows. From CI/CD pipelines to infrastructure-as-code and Kubernetes manifests, YAML keeps our configurations clean, readable, and structured. But here's the catch — a single indentation mistake can break an entire deployment. That's where Python-based YAML validation becomes a game changer.

🔎 Why Validate YAML with Python?

✅ Early Error Detection
Catch syntax and structure issues before they reach production. No more failed deployments due to simple spacing or formatting mistakes.

✅ Automation Friendly
Python scripts can be easily integrated into CI/CD pipelines to automatically validate YAML files on every commit or pull request.

✅ Custom Validation Rules
Beyond syntax checks, Python lets you enforce business logic, such as required fields, allowed values, or environment-specific configurations.

✅ Improved Reliability
Validated YAML means more stable pipelines, fewer rollbacks, and higher confidence in automated deployments.

🛠️ Why This Matters in DevOps

In the DevOps ecosystem, YAML powers deployment pipelines, container orchestration, monitoring setups, and infrastructure definitions. Using Python as a validation layer ensures:
- Cleaner configuration management
- Safer deployments
- Faster debugging cycles
- Better collaboration across teams

💡 Takeaway: Treat YAML validation as a first-class step in your DevOps workflow. A lightweight Python validator today can save hours of troubleshooting tomorrow.

GITHUB: https://lnkd.in/dTDADrzS

#DevOps #Python #YAML #Automation #InfrastructureAsCode #CICD #SRE #CloudComputing
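A minimal validator along the lines described, using PyYAML's `yaml.safe_load` for the syntax check plus a simple required-keys rule. The `validate_yaml` helper and the key names are illustrative assumptions, not taken from the linked repository.

```python
import yaml  # PyYAML (pip install pyyaml)

def validate_yaml(text, required_keys=()):
    """Return a list of problems: syntax errors plus missing top-level keys."""
    try:
        data = yaml.safe_load(text)
    except yaml.YAMLError as exc:
        return [f"syntax error: {exc}"]
    if not isinstance(data, dict):
        return ["top level is not a mapping"]
    return [f"missing required key: {key}" for key in required_keys if key not in data]

good = "name: web\nreplicas: 3\n"
bad = "name: web\n  replicas: 3\n"  # broken indentation

print(validate_yaml(good, required_keys=("name", "replicas")))
print(validate_yaml(bad))
```

Returning a list of problems instead of raising makes the validator easy to wire into a CI step: an empty list means pass, anything else fails the build with all issues reported at once.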
🚀 Python for DevOps – Stop Learning, Start Automating

Most people learn Python… but very few use it the right way in DevOps. Here's the truth 👇

👉 You don't need deep theory.
👉 You need practical automation skills.

🔹 What to Focus On (DevOps Style)
✔ Variables, loops, conditions
✔ Functions
✔ File handling (logs, configs)
✔ Error handling (try/except)
✔ Key modules:
- os → system operations
- subprocess → run shell commands
- json / yaml → config management

🔧 Real Example: Run a Linux Command Using Python

```python
import subprocess

result = subprocess.run(["df", "-h", "/"], capture_output=True, text=True)
lines = result.stdout.strip().split("\n")

if len(lines) > 1:
    parts = lines[1].split()
    usage = int(parts[4].replace("%", ""))
    if usage > 80:
        print(f"🚨 ALERT: Disk usage is {usage}%")
    else:
        print(f"✅ OK: Disk usage is {usage}%")
else:
    print("❌ Unexpected output:", result.stdout)
```

Output:

```
ubuntu@satheesha:~/python$ python3 Disk-Usage.py
✅ OK: Disk usage is 3%
```

💡 What This Shows
✔ You can interact with the OS
✔ You can parse real-time system data
✔ You can build automation scripts
✔ You think like a DevOps Engineer

🎯 How I Explain This in Interviews
"I use Python's subprocess module to execute system commands, parse outputs, and automate monitoring tasks like disk usage alerts."

🔥 Pro Tip: take it one step further:
- Send alerts to Slack/Email 📩
- Schedule with cron ⏰
- Integrate with Jenkins 🔁

💬 If you're learning DevOps, stop just writing scripts… start solving real problems.

#DevOps #Python #Automation #Linux #SRE #Cloud #Jenkins #Learning
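A side note on the example above: parsing `df -h` output is brittle, since column order and formatting can vary across systems and locales. A stdlib alternative that skips the subprocess entirely (the `disk_usage_percent` helper name is my own):

```python
import shutil

def disk_usage_percent(path="/"):
    """Percentage of disk space used at `path`, with no subprocess or text parsing."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

pct = disk_usage_percent("/")
print(f"{'🚨 ALERT' if pct > 80 else '✅ OK'}: Disk usage is {pct:.0f}%")
```

`subprocess` remains the right tool when there is no Python API for a command; here the stdlib happens to cover the need directly.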
🚀 Learning Python for DevOps – Hands-on Practice with Code

Today I worked on a practical DevOps task: Log File Analysis using Python.

Started with a sample app.log:

```
ERROR: Disk full
WARNING: High CPU usage
INFO: Service started
INFO: Health check passed
```

🔍 Step 1: Read the log file

```python
with open("app.log", "r") as f:
    data = f.read()
print(data)
```

⚠️ Step 2: Filter only ERROR logs

```python
with open("app.log", "r") as f:
    for line in f:
        if "ERROR" in line:
            print(line)
```

📊 Step 3: Count total errors

```python
error_count = 0
with open("app.log") as f:
    for line in f:
        if "ERROR" in line:
            error_count += 1
print("Total Errors:", error_count)
```

🧠 Step 4: Handle multiple log levels (real-world scenario)

```python
error = 0
warning = 0
info = 0

with open("app.log") as f:
    for line in f:
        if "ERROR" in line:
            error += 1
        elif "WARNING" in line:
            warning += 1
        elif "INFO" in line:
            info += 1

print("ERROR:", error)
print("WARNING:", warning)
print("INFO:", info)
```

🚨 Step 5: DevOps-style alert output

```python
with open("app.log") as f:
    for line in f:
        if "ERROR" in line:
            print("ALERT:", line.strip())
```

💡 Key Learning: Python is a powerful tool in DevOps for log monitoring, automation, and faster troubleshooting.

🔥 Even simple scripts like this can help in:
- Production monitoring
- CI/CD pipelines
- Incident detection

📈 Next Goal: Build a real-time log monitoring script (like tail -f) using Python

#Python #DevOps #Automation #Scripting #Learning #Cloud #100DaysOfCode
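A minimal sketch of the "tail -f" next goal mentioned above: a generator that polls a file handle for newly appended lines. The `follow` helper is an illustrative name, and the in-memory demo stands in for a real log file.

```python
import io
import time

def follow(handle, poll_interval=1.0):
    """Yield lines as they are appended to `handle`, like `tail -f` (polling sketch)."""
    while True:
        line = handle.readline()
        if line:
            yield line
        else:
            # No new data yet; wait before polling again
            time.sleep(poll_interval)

# Demo against an in-memory file; on a real server you would open app.log
# and seek to the end first so only new lines are watched.
demo = io.StringIO("ERROR: Disk full\n")
first = next(follow(demo, poll_interval=0))
print("ALERT:", first.strip())
```

Because `follow` takes any file-like object, the same generator works for real files, sockets wrapped as files, or test buffers like the one in the demo.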