🚀 Python for DevOps – Triggering Jenkins Jobs via API

Explored how to automate CI/CD by triggering Jenkins jobs with Python and the Jenkins REST API, a key real-world DevOps capability.

📂 Use Case:
Instead of manually clicking "Build Now" in Jenkins, we can trigger jobs programmatically using APIs.

💻 Python Script:

import requests

jenkins_url = "http://your-jenkins-url/job/your-job-name/build"
username = "your-username"
api_token = "your-api-token"

response = requests.post(jenkins_url, auth=(username, api_token))

if response.status_code == 201:
    print("✅ Jenkins job triggered successfully")
else:
    print("❌ Failed to trigger job:", response.status_code)

🔐 Handling CSRF (Crumb Token):
Note the crumb issuer lives at the Jenkins root, not under the job URL, and the crumb must be sent as a header on the build request:

jenkins_base = "http://your-jenkins-url"
crumb_url = f"{jenkins_base}/crumbIssuer/api/json"
crumb_data = requests.get(crumb_url, auth=(username, api_token)).json()
headers = {crumb_data['crumbRequestField']: crumb_data['crumb']}
response = requests.post(jenkins_url, auth=(username, api_token), headers=headers)

⚙️ Trigger Job with Parameters:

params = {"ENV": "prod", "VERSION": "1.0"}
response = requests.post(
    "http://your-jenkins-url/job/your-job-name/buildWithParameters",
    params=params,
    auth=(username, api_token)
)

🔍 What this enables:
- Automate CI/CD pipelines
- Trigger builds from scripts or monitoring tools
- Integrate Jenkins with other systems
- Reduce manual intervention

🔥 Why this matters in DevOps:
Automation is the backbone of DevOps. Using APIs, we can connect tools and build end-to-end automated workflows.

💡 Key Learning:
Jenkins + APIs + Python = a powerful combination for pipeline automation and integration.

📈 Next Steps:
- Trigger Jenkins from a log monitoring script
- Send build status to Slack
- Integrate with cloud deployments (AWS)

#DevOps #Jenkins #Python #Automation #CICD #API #Cloud #Scripting #Learning #100DaysOfCode
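Putting the pieces together, here is a minimal sketch (the URL, job name, and credentials are placeholders) that fetches the crumb and triggers a parameterized build in one authenticated session:

```python
import requests

def build_url(base_url, job):
    """Construct the buildWithParameters endpoint for a job."""
    return f"{base_url}/job/{job}/buildWithParameters"

def trigger_jenkins_job(base_url, job, params, username, api_token):
    """Fetch a CSRF crumb, then trigger a parameterized build.

    Returns the HTTP status code; 201 means Jenkins queued the build."""
    session = requests.Session()
    session.auth = (username, api_token)

    # The crumb issuer lives at the Jenkins root, not under the job URL
    crumb = session.get(f"{base_url}/crumbIssuer/api/json", timeout=5).json()
    session.headers[crumb["crumbRequestField"]] = crumb["crumb"]

    response = session.post(build_url(base_url, job), params=params, timeout=5)
    return response.status_code

# Against a real Jenkins instance this would look like:
# trigger_jenkins_job("http://your-jenkins-url", "your-job-name",
#                     {"ENV": "prod", "VERSION": "1.0"},
#                     "your-username", "your-api-token")
```

Using a Session keeps the auth and crumb header attached to every request, and it also preserves the session cookie that newer Jenkins versions tie crumbs to.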
🚀 Python Basics for DevOps Engineers (With Practical Examples)

Python is a must-have skill for DevOps. Here are some basic concepts with real-time examples 👇

🔹 1. Variables

name = "server1"
cpu_usage = 75
is_running = True
print(name)
print(cpu_usage)

💡 DevOps Example:

server = "web-server"
status = "running"
print(server, status)

🔹 2. Data Types
- String → "hello"
- Integer → 10
- Boolean → True / False
- List → ["app1", "app2"]

services = ["nginx", "docker", "jenkins"]
print(services[0])  # nginx

🔹 3. Conditions (if-else)
Used for decision-making in automation:

cpu = 85
if cpu > 80:
    print("High CPU usage")
else:
    print("Normal CPU")

💡 DevOps Example:

disk = 90
if disk > 80:
    print("Alert: Disk almost full")
else:
    print("Disk is normal")

🔹 4. Loops
Used to repeat tasks (like checking multiple servers).

👉 for loop:

servers = ["web1", "web2", "web3"]
for s in servers:
    print(s)

👉 while loop:

i = 1
while i <= 5:
    print(i)
    i += 1

🔹 5. Functions
Reusable code (very important for scripting):

def check_cpu(cpu):
    if cpu > 80:
        print("Alert: High CPU")
    else:
        print("Normal CPU")

check_cpu(85)

🔹 6. Real DevOps Example

servers = ["web1", "web2", "web3"]

def check_status(server):
    print(f"Checking {server}...")

for s in servers:
    check_status(s)

🔹 7. Mini Practice

cpu = 70
if cpu > 80:
    print("Alert: scale up server")
else:
    print("Server is stable")

💡 Key Takeaway:
Python helps automate repetitive tasks like monitoring, alerts, and server management in DevOps.

#DevOps #Python #Automation #Scripting #Learning #AWS #Kubernetes
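Tying these basics together: a short sketch (the server names and CPU numbers are made up) that combines a list-like collection, a loop, a condition, and a function into one health check:

```python
def check_server(name, cpu):
    """Return an alert message if CPU is above the threshold, else an OK message."""
    if cpu > 80:
        return f"Alert: {name} CPU at {cpu}%"
    return f"OK: {name} CPU at {cpu}%"

# Hypothetical readings for three servers
readings = {"web1": 45, "web2": 85, "web3": 72}

for server, cpu in readings.items():
    print(check_server(server, cpu))
```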
🚀 Python for DevOps – Stop Learning, Start Automating

Most people learn Python… but very few use it the right way in DevOps. Here's the truth 👇

👉 You don't need deep theory.
👉 You need practical automation skills.

🔹 What to Focus On (DevOps Style)
✔ Variables, loops, conditions
✔ Functions
✔ File handling (logs, configs)
✔ Error handling (try/except)
✔ Key modules:
- os → system operations
- subprocess → run shell commands
- json / yaml → config management

🔧 Real Example: Run Linux Command Using Python

import subprocess

result = subprocess.run(["df", "-h", "/"], capture_output=True, text=True)
lines = result.stdout.strip().split("\n")

if len(lines) > 1:
    parts = lines[1].split()
    usage = int(parts[4].replace("%", ""))
    if usage > 80:
        print(f"🚨 ALERT: Disk usage is {usage}%")
    else:
        print(f"✅ OK: Disk usage is {usage}%")
else:
    print("❌ Unexpected output:", result.stdout)

Output:

ubuntu@satheesha:~/python$ python3 Disk-Usage.py
✅ OK: Disk usage is 3%

💡 What This Shows
✔ You can interact with the OS
✔ You can parse real-time system data
✔ You can build automation scripts
✔ You think like a DevOps Engineer

🎯 How I Explain This in Interviews
"I use Python's subprocess module to execute system commands, parse outputs, and automate monitoring tasks like disk usage alerts."

🔥 Pro Tip
Take it one step further:
- Send alerts to Slack/Email 📩
- Schedule with cron ⏰
- Integrate with Jenkins 🔁

💬 If you're learning DevOps, stop just writing scripts… start solving real problems.

#DevOps #Python #Automation #Linux #SRE #Cloud #Jenkins #Learning
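As a side note, the same check works without shelling out at all: the standard library exposes disk usage directly (the 80% threshold mirrors the example above):

```python
import shutil

def disk_usage_percent(path="/"):
    """Return the used-space percentage for the filesystem containing path."""
    usage = shutil.disk_usage(path)
    return round(usage.used / usage.total * 100)

percent = disk_usage_percent("/")
if percent > 80:
    print(f"🚨 ALERT: Disk usage is {percent}%")
else:
    print(f"✅ OK: Disk usage is {percent}%")
```

df's Use% is computed against space available to non-root users, so the two methods can differ by a few percent; either works for alerting, but parsing df teaches you output handling, which is the real skill here.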
"How much Python should a DevOps Engineer know?" 🤔

After working through different DevOps tasks, one thing became clear: it's not about how much Python you know… it's about how effectively you can use it in real scenarios.

Here's how I see it 👇

💡 Core Skills (Non-negotiable)
✔ Writing clean scripts
✔ Working with files & logs
✔ Handling errors properly
👉 This is the foundation for automation

💡 Practical DevOps Usage
✔ Calling APIs (cloud / tools)
✔ Parsing JSON & YAML
✔ Automating workflows
👉 This is where Python becomes powerful

💡 Advanced Usage (Context-driven)
✔ Building CLI tools
✔ Writing reusable modules
✔ Optimizing scripts for scale
👉 Needed when you're solving larger problems

⚡ Key Insight:
In DevOps, Python is not a goal… 👉 it's a tool to automate, integrate, and scale systems 🚀

For hands-on practice, I found this repo really useful. Check out Abhishek Veeramalla's work 🫡:
👉 https://lnkd.in/dTqaK8fQ

🧠 Final Thought:
Strong DevOps engineers don't just "know Python"… they use it to eliminate manual work and improve systems.

How are you using Python in your DevOps workflow?

#DevOps #Python #Automation #Cloud #Learning #Engineering
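For instance, the "calling APIs + parsing JSON" combination is often just a few lines. A sketch with a hypothetical payload standing in for a real API response:

```python
import json

# Hypothetical response body, as a cloud or monitoring API might return it
payload = '{"service": "web", "status": "degraded", "replicas": 2}'

data = json.loads(payload)
if data["status"] != "healthy":
    print(f"Scaling {data['service']}: {data['replicas']} -> {data['replicas'] + 1} replicas")
```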
🚨 I used to overcomplicate Python in DevOps… until real CI/CD pipelines taught me something simple.

When I started working with automation, I thought I needed heavy frameworks and advanced Python structures to build "real DevOps scripts". But in production environments, I realized something very different:

👉 DevOps automation is not about complexity
👉 It's about using the right simple tools reliably

In most CI/CD and cloud automation work, I ended up using only a small set of Python standard library modules:

- os → environment variables, system interaction
- subprocess → running real commands (docker, kubectl, terraform)
- json → APIs, Kubernetes configs, pipeline responses
- logging → production-grade observability
- pathlib → clean file and artifact handling
- datetime → deployment tracking & audit logs
- sys → CLI control and pipeline exit handling
- shutil → backups and artifact management

Real example from DevOps work: instead of building complex tools, I often use Python scripts to
- automate deployment steps
- execute validation commands
- capture logs from CI/CD pipelines
- interact with cloud APIs

The biggest lesson I learned: 👉 in DevOps, simplicity always wins over complexity. Because in production, reliability matters more than clever code.

What Python modules do you find yourself using the most in DevOps automation?

#DevOps #Python #CloudComputing #CICD #Automation #SRE
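To make that concrete, here is a minimal sketch (the path, version string, and the echo stand-in for a real validation command are all illustrative) combining subprocess, json, logging, pathlib, and datetime into one audit-record step:

```python
import json
import logging
import subprocess
from datetime import datetime, timezone
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("deploy")

def record_deployment(artifact_dir, version):
    """Run a validation command, then write a small JSON audit record."""
    # Stand-in for a real check such as `kubectl get deploy` or `terraform validate`
    result = subprocess.run(["echo", version], capture_output=True, text=True)
    record = {
        "version": version,
        "validated": result.returncode == 0,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    out = Path(artifact_dir) / "deploy-record.json"
    out.write_text(json.dumps(record))
    log.info("Wrote audit record to %s", out)
    return record

record = record_deployment("/tmp", "1.0.3")
```

Nothing clever: one command, one dict, one file, one log line. That is usually enough.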
🚀 Python for DevOps – API Monitoring with requests

Practiced using Python's requests library to check API health, a common real-world DevOps task.

📂 Use Case:
In production, services depend on APIs. We need to continuously verify that APIs are reachable and healthy.

💻 Python Script:

import requests

url = "https://api.github.com"

try:
    res = requests.get(url, timeout=5)
    if res.status_code == 200:
        print("✅ GitHub API is UP")
    else:
        print("⚠️ GitHub API issue:", res.status_code)
except requests.exceptions.RequestException as e:
    print("🚨 API call failed:", e)

Output:

✅ GitHub API is UP

🔍 What this does:
- Sends an HTTP request to the API
- Uses a timeout to avoid hanging
- Checks the response status
- Handles failures gracefully

🔥 Why this matters in DevOps:
- Monitor service availability
- Validate endpoints in CI/CD pipelines
- Detect outages early
- Automate health checks

💡 Key Learning:
APIs are everywhere in DevOps, and Python makes it easy to integrate, monitor, and automate systems.

📈 Next Steps:
- Send alerts (Slack/Email) if an API fails
- Combine with log monitoring scripts
- Build a full monitoring + alerting system

#Python #DevOps #API #Automation #Monitoring #Scripting #Cloud #Learning #100DaysOfCode
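Extending the single check to a list of endpoints is straightforward. A sketch (the second URL is a hypothetical internal service, not a real endpoint):

```python
import requests

def check_endpoint(url, timeout=5):
    """Classify an endpoint as UP, DEGRADED, or DOWN based on one GET request."""
    try:
        res = requests.get(url, timeout=timeout)
        return "UP" if res.status_code == 200 else "DEGRADED"
    except requests.exceptions.RequestException:
        return "DOWN"

# One public endpoint plus a hypothetical internal one
endpoints = ["https://api.github.com", "http://internal-service.local/health"]

for url in endpoints:
    print(f"{check_endpoint(url, timeout=3)}: {url}")
```

Because every failure mode collapses into a string status, the loop never crashes mid-run, which matters when one dead endpoint should not stop the rest of the checks.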
🚀 Python for DevOps – Log Level Automation Project

Today I built a practical DevOps script using Python to analyze logs and separate them by log level.

📂 Problem:
Manually checking logs is time-consuming. Needed a way to automatically filter and organize logs.

💻 Solution (Python Script):

with open("app.log") as f, \
     open("error.log", "w") as err, \
     open("warning.log", "w") as warn, \
     open("info.log", "w") as info:
    for line in f:
        if "ERROR" in line:
            err.write(line)
        elif "WARNING" in line:
            warn.write(line)
        elif "INFO" in line:
            info.write(line)

Output:

ubuntu@satheesha:~/python$ python3 multiple-log_level.py
ubuntu@satheesha:~/python$ ls -ltr error.log warning.log info.log
-rw-r--r-- 1 ubuntu ubuntu 18 Apr 21 07:45 warning.log
-rw-r--r-- 1 ubuntu ubuntu 44 Apr 21 07:45 info.log
-rw-r--r-- 1 ubuntu ubuntu 17 Apr 21 07:45 error.log
ubuntu@satheesha:~/python$ cat app.log
INFO: Service started
WARNING: High CPU
INFO: Service started
ERROR: Disk full
ubuntu@satheesha:~/python$ cat error.log
ERROR: Disk full
ubuntu@satheesha:~/python$ cat warning.log
WARNING: High CPU
ubuntu@satheesha:~/python$ cat info.log
INFO: Service started
INFO: Service started

🔍 What this does:
Reads app.log and filters logs into:
- error.log
- warning.log
- info.log

📊 Outcome:
- Faster troubleshooting
- Organized logs for better monitoring
- Reduced manual effort

🔥 Real DevOps Use Cases:
- Production log monitoring
- CI/CD pipeline validation
- Incident detection and alerting

💡 Key Learning:
Python is a powerful tool for automation in DevOps, especially for handling logs and system data.

📈 Next Step:
Enhancing this script to:
- Count log levels
- Trigger alerts (email/Slack)
- Monitor logs in real-time (tail -f style)

#Python #DevOps #Automation #Scripting #Cloud #Learning #100DaysOfCode
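The "count log levels" next step needs only the standard library. A sketch over the same log format (the sample lines are hypothetical):

```python
from collections import Counter

def count_levels(lines):
    """Tally how many lines carry each log level."""
    levels = ("ERROR", "WARNING", "INFO")
    return Counter(level for line in lines for level in levels if level in line)

sample = [
    "INFO: Service started",
    "WARNING: High CPU",
    "INFO: Service started",
    "ERROR: Disk full",
]
counts = count_levels(sample)
print(counts)  # Counter({'INFO': 2, 'WARNING': 1, 'ERROR': 1})
```

In the real script you would pass `open("app.log")` instead of the sample list; the function works on any iterable of lines.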
🚀 Why YAML Validation with Python Matters in the DevOps World

YAML has become the backbone of modern DevOps workflows. From CI/CD pipelines to infrastructure-as-code and Kubernetes manifests, YAML keeps our configurations clean, readable, and structured.

But here's the catch: a single indentation mistake can break an entire deployment. That's where Python-based YAML validation becomes a game changer.

🔎 Why Validate YAML with Python?

✅ Early Error Detection
Catch syntax and structure issues before they reach production. No more failed deployments due to simple spacing or formatting mistakes.

✅ Automation Friendly
Python scripts can be easily integrated into CI/CD pipelines to automatically validate YAML files on every commit or pull request.

✅ Custom Validation Rules
Beyond syntax checks, Python lets you enforce business logic, such as required fields, allowed values, or environment-specific configurations.

✅ Improved Reliability
Validated YAML means more stable pipelines, fewer rollbacks, and higher confidence in automated deployments.

🛠️ Why This Matters in DevOps
In the DevOps ecosystem, YAML powers deployment pipelines, container orchestration, monitoring setups, and infrastructure definitions. Using Python as a validation layer ensures:
- Cleaner configuration management
- Safer deployments
- Faster debugging cycles
- Better collaboration across teams

💡 Takeaway:
Treat YAML validation as a first-class step in your DevOps workflow. A lightweight Python validator today can save hours of troubleshooting tomorrow.

GITHUB: https://lnkd.in/dTDADrzS

#DevOps #Python #YAML #Automation #InfrastructureAsCode #CICD #SRE #CloudComputing
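A minimal validator along these lines, assuming PyYAML (`pip install pyyaml`) and one hypothetical business rule (every config must define a `name` field):

```python
import yaml  # PyYAML

REQUIRED_FIELDS = ("name",)  # hypothetical rule; extend per environment

def validate_yaml(text):
    """Return a list of problems; an empty list means the YAML passed."""
    try:
        doc = yaml.safe_load(text)
    except yaml.YAMLError as e:
        return [f"syntax error: {e}"]
    if not isinstance(doc, dict):
        return ["top level must be a mapping"]
    return [f"missing required field: {f}" for f in REQUIRED_FIELDS if f not in doc]

print(validate_yaml("name: web-app\nreplicas: 3"))  # []
print(validate_yaml("replicas: 3"))                 # ['missing required field: name']
```

Wired into CI as a pre-merge step that runs against every changed YAML file, a check like this catches both indentation mistakes and missing fields before they ever reach a cluster.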
🔑 Key Differences Between Scripted and Declarative Pipelines in Jenkins

📝 Scripted Pipeline
>Definition: Written entirely in Groovy code.
>Structure: Free-form, meaning you can write whatever logic you want (loops, conditionals, functions).
>Flexibility: Very powerful, but requires programming knowledge.

Example:

node {
    stage('Build') {
        sh 'mvn clean install'
    }
    stage('Test') {
        sh 'mvn test'
    }
    stage('Deploy') {
        sh './deploy.sh'
    }
}

📦 Declarative Pipeline
>Definition: Uses a structured, predefined syntax.
>Structure: Must start with a pipeline {} block, and inside you define agent, stages, and steps.
>Ease of Use: Easier to read, maintain, and share across teams.

Example:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean install'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh'
            }
        }
    }
}

Key Difference:
- Scripted: Maximum flexibility, but harder to learn and maintain.
- Declarative: Easier, standardized, and recommended for most teams today.

⚡ Concept of Parallel Pipelines
In Jenkins, a parallel pipeline means running multiple tasks (stages) at the same time instead of one after another. This is useful when:
>You want to speed up builds (e.g., run tests on different environments simultaneously).
>You have independent tasks that don't depend on each other.
>You want to maximize resource usage.

Script:

pipeline {
    agent any
    stages {
        stage("Parallel-Test-Cases") {
            parallel {
                stage("TestCase1") {
                    steps {
                        sleep 10
                    }
                }
                stage("TestCase2") {
                    steps {
                        sleep 10
                    }
                }
                stage("TestCase3") {
                    steps {
                        sleep 10
                    }
                }
            }
        }
    }
}

Key Points:
1. pipeline {} → Defines the Declarative pipeline.
2. agent any → Jenkins can run this pipeline on any available agent.
3. stages {} → Groups all the stages in the pipeline.
4. stage("Parallel-Test-Cases") → A single stage that contains parallel branches.
5. parallel {} → Inside this block, multiple stages run at the same time.
6. sleep 10 → Each test case simulates a task that takes 10 seconds.

⚡ What Happens When You Run It
>Jenkins starts the Parallel-Test-Cases stage.
>Inside it, TestCase1, TestCase2, and TestCase3 all begin simultaneously.
>Each one sleeps for 10 seconds.
>Instead of taking 30 seconds sequentially, the pipeline finishes in about 10 seconds total (plus overhead).

📌 Why Use Parallel Pipelines?
>Speed: Run independent tasks at the same time (e.g., multiple test suites, builds for different OS versions).
>Efficiency: Better use of Jenkins agents/executors.
>Scalability: Ideal for large projects with many independent checks.
🐧 Shell Scripting Journey - Understanding Variables in Bash!

If you're diving into DevOps, scripting is not optional - it's essential! 🚀

There are two major scripting languages you'll encounter:
- 🐍 Python Scripting
- 💻 Bash Scripting

But before jumping into Bash, you need a solid understanding of Linux fundamentals. Once you're comfortable there, Bash scripting starts to make a lot more sense, and that's exactly where I am right now!

📦 What Are Variables in Bash?
Think of a variable as a labeled box where you store data temporarily. Instead of repeating the same value across your script, you define it once and reuse it everywhere!

✅ Basic variable usage:

name="david"
echo "My name is $name"

x=100
echo $x
echo $(($x - 10))  # Arithmetic with variables

⚠️ Common mistakes to avoid:
- name = "david" ❌ -> No spaces around the = sign!
- echo name ❌ -> Always use $ to access a variable's value!

📥 Taking User Input & Passing Arguments

🔹 Interactive input using read:

echo -n "Which service do you want to check? "
read service
sudo service $service status

🔹 Command line arguments - pass values directly when running the script:

./sample.sh pwd free

Inside the script:

command1=$1
command2=$2
echo "Command-1 : $command1"
echo "Command-2 : $command2"

💡 Used heavily in CI/CD pipelines (like Jenkins) to pass environments like Dev/Prod at runtime!

🔒 Variable Types - Know the Difference!
- 📌 Local Variables -> Live only inside the script. Gone when the script ends.
- 🌍 Environment Variables -> Accessible outside the script too. Use export to create them. Gone when the terminal closes.
- 📁 Shell Variables (in ~/.bashrc) -> Persist even after restart!

🚨 Security Reminder: NEVER hardcode passwords or API keys inside your scripts!
Use environment variables or dedicated secret managers like:
- HashiCorp Vault
- AWS Secrets Manager

This is industry best practice, and it matters! 🔐

If you're on the same journey, let's connect and grow together! 🤝

#Bash #ShellScripting #DevOps #Linux #Automation #CloudComputing #LearnInPublic #DevOpsJourney #variables #BashScripting
🚀 Python Basics for DevOps Engineers (Practical Examples)

Python is a powerful tool for automation in DevOps. Here's a quick guide to essential data types with real-world use cases 👇

🔹 1. String (str)
Used for text (server names, logs, messages):

server = "web-server-1"
print(server)

💡 DevOps Example:

log = "ERROR: Disk full"
if "ERROR" in log:
    print("Issue found")

🔹 2. Integer (int)
Used for numbers (CPU, memory, ports):

cpu = 75
print(cpu)

💡 DevOps Example:

cpu = 85
if cpu > 80:
    print("Alert: High CPU")

🔹 3. Boolean (True / False)
Used for status (running/stopped, success/failure):

is_running = True
if is_running:
    print("Service is running")

💡 DevOps Example:

deployment_success = False
if not deployment_success:
    print("Rollback required")

🔹 4. List (list)
Used to store multiple values (servers, services):

servers = ["web1", "web2", "web3"]
print(servers[0])

💡 DevOps Example:

services = ["nginx", "docker", "jenkins"]
for service in services:
    print(service)

🔹 5. Combine All (Real Example)

servers = ["web1", "web2"]
cpu_usage = 85
status = True

if cpu_usage > 80:
    print("Alert: scale up needed")

if status:
    for s in servers:
        print(f"{s} CPU: {cpu_usage}")

🔹 6. Quick Practice

services = ["web1", "web2"]
status = True
cpu_usage = 85

if status:
    for s in services:
        print(f"Server {s} CPU {cpu_usage}")

if cpu_usage > 80:
    print(f"Alert: CPU {cpu_usage}")

Output:

>>> services = ["web1", "web2"]
>>> status = True
>>> cpu_usage = 85
>>>
>>> if status:
...     for s in services:
...         print(f"Server {s} CPU {cpu_usage}")
...
Server web1 CPU 85
Server web2 CPU 85
>>> if cpu_usage > 80:
...     print(f"Alert: CPU {cpu_usage}")
...
Alert: CPU 85

💡 Key Takeaway:
Mastering these basics helps automate monitoring, alerts, and system management in real DevOps environments.

#DevOps #Python #Automation #Scripting #Learning #AWS #Kubernetes