🚀 #100DaysOfDevOps – Day 11

Today I worked on Jenkins installation and CI/CD automation, understanding how builds are triggered in real-time environments.

🔹 Step 1: Jenkins Installation (Server Setup)
Commands (on Amazon Linux/RHEL, the Jenkins package needs the official Jenkins repo added first):

    sudo yum install java-17-amazon-corretto -y
    sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
    sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
    sudo yum install jenkins -y
    sudo systemctl enable --now jenkins

✔ Scenario: Setting up a Jenkins server on an EC2/Linux machine for CI/CD pipelines

🔹 Step 2: Install Git on the Jenkins Server
✔ Scenario: Jenkins needs Git to pull code from repositories

🔹 Step 3: Connect Jenkins with the GitHub Repo
✔ Scenario: Linking the project repository to Jenkins for automated builds
Steps:
• Create a Jenkins job
• Add the GitHub repo URL
• Configure credentials (username + token)

🔹 Step 4: Build Automation (Real-Time Triggering)
🚀 When a developer pushes code → Jenkins automatically triggers a build
✔ Webhook (real-time trigger)
• Scenario: Immediate build after a code push (used in production CI/CD)
✔ Poll SCM (scheduled check)
• Scenario: Jenkins checks the repo every few minutes for changes
✔ Build Periodically
• Scenario: Nightly builds / scheduled jobs

🔹 Real-Time Workflow (End-to-End)
👨‍💻 Developer pushes code → 🔗 GitHub repo updated → ⚙️ Jenkins triggered (webhook) → 🏗️ Build starts → 🧪 Tests executed → 🚀 Deployment pipeline triggered

💡 Jenkins is the backbone of CI/CD pipelines, enabling automation, faster delivery, and reduced manual effort. From manual builds → to fully automated pipelines. 💪

#Jenkins #DevOps #CICD #Automation #CloudEngineering #100DaysChallenge #ContinuousLearning
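The two scheduled trigger types above can be sketched in a Jenkinsfile; a webhook needs no trigger block here, since it is configured on the GitHub side plus the job's "GitHub hook trigger" option. The schedules are illustrative:

```groovy
pipeline {
    agent any
    triggers {
        // Poll SCM: Jenkins checks the repo for changes roughly every 5 minutes
        pollSCM('H/5 * * * *')
        // Build Periodically: uncomment for a nightly build around 2 AM instead
        // cron('H 2 * * *')
    }
    stages {
        stage('Build') {
            steps { echo 'Triggered by an SCM change or a schedule' }
        }
    }
}
```

The `H` token spreads load by hashing the job name into a time slot, so many jobs on one controller don't all poll at the same minute.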
Jenkins Installation and CI/CD Automation on EC2/Linux Machine
Automating Package Installation with CI/CD on Jenkins

Today's task was all about automation and consistency: turning manual server tasks into repeatable CI/CD workflows.

I worked with Jenkins to create a job that automates package installation on a remote server within the Stratos Datacenter. Instead of manually installing packages every time, I configured a parameterized job that accepts a dynamic input, allowing flexibility in what gets installed.

I set up a string parameter to pass package names at runtime and configured the job to execute installation commands directly on the target server. After ensuring all required plugins were installed and Jenkins was properly restarted, I tested the job by running builds with different package inputs.

To validate the setup, I successfully installed packages like docker and nginx, confirming that the automation works reliably across multiple runs. This not only saved time but also ensured consistency in package management across environments.

This experience reinforced the importance of:
- Automating repetitive system administration tasks using CI/CD tools
- Building parameterized jobs for flexible and reusable workflows
- Ensuring reliability through repeated job execution and validation
- Managing dependencies and plugins effectively in Jenkins
- Bridging the gap between infrastructure and automation

Every step like this brings me closer to building fully automated, self-service infrastructure systems.

Still learning. Still building. Still pushing forward. 🚀

#DevOps #Jenkins #CICD #Automation #CloudComputing #ContinuousLearning #TechJourney #Day71
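A minimal sketch of the parameterized job described above, written as a declarative pipeline. The server address, SSH user, and credential handling are assumptions and would differ in a real environment:

```groovy
pipeline {
    agent any
    parameters {
        // String parameter: the package name is supplied at build time
        string(name: 'PACKAGE', defaultValue: 'nginx',
               description: 'Package to install on the remote server')
    }
    stages {
        stage('Install Package') {
            steps {
                // 'user@app-server' is a placeholder; a real job would use
                // SSH credentials stored in the Jenkins Credentials Manager
                sh "ssh user@app-server 'sudo yum install -y ${params.PACKAGE}'"
            }
        }
    }
}
```

Running a build then prompts for PACKAGE, so the same job installs docker, nginx, or anything else without edits.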
🔷 🔧 Jenkins CI/CD Flow Architecture

1️⃣ Developers Commit Code
Developers push code to a version control system like GitHub / GitLab / Bitbucket.

2️⃣ Source Code Management (SCM)
Jenkins continuously monitors the repository using webhooks or polling.

3️⃣ Jenkins Trigger
Once a new commit is detected:
• The Jenkins job is triggered automatically
• Pipeline execution starts

4️⃣ Build Stage
Jenkins performs:
• Code checkout
• Build using tools like Maven / Gradle / npm
• Dependency resolution

5️⃣ Unit Testing Stage
• Executes automated test cases
• Validates code quality before deployment

6️⃣ Code Quality Check
• Integrated with tools like SonarQube
• Ensures code standards and security checks

7️⃣ Artifact Generation
• Build artifacts (WAR / JAR / Docker image) are created
• Stored in a repository like Nexus / Artifactory

8️⃣ Deployment Stage
Jenkins deploys the application to:
• Dev environment
• QA / UAT environment
• Production environment (with an approval gate)
Common deployment targets: EC2 instances, Kubernetes clusters, Docker containers

9️⃣ Post-Deployment Verification
• Smoke testing
• Health checks
• Monitoring via CloudWatch / Prometheus

#DevOps #CICD #AWS #Linux #Jenkins #Docker #Kubernetes #CloudWatch #Automation #Monitoring #DevOpsEngineer
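The stages above can be sketched as one declarative pipeline. The tool commands, quality-gate step, and deploy script are placeholders for whatever a given project actually uses:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout')   { steps { checkout scm } }               // 2–3: SCM checkout on trigger
        stage('Build')      { steps { sh 'mvn -B clean package' } }  // 4: build + dependency resolution
        stage('Unit Tests') { steps { sh 'mvn test' } }              // 5: automated test cases
        stage('Quality') {
            steps { echo 'SonarQube analysis would run here' }       // 6: code quality check
        }
        stage('Artifact') {
            steps { archiveArtifacts artifacts: 'target/*.jar' }     // 7: store build output
        }
        stage('Deploy to Prod') {
            steps {
                input message: 'Deploy to production?'               // 8: manual approval gate
                sh './deploy.sh production'                          // hypothetical deploy script
            }
        }
    }
}
```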
🚀 Common Jenkins Pipeline Failures & How to Fix Them

While working on CI/CD pipelines in Jenkins, I encountered several common issues that can delay builds and deployments. Here are some key problems and their solutions 👇

🔴 1. Build Stuck in Queue
👉 Issue: Pipeline does not start and remains in the queue
✅ Solution:
• Check available executors under Jenkins nodes
• Verify that agents are online
• Increase the executor count if needed

🔴 2. Agent/Node Not Available
👉 Issue: "No nodes available" or agent offline
✅ Solution:
• Ensure the agent service is running
• Reconnect the node from the Jenkins UI
• Verify that labels match the pipeline configuration

🔴 3. Pipeline Script Errors
👉 Issue: Syntax errors in the Jenkinsfile
✅ Solution:
• Validate syntax using the Jenkins Pipeline Syntax tool
• Check for missing brackets and incorrect stages

🔴 4. Plugin Issues
👉 Issue: Pipeline fails after plugin updates
✅ Solution:
• Update all plugins to compatible versions
• Restart Jenkins after updates
• Check plugin dependencies

🔴 5. SCM (Git) Checkout Failure
👉 Issue: Unable to fetch code from the repository
✅ Solution:
• Verify the repository URL and credentials
• Check network connectivity
• Ensure proper access permissions

🔴 6. Permission Denied Errors
👉 Issue: Shell scripts fail due to permission issues
✅ Solution:
• Use chmod +x for executable scripts
• Verify user permissions on the server

🔴 7. Environment Variable Issues
👉 Issue: Variables not recognized in the pipeline
✅ Solution:
• Define variables properly in the Jenkinsfile
• Use the env.VARIABLE_NAME syntax

🔴 8. Disk Space Issues
👉 Issue: Builds fail due to low disk space
✅ Solution:
• Clean up old builds
• Configure build retention policies

🔴 9. Timeout Errors
👉 Issue: Pipeline fails due to long-running steps
✅ Solution:
• Increase timeout settings
• Optimize build steps

🔴 10. Dependency/Build Tool Failures
👉 Issue: Maven/Gradle/npm build failures
✅ Solution:
• Verify dependencies
• Clear the cache and rebuild
• Check tool versions

💡 Key Takeaway: Most Jenkins failures are due to configuration, environment, or resource issues. Regular monitoring and maintenance can prevent major pipeline disruptions.

#Jenkins #DevOps #CICD #Automation #Learning #Cloud #BuildFailures
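Several of the fixes above (timeouts, disk space, transient checkout failures, script permissions) can be baked into the Jenkinsfile itself. A sketch with illustrative values:

```groovy
pipeline {
    agent any
    options {
        timeout(time: 30, unit: 'MINUTES')              // 9: fail fast instead of hanging
        buildDiscarder(logRotator(numToKeepStr: '20'))  // 8: keep only recent builds to save disk
    }
    stages {
        stage('Build') {
            steps {
                retry(2) { checkout scm }               // 5: retry transient SCM/network failures
                sh 'chmod +x build.sh && ./build.sh'    // 6: avoid permission-denied on scripts
            }
        }
    }
}
```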
🚀 Automating CI/CD with GitLab + Jenkins — Build Smarter, Deliver Faster

Still building and deploying manually? You're leaving speed, quality, and scalability on the table. Here's how a modern automation pipeline works 👇

🔶 Step 1: Code Push (GitLab)
Developer pushes code → triggers the pipeline

🔗 Step 2: Webhook Trigger
GitLab sends a webhook → the Jenkins job starts automatically

⚙️ Step 3: Jenkins Pipeline
✔ Checkout code
✔ Build the project
✔ Run automated tests
✔ Perform code analysis

📦 Step 4: Artifact Management
Generate and store build artifacts

🚀 Step 5: Deployment
Deploy to dev / staging / production seamlessly

💡 Why this matters:
✔ Faster releases
✔ Early bug detection
✔ Consistent builds
✔ Reduced manual effort
✔ Scalable architecture

🔥 Pro Tip: Use a Jenkinsfile (Pipeline as Code) + secure webhooks + proper environment separation for a production-grade setup.

Automation isn't optional anymore — it's the backbone of modern software delivery.

How are you handling your CI/CD pipelines today? 🤔

#DevOps #CICD #Jenkins #GitLab #Automation #SoftwareEngineering #BuildPipeline #TechLeadership #DeveloperLife
🚀 Day 4 of My Jenkins Journey — Testing, Artifacts & Deployment

Today I moved closer to real-world CI/CD by adding testing and deployment steps to my pipeline. This is where Jenkins starts handling the full lifecycle: build → test → package → deploy 🔥

Here's what I explored 👇

🔹 Running Tests in the Pipeline
Automating test execution as part of the build process.

    stage('Test') {
        steps { echo 'Running tests...' }
    }

🔹 Build Tools Integration
Using tools like Maven/Gradle to build Java applications.

    stage('Build') {
        steps { sh 'mvn clean install' }
    }

🔹 Artifacts
Files generated after the build (e.g., JAR/WAR).

🔹 Archive Artifacts
Stores build output for future use.

    archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true

🔹 Deploy Step
Automating deployment to a server or container.

    stage('Deploy') {
        steps { echo 'Deploying application...' }
    }

🔹 Post Actions
Runs actions based on the build result.

    post {
        success { echo 'Build successful 🎉' }
        failure { echo 'Build failed ❌' }
    }

💡 Biggest takeaway: Jenkins pipelines don't just build code — they test, package, and deploy applications automatically. This is how real-world CI/CD pipelines work in production.

Almost there. Final step: advanced pipelines & optimization 🚀

#Jenkins #DevOps #CICD #Automation #LearningInPublic #100DaysOfCode
🚀 JENKINS SERIES – DAY 5: FROM FREESTYLE JOBS TO PIPELINES

Hello everyone! After a short break, I'm excited to continue my Jenkins series.

Till now, we have been working with Freestyle Jobs, where we simply select the options provided by the Jenkins UI. However, in real-time projects, we need more control, flexibility, and automation.

👉 That's where Jenkins Pipelines (Pipeline as Code) come into the picture. With pipelines, we can automate the complete CI/CD workflow:
✔️ Fetch code from GitHub
✔️ Build the application using Maven
✔️ Run tests
✔️ Store artifacts in Nexus (for versioning & rollback)
✔️ Deploy the application using Tomcat

📌 BASIC PIPELINE STRUCTURE:

    pipeline {
        agent any
        stages {
            stage('Code') {
                steps {
                    // Fetch code from GitHub
                }
            }
        }
    }

👉 In a similar way, we can add further stages like Build, Test, Artifact, and Deploy to complete the pipeline.

⚙️ Plugins also play a very important role (the heart of Jenkins) in extending its functionality. Here are a few plugins I used:
🔹 Pipeline Stage View – to visualize pipeline execution (success/failure)
🔹 Nexus Artifact Uploader – to store artifacts in Nexus
🔹 Deploy to Container – to deploy applications to Tomcat

💡 Pipelines are widely used in real-time DevOps projects as they provide better control, reusability, and scalability.

👉 For a detailed step-by-step explanation, please check out my Medium article below 👇
https://lnkd.in/dJgGnkMC

#DevOps #Jenkins #CICD #Automation #Docker #Kubernetes #LearningJourney
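For the artifact stage, the Nexus Artifact Uploader plugin mentioned above exposes a pipeline step roughly along these lines; the URL, repository name, coordinates, and credential ID are placeholders to adjust for your own Nexus instance:

```groovy
stage('Artifact') {
    steps {
        nexusArtifactUploader(
            nexusVersion: 'nexus3',
            protocol: 'http',
            nexusUrl: 'nexus.example.com:8081',   // placeholder Nexus address
            groupId: 'com.example',
            version: "1.0.${BUILD_NUMBER}",       // version each build for rollback
            repository: 'maven-releases',
            credentialsId: 'nexus-creds',         // stored in the Jenkins Credentials Manager
            artifacts: [[artifactId: 'myapp',
                         type: 'war',
                         file: 'target/myapp.war']]
        )
    }
}
```

Versioning with the build number is what makes the "versioning & rollback" promise concrete: every build lands in Nexus under a unique, retrievable coordinate.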
From Manual Deployments to Full Automation — Jenkins Journey 🥳

There's a kind of developer pain we don't talk about enough. Not the big architecture struggles, but the quiet frustration of doing the same manual steps over and over, knowing there must be a better way. That was me, not too long ago.

Where It All Started
My deployment routine was always the same: open terminal → SSH → pull code → build → watch logs → hope nothing breaks. Simple on paper. Draining in reality. Doing this multiple times a week (sometimes a day) turned it into a repetitive, error-prone ritual.

So I wrote a shell script to automate the steps. It felt like progress: one command, no missed steps. But the script still needed me. I still had to run it, remember it, and sit there during every deployment. Faster? Yes. Automatic? Not really. That's what finally pushed me toward Jenkins.

The First Time I Opened Jenkins
At first glance, Jenkins looked complicated: pipelines, jobs, stages everywhere. But once I understood it, everything clicked. Jenkins simply automated what I'd been doing manually: pull → build → test → deploy. The same flow, just automatic, consistent, and independent of me. Once that sank in, automation finally made sense. The entire thought process became a pipeline that triggered every time I pushed code — no forgotten steps, no shortcuts, no "bad days."

What Jenkins Actually Is 😌
Jenkins is an open-source automation server that watches your repo, detects changes, and runs your CI/CD pipeline. No reminders. No manual steps. No human dependency. Its strengths are flexibility and control: self-hosted, customizable, and supported by 1,800+ plugins. It works for everything from small personal projects to large enterprise systems. And yes, it's free.

Why Jenkins Still Matters 😌
With tools like GitHub Actions, GitLab CI, and CircleCI around, why does Jenkins still thrive? Because of control. Managed tools are great for simple workflows. But when you have custom infrastructure, hybrid environments, complex deployment logic, or strict data requirements, their limitations show. Jenkins adapts to your process, not the other way around. It's mature, stable, well-documented, and trusted.

#Jenkins #DevOps #CICD #Automation #RealTalk #DeveloperLife #BuildInPublic #SoftwareEngineering #BackendDevelopment
♻️ Jenkins Lessons & Best Practices — What Really Matters in CI/CD

After working with Jenkins across both VM-based and Docker environments, I've learned that success with CI/CD isn't about just "making pipelines run" — it's about making them reliable, secure, and scalable. Here are some key lessons and best practices I follow:

➡️ Lessons from Real Usage
💡 Jenkins works best when treated as code, not just a UI tool
💡 Environment consistency is critical (Docker agents help a lot)
💡 Debugging pipelines teaches you more than writing them
💡 Simplicity > over-engineering in pipeline design

➡️ Best Practices I Rely On
✔️ Use a Jenkinsfile (Pipeline as Code) instead of freestyle jobs
✔️ Store secrets in the Jenkins Credentials Manager (never hardcode)
✔️ Use Docker agents for consistent and reproducible builds
✔️ Keep plugins minimal and regularly updated
✔️ Clean workspaces to prevent disk-space issues
✔️ Implement proper access control & security hardening
✔️ Use structured logging and monitor builds proactively

➡️ Setup Insights
✔️ Ubuntu install → stable, more control (/var/lib/jenkins)
✔️ Docker setup → fast, portable (/var/jenkins_home)
Both have their place depending on the use case.

Jenkins has been a solid foundation for understanding how modern CI/CD systems should be designed and maintained at scale.

#Jenkins #DevOps #CI_CD #Automation #Docker #BestPractices #CloudComputing
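Two of the practices above (Docker agents for reproducible builds, workspace cleanup) look like this in a Jenkinsfile. The Maven image tag is just an example, and cleanWs() comes from the Workspace Cleanup plugin:

```groovy
pipeline {
    agent {
        // Every build runs in the same container image, so the toolchain
        // is identical regardless of which node picks up the job
        docker { image 'maven:3.9-eclipse-temurin-17' }
    }
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean verify' }
        }
    }
    post {
        always { cleanWs() }   // prevent disk-space issues from accumulating workspaces
    }
}
```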
The Death of "It Works on My Machine" 💀

If you are still manually moving code from Git to your servers, you aren't just slow — you're a bottleneck. Connecting Git to Jenkins isn't just a technical configuration; it's the transition from manual labor to automated engineering. Here is the logic of a high-performing CI/CD pipeline:

The Logic of Automation
• Continuous Integration (CI): Stop guessing whether your code works with the team's. Every push triggers an automated build and test. If it breaks, you find out in seconds, not during the release meeting.
• Continuous Delivery: Your software is always in a "ready-to-ship" state. No more last-minute packaging scrambles.
• Continuous Deployment (CD): The ultimate goal. Code passes tests → code goes live. Zero human intervention.

Why Most People Fail
They treat Jenkins as a "nice to have." It's not. In modern DevOps, if it isn't automated, it's broken. Automation eliminates the "human factor" — which is usually where the errors live.

The Standard Workflow
1. Commit: Developer pushes to Git.
2. Trigger: Webhooks alert Jenkins.
3. Build/Test: Jenkins builds and validates the change.
4. Feedback: Instant pass/fail notification.

Stop being a manual gatekeeper. Start being a DevOps engineer.

#DevOps #Jenkins #CICD #SoftwareEngineering #Automation #Git #CloudComputing
Question 53: Deployment frequency is high, but failures are also increasing. How would you stabilize CI/CD pipelines using tools like Jenkins and GitHub Actions?

1. To stabilize CI/CD pipelines while maintaining high deployment frequency, shift from simple automation to a focus on reliability and observability.
2. Isolate flaky tests: advanced teams use retry logic or separate "quarantine" jobs so that unstable tests don't block the main pipeline until they are fixed.
3. Build a single artifact once (e.g., a Docker image) and promote it through staging to production, eliminating "works on my machine" bugs caused by environment-specific rebuilds.
4. In Jenkins, prefer the Declarative Pipeline syntax for its built-in error handling and readability compared to complex scripted Groovy.
5. Centralize and version common pipeline logic across multiple projects using Jenkins Shared Libraries to reduce maintenance burden and ensure consistency.
6. In GitHub Actions, use the concurrency key to prevent multiple deployments to the same environment from overlapping, which can cause resource contention and failures.
7. Establish clear metrics to track Mean Time to Recovery (MTTR) and Change Failure Rate, integrating tools like Prometheus and Grafana with Jenkins, or using GitHub Actions' built-in insights.
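Points 5 and 6 can be sketched in Jenkins terms: a shared library centralizes the deploy logic, and disableConcurrentBuilds() plays the role of GitHub Actions' concurrency key. The library name and the deployToEnv step are hypothetical:

```groovy
@Library('my-shared-lib') _    // hypothetical shared library configured in Jenkins
pipeline {
    agent any
    options {
        disableConcurrentBuilds()   // serialize runs so deploys to one env never overlap
    }
    stages {
        stage('Deploy') {
            steps {
                // deployToEnv would be defined once in the library's vars/ directory
                // and reused by every project, keeping pipeline logic consistent
                deployToEnv('staging')
            }
        }
    }
}
```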