#100DaysOfDevOps - Day Thirty-Eight

Today I continued working on the CI setup for my LoggerBuddy project, and the focus was on something very important: lint testing. After getting Jenkins to successfully check out the code from GitHub, the next step was to validate the backend code before moving further into the pipeline.

What I worked on today:
✅ Confirmed the checkout stage was working
✅ Moved into the test stage of the CI flow
✅ Focused on the Python backend of the app
✅ Used Flake8 to perform lint testing
✅ Worked inside a Python virtual environment
✅ Installed dependencies from requirements.txt
✅ Ran checks against the backend code
✅ Investigated linting errors related to blank lines, line length, and whitespace formatting

Before automating a task in Jenkins, it helps to first run and understand that task manually. That way, if automation fails later, you can tell whether the problem is in the pipeline, the environment, or the application itself.

Today also reminded me that error messages are not there to scare you. Most of the time, they are actually pointing you toward the fix if you slow down enough to read them properly.

Big takeaway: Good CI is not just about running builds. It is also about enforcing code quality early.

Full Video Link: https://lnkd.in/dmCunHmt

#DevOps #100DaysOfDevOps #Jenkins #Python #Flake8 #Linting #CICD #ContinuousIntegration #Automation #PlatformEngineering #CloudEngineering #LearningInPublic #TechdotSam
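The blank-line, line-length, and whitespace errors above correspond to common Flake8 codes (E302, E501, W291). A hypothetical sketch of what the fix looks like in practice — the function names are my own, not from LoggerBuddy:

```python
# Typical Flake8 findings and what they mean:
#   E302  expected 2 blank lines between top-level defs
#   E501  line too long (default limit: 79 characters)
#   W291  trailing whitespace at end of line
# The code below is laid out so `flake8` exits cleanly:
# two blank lines between defs, short lines, no stray spaces.
import json


def load_config(path):
    """Read a JSON config file and return it as a dict."""
    with open(path) as fh:
        return json.load(fh)


def summarize(config):
    """Return a short, line-length-friendly summary string."""
    keys = ", ".join(sorted(config))
    return f"{len(config)} settings: {keys}"
```

Running `flake8 .` inside the virtual environment (after `pip install -r requirements.txt`) reports each violation with its file, line, and code, which is exactly what makes the errors readable rather than scary.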
More Relevant Posts
We’ve been seeing similar speedups across multiple repos, tested across different tech stacks with many open-source repos. But the bigger shift is behavioral: faster CI → faster feedback → fewer shortcuts. What’s your build time these days?
Cofounder & CEO @Monk CI | Ex-Head Of Engineering at EZAIX | Graduated at IIT R | Ex-SDE @DeutscheBank
We ran the same build. Same code. Same steps.

GitHub Actions: 2h 14m 44s | Monk CI: 1h 2m 38s

Monk CI was 2.1x faster than GitHub Actions on a real Rust + Clang + Python build, with just one line changed in workflow.yml.

But the speed isn't the point. When CI is slow, teams skip it. Skip tests. Skip lint. Skip security. Ship fast - until Month 6 hits.

That's when the auth bug surfaces quietly. That's when the 2 a.m. 500s start. That's when nobody knows which commit broke it.

Fast CI isn't a luxury. It's the only thing keeping velocity from becoming liability.

Early access is open - DM for access.

What's your current build time? Drop it below.

#DevOps #CICD #GitHubActions #DeveloperTools #BuildInPublic
IXP Environment Sync Checker – from “hope” to “know”

We manage feature flags across 5 environments — Prod, Dev, Test, Perf, Stage — for 20+ service teams. The painful part? Knowing which flags are live in Production but missing or misaligned in lower environments. It was a manual, error‑prone checklist that nobody enjoyed… and everyone feared breaking.

So I built the IXP Environment Sync Checker: a fully automated Python tool that runs every morning and tells each team exactly where they’re out of sync.

What it does:
- Scans Production vs Dev/Test/Perf/Stage using the IXP API
- Covers 20+ teams in a single run
- Generates a daily HTML email report via Windows Task Scheduler
- Shows a summary table at the top with green/red status per team, sorted by out‑of‑sync count
- Breaks down flags team by team, flag by flag, so owners know exactly what to fix

We went from zero visibility to daily, automated sync awareness. No more guessing, no more manual cross‑checking of dashboards.

Small tool. Big reliability win.

#Automation #Python #FeatureFlags #SRE #DevOps #Claude #InnerLoopEfficiency #EngineeringProductivity #GitHubCopilot
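The actual tool calls the IXP API, but the core comparison logic can be sketched independently. This is a minimal, hypothetical version (names and data shapes are my own assumptions): diff one team's Production flags against a lower environment, then build the worst-first summary ordering:

```python
def diff_flags(prod, lower):
    """Compare one team's flags: prod (source of truth) vs a lower env.

    Returns (missing, misaligned): flags absent from the lower env,
    and flags present there but with a different value.
    """
    missing = sorted(k for k in prod if k not in lower)
    misaligned = sorted(k for k in prod if k in lower and lower[k] != prod[k])
    return missing, misaligned


def summarize_teams(flags_by_team):
    """flags_by_team: {team: {"prod": {...}, "dev": {...}}}.

    Returns [(team, out_of_sync_count)] sorted worst-first,
    like the green/red summary table at the top of the report.
    """
    rows = []
    for team, envs in flags_by_team.items():
        missing, misaligned = diff_flags(envs["prod"], envs["dev"])
        rows.append((team, len(missing) + len(misaligned)))
    return sorted(rows, key=lambda r: r[1], reverse=True)
```

The same diff would run once per lower environment (Dev, Test, Perf, Stage), with the per-flag breakdown rendered into the HTML email.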
Q1. 🚀 Production Failure Due to Missing Dependencies

Today I worked on a DevOps scenario where a Python application deployed inside a Docker container failed to start in production due to missing libraries.

🔍 Issue: The container failed during startup with errors like ModuleNotFoundError and ImportError.

💡 Root Cause:
- Required Python dependencies were not installed inside the container
- requirements.txt was missing or not used in the Dockerfile
- Environment mismatch between the local setup and the container

🛠️ Solution:
- Added a proper requirements.txt with all dependencies
- Updated the Dockerfile to install them: `pip install -r requirements.txt`
- Ensured correct Dockerfile structure: copy requirements.txt first (for caching), install dependencies, then copy the application code
- Rebuilt the Docker image: `docker build -t my-app .`
- Ran the container with a restart policy: `docker run -d --restart=always my-app`

✅ Result:
- Container started successfully without errors
- Application ran smoothly in production
- No manual intervention required after deployment

📌 Key Learning:
- Always package dependencies inside the Docker image
- Never rely on the local environment
- Use requirements.txt and proper Docker layering for production-ready builds 🚀

#Linuxworld #devops
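A complementary guard for this scenario: fail fast at startup with a clear message instead of a mid-request ImportError. This is a hypothetical sketch (the helper names are my own) that parses pinned requirements and checks them against the environment using the standard-library `importlib.metadata`:

```python
import importlib.metadata


def parse_pins(requirements_text):
    """Parse `name==version` lines from requirements.txt into a dict.

    Comments and blank lines are skipped; unpinned lines raise,
    since an unpinned dependency is exactly what causes drift
    between local and container environments.
    """
    pins = {}
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        if "==" not in line:
            raise ValueError(f"unpinned requirement: {line!r}")
        name, version = line.split("==", 1)
        pins[name.strip()] = version.strip()
    return pins


def check_installed(pins):
    """Return the names from `pins` missing from this environment.

    Calling this at container startup turns a vague ImportError
    into an explicit 'these packages are absent' report.
    """
    missing = []
    for name in pins:
        try:
            importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            missing.append(name)
    return missing
```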
Open Intelligence Lab v0.5.0: CI/CD

Idempotent pipeline added. Getting ready to apply more IaC practices and DevSecOps to the project.

The software is now fully versionable in Git, repeatable (the sequential pipeline passes every time it runs on a machine via GitLab), and partially scalable. CI catches failing endpoints and integration errors before they reach any environment, and by reducing inconsistencies at the time of a code change, the pipeline helps with maintaining the software. CD, however, remains environment-dependent: we may need to define staging and production environments in the future. Rollback deployment was added for version-control flexibility and for recovering from bad releases.

Pipeline Article: https://lnkd.in/e5qZrFNu
CHANGELOG: https://lnkd.in/e9i-Wked
Source Code: https://lnkd.in/eb-_v5MX

#DevSecOps #IAC #ThreatIntelligence #OpenSource #Python #Docker
To all the Kubernetes admins - stop scrolling through 1,000 lines of redundant YAML!

We’ve all been there: you open a production values.yaml for a Helm chart, and it’s a massive wall of text. Out of 800 lines, only 5 are actually different from the defaults.

This "Value Fatigue" makes pull-request reviews harder, increases the risk of configuration drift, and turns maintenance into a game of "Spot the Difference."

To solve this, I wrote a lightweight Python utility: helm_yaml_delta.py

See the details on my company blog: https://lnkd.in/ejBvKi6P
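The details of helm_yaml_delta.py are on the blog; the core idea — keep only the keys whose values differ from the chart defaults — can be sketched as a recursive diff. This is my own minimal version, not the tool's actual code:

```python
def values_delta(custom, defaults):
    """Return only the parts of `custom` that differ from `defaults`.

    Nested dicts are compared recursively, so an 800-line
    values.yaml collapses to just the handful of overridden keys.
    """
    delta = {}
    for key, value in custom.items():
        if key not in defaults:
            # Key only exists in the override file: keep it whole.
            delta[key] = value
        elif isinstance(value, dict) and isinstance(defaults[key], dict):
            # Recurse into nested sections; keep only non-empty diffs.
            sub = values_delta(value, defaults[key])
            if sub:
                delta[key] = sub
        elif value != defaults[key]:
            delta[key] = value
    return delta
```

In practice you would load both files with a YAML parser (e.g. PyYAML's `yaml.safe_load`) and dump the delta back out, giving reviewers a 5-line file instead of an 800-line one.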
Stop switching between VS Code and Postman 47 times a day.

If you're building REST APIs, you know the drill:
→ Write code in VS Code
→ Switch to Postman to test
→ Get an error
→ Switch back to VS Code
→ Fix it
→ Switch to Postman again
→ Repeat until you lose your mind

There's a better way: the REST Client extension.

What makes it different?
✅ HTTP requests live IN your code editor
✅ .http files tracked in Git (no more lost Postman collections)
✅ Zero context switching
✅ Team collaboration via pull requests
✅ Works exactly like writing code

I wrote a complete guide covering:
- Installation & setup
- GET/POST/PATCH/DELETE patterns
- Multiple requests with ### separators
- Error handling (405, 400, etc.)
- REST Client vs Postman comparison

Read the full guide: https://lnkd.in/dEZaqXDx

#WebDevelopment #Django #Python #VSCode #API #RestAPI #BackendDevelopment
𝐘𝐨𝐮𝐫 𝐃𝐨𝐜𝐤𝐞𝐫 𝐁𝐮𝐢𝐥𝐝 𝐖𝐨𝐫𝐤𝐞𝐝 𝐘𝐞𝐬𝐭𝐞𝐫𝐝𝐚𝐲. 𝐓𝐨𝐝𝐚𝐲 𝐈𝐭 𝐅𝐚𝐢𝐥𝐬. [Docker Deep Dive — Day 3/5]

It is a common interview question: how would you approach this situation? You changed nothing. Same Dockerfile. Same code. But the build crashes.

The culprit? Your cargo manifest has no quantities. Every Docker build fetches dependencies fresh. If you write `RUN pip install requests`, Docker grabs whatever the latest version is that day. Today that version conflicts with your other packages. Your ship sinks at the dock.

A cargo ship loads hundreds of crates. The manifest says "50 boxes of bolts." Without a size specification, the port loads whatever bolt size arrives first. One wrong crate and the engine cannot be assembled. Version pinning is your exact specification: `requests==2.28.2` means only that bolt, that size, every single time.

```dockerfile
# Unpinned — dangerous
RUN pip install requests numpy opencv-python

# Pinned — safe, reproducible
COPY requirements.txt .
RUN pip install -r requirements.txt
```

```text
# requirements.txt
requests==2.28.2
numpy==1.24.0
opencv-python==4.7.0.72
```

𝐅𝐀𝐐:

Q: How do you handle a dependency conflict?
A: Check which package demands which version. Update the library that has flexibility, pin everything explicitly, rebuild.

Q: What about Linux package issues inside containers?
A: Base images like python:3.11-slim strip non-essential packages. If your app needs libpq or gcc, add a `RUN apt-get install` explicitly — otherwise the crate simply does not exist on board.

Q: Why redeploy after fixing a dependency?
A: The image is already baked wrong. Fix the Dockerfile, rebuild the image, redeploy the container. No patch reaches a running ship mid-voyage.

Tomorrow: Docker Swarm vs Kubernetes — why did the whole industry switch?

#DevOps #Docker #Dependencies #Containers #DevOpsInterview #CloudEngineering #DockerDeepDive
I didn’t build my first CI/CD pipeline in one go. I broke it… multiple times.

❌ Docker build failed
❌ YAML errors
❌ GitHub Actions failing again and again

At one point, nothing was working. But I kept debugging. Step by step:
→ Fixed Dockerfile issues
→ Understood the GitHub Actions workflow
→ Added testing using pytest
→ Rebuilt the pipeline

And finally…
✅ CI/CD pipeline running successfully
✅ Docker image built via GitHub Actions
✅ Pulled and ran the container locally

This wasn’t just about tools. It was about learning how real engineering works: fail → debug → fix → repeat → succeed 💡

Built using:
- Flask
- Pytest
- Docker
- GitHub Actions

This is my first step into DevOps — and definitely not the last.

#CI_CD #DevOps #Docker #GitHubActions #Python #Flask #LearningInPublic #SoftwareEngineering
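The "added testing using pytest" step can be sketched with a minimal test file. Pytest discovers any function named `test_*` and fails the CI job on the first broken assertion; the helper below stands in for a real Flask view (the names are hypothetical, not from this project):

```python
# test_health.py — a minimal pytest file. Running `pytest` in the
# GitHub Actions workflow collects test_* functions automatically.

def health_payload(version):
    """Build the JSON body a /health endpoint would return."""
    return {"status": "ok", "version": version}


def test_health_payload_reports_ok():
    body = health_payload("1.0.0")
    assert body["status"] == "ok"
    assert body["version"] == "1.0.0"
```

In the workflow, a step like `run: pytest` after `pip install -r requirements.txt` is enough: a non-zero pytest exit code fails the job, which blocks the Docker build step from running on broken code.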
I open-sourced 𝗱𝗲𝗰𝗸 — think docker-compose, but for your whole local dev stack, not just containers.

It started as a Makefile + shell scripts I built for onboarding a Go+React monorepo — postgres, keypair generation, DB setup, migrations, services with log tailing and cleanup. One command to start everything. Worked well, totally hardcoded to one project. I looked at the result and thought: this should be a real tool.

𝗱𝗲𝗰𝗸 𝘂𝗽 — one YAML config, one command:
• Deps with multi-strategy fallback (docker → brew → whatever)
• Idempotent bootstrap with interactive prompts for first-time setup
• Service dependencies with readiness checks (depends_on + ready)
• Env vars from strings, shell scripts, or structured files (JSON/YAML/TOML/INI)
• Colored log tailing, crash recovery with auto-restart
• deck.local.yaml for personal overrides — you use docker, I use brew

Also: `deck doctor` to diagnose your stack, `deck run api -- goose up` for one-off commands in a service's context, selective targeting (`deck up api webapp`), and stack-aware `deck init` that detects your project type.

Single Go binary, ~2500 LOC, 7 releases. Available via Homebrew.

Built with Claude Code handling implementation and Codex running execution and review passes on chunks of work. I focused on design and architecture. Honest take: the velocity is wild, but you still need to know what you're building and catch what it gets wrong.

Named after the cyberdeck from Shadowrun 🤖

https://lnkd.in/dRhUvhg3