𝐃𝐞𝐯𝐎𝐩𝐬 𝟏𝟎𝟏 𝐟𝐨𝐫 𝐏𝐲𝐭𝐡𝐨𝐧𝐢𝐬𝐭𝐚𝐬 🐍 | 𝐖𝐞𝐞𝐤 𝟐: 𝐒𝐭𝐨𝐩 𝐏𝐮𝐬𝐡𝐢𝐧𝐠 "𝐁𝐫𝐨𝐤𝐞𝐧" 𝐂𝐨𝐝𝐞

Last week, we talked about CI pipelines. But why wait for the pipeline to fail when you can catch errors before you even hit git push?

💡 𝐓𝐡𝐞 𝐅𝐚𝐜𝐭: Pre-commit hooks run automated checks locally during your git commit. If the checks fail, the commit is blocked.

Why this is a game-changer for your workflow:

✅ No More "Fix Linting" Commits
Tired of seeing "fix linting" or "format code" in your git history? Tools like ruff check and ruff format run automatically on every commit, so your repo stays clean.

✅ Beyond Just Python
It's not just for .py files. You can automatically format YAML configs, check JSON syntax, or block large data files before they accidentally enter your history.

✅ Instant Feedback Loop
Instead of waiting 3 minutes for a CI runner, you get feedback in 0.5 seconds. It forces you to fix issues while the code is still fresh in your mind.

🛠️ 𝐏𝐫𝐨 𝐓𝐢𝐩: Use the pre-commit framework. A simple .pre-commit-config.yaml is all you need to orchestrate Ruff, MyPy, or even secret detection (to prevent committing API keys).

𝐓𝐡𝐞 𝐁𝐨𝐭𝐭𝐨𝐦 𝐋𝐢𝐧𝐞: Shifting your quality checks "left" (closer to the developer) saves time, reduces CI costs, and makes you a more disciplined engineer.

#Python #DevOps #Git #PreCommit #Ruff #SoftwareEngineering #CleanCode
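As a sketch of the Pro Tip above, a minimal .pre-commit-config.yaml wiring Ruff together with a few hygiene hooks could look like this (the `rev` values are illustrative; pin them to the current releases):

```yaml
# .pre-commit-config.yaml -- minimal sketch; pin each `rev` to a real release tag
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.8.0            # illustrative version
    hooks:
      - id: ruff           # lint  (ruff check)
      - id: ruff-format    # format (ruff format)
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0            # illustrative version
    hooks:
      - id: check-yaml                # validate YAML configs
      - id: check-json                # validate JSON syntax
      - id: check-added-large-files   # block large data files
      - id: detect-private-key        # stop committed secrets
```

Run `pre-commit install` once per clone; after that, every `git commit` runs the hooks and blocks the commit if any of them fail.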
Gaweng Tan’s Post
More Relevant Posts
🚀 Understanding GitLab Runner & Artifacts in CI/CD (The Simple Way)

Many people use CI/CD daily. But not everyone fully understands what's happening behind the scenes. Let's break it down 👇

🏃‍♂️ How GitLab Runner Works

When you push code to GitLab:
1️⃣ A pipeline gets triggered
2️⃣ A job is assigned to a GitLab Runner
3️⃣ The runner spins up an environment (for example, a Python image)
4️⃣ Your script runs inside that isolated environment
5️⃣ Once the job finishes… the environment is destroyed

That last part is important. Destroyed. Gone. Clean slate.

So anything your Python script generated - reports, logs, JSON files, build outputs - disappears unless you explicitly save it.

That's where 📦 Artifacts come in.

📦 How GitLab Artifacts Help

Artifacts allow you to:
✅ Save files generated during a job
✅ Pass outputs from one stage to another
✅ Download reports from the GitLab UI
✅ Keep logs for debugging
✅ Maintain traceability in deployments

Instead of losing your job outputs when the runner environment shuts down, artifacts preserve them for a defined duration.

Think of it like this:
Runner = Executes your task ⚙️
Artifacts = Preserve the results 📦

Without artifacts → your pipeline outputs are temporary.
With artifacts → your pipeline becomes structured and reliable.

💡 Common Python Use Cases:
• Saving test coverage reports
• Storing automation results
• Passing build packages to the deployment stage
• Keeping generated configuration files

…and many more.

CI/CD isn't just about automation speed. It's about managing outputs intelligently.

#GitLab #CICD #DevOps #Python #Automation #Cloud #SoftwareEngineering
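A minimal sketch of what this looks like in .gitlab-ci.yml (the job name, file paths, and retention period are illustrative):

```yaml
# .gitlab-ci.yml -- sketch of saving job outputs as artifacts
test:
  stage: test
  image: python:3.12-slim
  script:
    - pip install pytest pytest-cov
    - pytest --junitxml=report.xml --cov --cov-report=html
  artifacts:
    when: always          # keep reports even if tests fail
    expire_in: 1 week     # the "defined duration"
    paths:
      - htmlcov/          # downloadable coverage report
    reports:
      junit: report.xml   # surfaced in the GitLab UI and MR widget
```

Without the `artifacts:` block, report.xml and htmlcov/ would vanish with the runner environment; with it, they survive for a week and are downloadable from the pipeline page.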
The Technical Foundation of CI: Beyond Branching Strategies

Following my previous post on CI branching models, it is essential to address the technical infrastructure required to sustain these workflows. A branching strategy like Trunk-Based Development or GitHub Flow only succeeds if supported by a robust automated pipeline.

To achieve true Continuous Integration, your pipeline must excel in three critical areas:

1. Automated Verification (The Safety Net): Integration is meaningless if you are integrating broken code. A mature CI pipeline triggers a suite of unit, integration, and linting checks the moment a commit is pushed. The goal is to "fail fast," detecting regressions in minutes rather than during manual QA.

2. Environment Parity (The "It Works on My Machine" Cure): CI must run in an environment that mirrors production. This is where containerization (Docker) becomes indispensable. By packaging the application with its dependencies, you ensure that the build stage produces a consistent artifact that behaves identically in staging and production.

3. Fast Feedback Loops: The value of CI diminishes as build times increase. High-performing teams optimize their pipelines using parallelization and caching (e.g., GitHub Actions cache or Docker layer caching). A developer should know whether their integration succeeded within 5–10 minutes of pushing code.

The Synthesis: While your branching strategy defines the process, your pipeline defines the reliability. You cannot move to a high-velocity model like Trunk-Based Development without first investing in automated testing and containerization.

#DevOps #SoftwareEngineering #Coding #CI #ContinuousIntegration #TechCommunity #Python #Django
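Point 3 in practice: a hedged GitHub Actions job fragment showing built-in pip caching and parallel test execution (this assumes pytest-xdist is listed in requirements.txt; file names are illustrative):

```yaml
# Fragment of a GitHub Actions job -- caching + parallelization sketch
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
          cache: pip              # built-in pip cache, keyed on requirements files
      - run: pip install -r requirements.txt
      - run: pytest -n auto       # parallel workers (requires pytest-xdist)
```

The cache skips re-downloading dependencies on every run, and `-n auto` spreads tests across all available cores, which is exactly where most pipelines reclaim those 5–10 minutes.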
Stop shipping bloated 1 GB Docker images that take an eternity to build and deploy.

Get the Docker Hub Toolkit for Claude Code here: 👉 https://lnkd.in/dgk55CKx
Give it a star ⭐ and let us know if it speeds up your Docker Hub workflow!

Containerizing Python applications is often a trade-off between speed, size, and security. Most developers settle for "good enough" Dockerfiles that slow down CI/CD pipelines and increase the attack surface. We've built the Docker Hub Toolkit, a production-grade skill for Claude Code designed to automate the heavy lifting of end-to-end Docker deployment, with industry best practices baked in.

What makes this toolkit a game-changer?

• Extreme Image Optimization: Automatically generates 4-stage Dockerfiles that shrink Python images from ~1 GB to ~150 MB using python:3.12-slim.
• Lightning-Fast Rebuilds: Implements BuildKit cache mounts (--mount=type=cache) so your dependency installation doesn't restart from scratch every time you change a line of code.
• The 10-Point Quality Gate: Includes an automated validation script that checks for non-root users, secret leaks, and layer ordering before you ever push to the hub.
• CI/CD on Autopilot: Generates complete GitHub Actions workflows for Docker Hub, featuring Docker Scout vulnerability scanning and multi-platform (amd64/arm64) support.

Production-Ready by Default

This isn't just about writing a file; it's about a standardized engineering workflow:
1. Generate optimized multi-stage builds.
2. Validate against security best practices.
3. Build & tag using semantic versioning and Git SHAs.
4. Secure via Docker Scout scanning.
5. Deploy with integrated GitHub Actions templates.

Why use a Claude Skill for this? Instead of manually managing .dockerignore files or troubleshooting cross-platform buildx errors, you can now delegate the entire containerization strategy to Claude. It ensures consistency across your team and eliminates the "it worked on my machine" Docker headache.

Ready to optimize your deployment? Check out the documentation and scripts in the repository: 👇 https://lnkd.in/dgk55CKx

#Docker #Python #DevOps #ClaudeCode #CloudNative #GitHubActions #SoftwareEngineering #Containerization
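For reference, the BuildKit cache-mount idea mentioned above looks roughly like this in a slim multi-stage Dockerfile. This is a generic hand-written sketch, not the toolkit's actual output; stage names and paths are illustrative:

```dockerfile
# syntax=docker/dockerfile:1
# Sketch: slim multi-stage build with a BuildKit pip cache mount
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
# Cache pip downloads across builds so rebuilds don't start from scratch
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install --prefix=/install -r requirements.txt

FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
# Run as a non-root user (one of the common image quality checks)
RUN useradd --create-home appuser
USER appuser
CMD ["python", "app.py"]
```

The final image contains only the runtime stage, which is where most of the size reduction versus a single-stage build comes from.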
Have you ever wondered why 𝑮𝒊𝒕 is so smart at detecting changes, even with the smallest tweaks or when you move an entire block of code around?

The secret is the 𝑴𝒚𝒆𝒓𝒔 𝑫𝒊𝒇𝒇 𝑨𝒍𝒈𝒐𝒓𝒊𝒕𝒉𝒎, which solved the 𝐍𝐨𝐢𝐬𝐲 𝐃𝐢𝐟𝐟 problem found in older methods. Simple algorithms often see a code refactor as a total "delete and replace," resulting in a messy diff that's almost impossible to read.

The core of Myers' intelligence is the 𝐋𝐨𝐧𝐠𝐞𝐬𝐭 𝐂𝐨𝐦𝐦𝐨𝐧 𝐒𝐮𝐛𝐬𝐞𝐪𝐮𝐞𝐧𝐜𝐞 (LCS). Basically, the algorithm hunts for the longest "thread" of code that remained unchanged between the old and new versions. It then builds the diff around that thread to give you the shortest edit script possible.

This approach is exactly why your git diff stays logical and clean, making 𝐏𝐮𝐥𝐥 𝐑𝐞𝐪𝐮𝐞𝐬𝐭𝐬 much easier to review, regardless of how much refactoring you've done.

Read the full blog: 🔗 https://lnkd.in/ez2atqRm

#Git #Algorithms #SoftwareEngineering #Refactoring #LCS
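You can watch the same "anchor on the longest unchanged run" idea from the Python standard library. difflib uses a related longest-match heuristic (Ratcliff/Obershelp, not Myers itself), but the output shape is the same: an edit script built around the code that didn't change. The example lines here are made up for illustration:

```python
import difflib

OLD = ["def greet():", "    print('hello')", "", "greet()"]
NEW = ["def greet(name):", "    print(f'hello {name}')", "", "greet('world')"]

def opcodes(a, b):
    """Edit script built around the longest unchanged runs of lines."""
    return difflib.SequenceMatcher(a=a, b=b).get_opcodes()

# 'equal' hunks are the unchanged "thread"; 'replace' hunks are the edits
for tag, i1, i2, j1, j2 in opcodes(OLD, NEW):
    print(f"{tag:8} old[{i1}:{i2}] -> new[{j1}:{j2}]")
```

Even in this tiny example the blank line survives as an `equal` hunk, and the diff is expressed as two small replacements around it instead of a full delete-and-rewrite.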
🚀 From Manual to Magic: CI in Automation with GitHub Actions!

Ever wondered what happens behind the scenes every time you push code to GitHub? Let me show you how I turned my Playwright Python automation project into a self-running CI machine with GitHub Actions. 🛠️

✅ What I did:
• Created workflows that trigger automatically on every push & pull request
• Ran the full test suite with a single pytest command
• Generated Allure reports automatically for every run
• Added notifications for test failures (because missing errors is NOT an option!)

💡 Why it matters:
Continuous Integration isn't just a buzzword. For automation frameworks, it means:
✅ No more forgotten tests before release
✅ Immediate feedback on code quality
✅ Faster, safer, more reliable deployments

🔥 Example: Imagine you push a new login test for your app. Instantly:
1. GitHub Actions spins up your workflow
2. Tests run in a clean environment
3. An Allure report is generated automatically
4. You get notified if anything breaks

All while you sip your morning coffee ☕

This is the future of automation testing: code pushes trigger tests, tests trigger reports, and reports trigger action. No manual steps, no excuses!

#AutomationTesting #GitHubActions #CI #Playwright #Python #AllureReports #ContinuousIntegration #QA #SoftwareTesting #DevOps
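A minimal sketch of such a workflow (the file name and paths are illustrative, and the notification step is omitted):

```yaml
# .github/workflows/tests.yml -- Playwright + pytest + Allure sketch
name: ui-tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest playwright allure-pytest
      - run: playwright install --with-deps chromium   # browser + OS deps
      - run: pytest --alluredir=allure-results
      - uses: actions/upload-artifact@v4
        if: always()               # keep results even when tests fail
        with:
          name: allure-results
          path: allure-results
```

The uploaded allure-results artifact can then be rendered into the HTML report locally or by a downstream publishing job.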
🚀 DevOps Journey Day 41: Built & Ran My First GitLab CI/CD Pipeline – 41 Days Unbroken! 🔥

Day 41 is official – 41 consecutive days of learning, building, and sharing publicly. Consistency is paying off big time!

Today I created my first real GitLab CI/CD pipeline on gitlab.com. Steps I followed:
1. Created a new project on gitlab.com
2. Pushed a simple app (Java/Spring Boot or Python/Node) to the repo
3. Added .gitlab-ci.yml in the root
4. Committed & pushed → the pipeline auto-triggered instantly
5. Watched it run in the Pipelines tab (very clean UI!)

#DevOps #GitLab #GitLabCI #CI_CD #PipelineAsCode #DevOpsJourney #LearningInPublic #Day41
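For anyone following along, the .gitlab-ci.yml from step 3 can be as small as this "hello pipeline" (job and stage names are illustrative):

```yaml
# .gitlab-ci.yml -- minimal two-stage starter pipeline
stages:
  - build
  - test

build-job:
  stage: build
  script:
    - echo "Compiling the app..."

test-job:
  stage: test
  script:
    - echo "Running tests..."
```

Commit and push this file to the repo root and GitLab triggers the pipeline automatically; each job appears in the Pipelines tab in stage order.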
Here’s how I automated a full Docker build-and-push pipeline using GitHub Actions.

🔹 Build a simple Python app
🔹 Containerize it with Docker
🔹 Push the image to Docker Hub
🔹 Automate the entire flow using GitHub Actions

No manual steps. Just clean automation.

𝗪𝗵𝗮𝘁 𝗜 𝗕𝘂𝗶𝗹𝘁
🔹 𝗔𝗽𝗽: A basic Flask app
🔹 𝗗𝗼𝗰𝗸𝗲𝗿𝗳𝗶𝗹𝗲: Python 3.9 slim image, installs Flask, runs the app
🔹 𝗪𝗼𝗿𝗸𝗳𝗹𝗼𝘄: GitHub Actions pipeline triggered on every push
🔹 𝗥𝗲𝘀𝘂𝗹𝘁: Image pushed to Docker Hub

𝗪𝗵𝗮𝘁 𝘁𝗵𝗲 𝗖𝗜/𝗖𝗗 𝗪𝗼𝗿𝗸𝗳𝗹𝗼𝘄 𝗗𝗼𝗲𝘀
🔹 Checks out the repo.
🔹 Builds the Docker image.
🔹 Authenticates using secrets (`DOCKERHUB_USERNAME`, `DOCKERHUB_TOKEN`).
🔹 Pushes the image to Docker Hub.
🔹 Echoes the image pull URL for verification.

All defined in `.github/workflows/ci.yaml`.

𝗪𝗵𝘆 𝗧𝗵𝗶𝘀 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲 𝗠𝗮𝘁𝘁𝗲𝗿𝘀
This was the moment CI/CD stopped being theory and became something I could 𝘣𝘶𝘪𝘭𝘥. It's the foundation for real-world DevOps: 𝗮𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝗯𝘂𝗶𝗹𝗱𝘀, 𝘀𝗲𝗰𝘂𝗿𝗲 𝘀𝗲𝗰𝗿𝗲𝘁𝘀, 𝗿𝗲𝗽𝗿𝗼𝗱𝘂𝗰𝗶𝗯𝗹𝗲 𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝗲𝗿𝘀.

#ci #cicd #devops #githubactions #CoderCo
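A hedged sketch of what such a workflow could look like using the official Docker actions (the image name is illustrative, and the original repo's exact steps may differ):

```yaml
# .github/workflows/ci.yaml -- build-and-push sketch
name: build-and-push
on: [push]

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: myuser/flask-app:${{ github.sha }}   # Git SHA as the tag
      - run: echo "docker pull myuser/flask-app:${{ github.sha }}"
```

Storing the credentials as repository secrets keeps the token out of the workflow file and out of the build logs.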
Building Self-Validating CI Pipelines with Python & GitHub Actions

For years, our QA workflow depended on people triggering pipelines. Run this job. Re-run that one. Manually check reports. That was the bottleneck.

So we flipped the model. Now, pipelines validate themselves. Using Python and GitHub Actions, we built guardrails directly into CI:

- Tests decide whether they should run based on code changes
- Failures auto-classify (test issue vs. infra vs. product defect)
- Quality gates stop bad builds before they reach QA

Example: If a backend-only change is detected, UI tests are skipped automatically. If flaky patterns are detected, the pipeline flags instability instead of failing the build blindly.

The outcome:
- Faster feedback without manual intervention
- Fewer false failures reaching QA
- QA focused on analysis, not babysitting pipelines

The shift wasn't adding more checks. It was teaching the pipeline how to reason. That's when CI stops being a tool and becomes a system.

#CICD #DevOps #QualityEngineering #TestAutomation #GitHubActions #Python #SDET
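The "tests decide whether they should run" guardrail can be sketched in a few lines of Python. The directory prefixes and suite labels here are hypothetical, not from any specific repo:

```python
# Hypothetical guardrail: map changed file paths to the test suites
# that actually need to run for this commit.
UI_PREFIXES = ("frontend/", "ui/")
BACKEND_PREFIXES = ("api/", "services/")

def suites_for(changed_files):
    """Return the set of suites to run for a list of changed paths."""
    suites = set()
    for path in changed_files:
        if path.startswith(UI_PREFIXES):
            suites.add("ui")
        elif path.startswith(BACKEND_PREFIXES):
            suites.add("backend")
        else:
            # Unknown area touched: be conservative and run everything.
            return {"ui", "backend"}
    return suites

# Backend-only change -> UI tests are skipped automatically
print(suites_for(["api/models.py"]))
```

In CI, a step would feed this the diff (e.g., the output of `git diff --name-only`) and export the result so later jobs can skip themselves.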
I used to be a "Manual Explainer." 🗣️ Happy Ansible Tuesday!

For a long time, I felt like I was using Ansible "the wrong way." While everyone else was struggling with massive, nested JSON blocks in the uri module, I started writing my own custom Python modules for our internal APIs. I didn't care if it wasn't the "standard" path; it allowed for true idempotency and much cleaner playbooks than a series of hacked-together curl requests.

But my "cleaner code" created a new problem: I became a human encyclopedia. Because I skipped the docstrings, I was the only person who knew how to use my modules. My Slack was a constant stream of "What's the argument for this?" and "What does this return?"

I realized that writing the logic is only half the battle. If your code doesn't explain itself to the rest of the team, you haven't actually automated anything; you've just moved the work from the terminal to your inbox.

The "Wrong Way" Benefits:
🛠️ Custom Logic: Native modules for niche APIs don't exist; build your own.
✨ Cleaner Playbooks: Replace 50 lines of uri tasks with 5 lines of custom module code.
🛡️ Real Idempotency: You control exactly how the module checks state before making changes.

I stopped worrying about doing it "right" and started focusing on making it documented.

❓ Question of the Day: See the PDF below. It focuses on the specific CLI command used to view those module examples, something that only works if you've written your docstrings correctly! 👇

I've dropped a breakdown on how to use Antsibull to automate your custom module documentation in the comments!

#Ansible #DevOps #Automation #SoftwareEngineering #Python #DamnitRay #QOTD
📘 Day 1 OOPs Concept – Class & Object

Welcome to Day 1 of my journey to level up from Manual Testing to Automation Testing 🚀

Today, we're covering one of the core fundamentals of Java — Class and Object — which form the foundation of OOPs and test automation frameworks.

🔹 Understanding what a class is
🔹 How objects are created from a class
🔹 Why this concept is critical for Java-based automation

Every automation journey starts with strong basics 💡

🔹 Class
A class is a blueprint or template that defines the attributes (variables) and behaviors (methods) of an object.
• It is a logical entity
• A single class can have n number of objects
• It does not occupy memory until an object is created

Example: Animal is a class
Attributes: name, color, age
Behaviors: eat(), sleep()

🔹 Object
An object is an instance of a class.
• It is a real-world entity
• It occupies memory
• Multiple objects can be created from the same class

🧑‍💻 Simple Java Example

class Animal {
    String name;
    int age;

    void eat() {
        System.out.println("Animal is eating");
    }
}

public class Main {
    public static void main(String[] args) {
        // Creating an object of the Animal class
        Animal dog = new Animal();
        dog.name = "Dog";
        dog.age = 5;
        dog.eat();
    }
}

📝 Explanation:
👉 Animal is the class
👉 dog is an object of the Animal class
👉 Using the object, we access the variables and methods of the class

Automation frameworks (like Page Object Model) are built using classes and objects. Each web page is represented as a class, and its elements and actions are defined using methods.

#Java #OOPsConcept #ClassAndObject #ManualToAutomation #TestAutomation #LearningJourney