🚀 Day 28/30 FastAPI: Automating Quality with GitHub Actions

Manual testing is a bottleneck. Today, I moved to the "auto-pilot" phase of development by setting up a CI/CD pipeline.

✅ What I Learned Today:

GitHub Actions Workflows: I learned how to write YAML files that tell GitHub exactly how to build and test my application.

uv in the Cloud: I integrated the setup-uv action, which makes my CI pipeline dramatically faster than traditional pip-based setups. ⚡

Automated Testing: Every pull request is now checked automatically. If a test fails, I can't merge the code. This is how you maintain high-quality software at scale.

Linting & Formatting: I added a Ruff check to the pipeline so my code style stays consistent and professional automatically.

🛠️ The Tech Progress: My workflow is now "push and forget." GitHub acts as my 24/7 quality assurance team. We are only 2 days away from the finish line!

A more detailed technical article is on my blog, Beyond400: https://lnkd.in/eiUUnydC

#FastAPI #GitHubActions #CICD #Python #30DaysOfCode #DevOps #Automation #uv #BackendDevelopment #WebDevelopment #LearningInPublic
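A pipeline like the one described can be sketched roughly as follows. This is a minimal example, not the author's actual workflow file: it assumes a uv-managed project with a lockfile, pytest, and Ruff configured; action versions and project layout are assumptions.

```yaml
name: CI

on:
  pull_request:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Install uv and enable its cache for fast dependency resolution
      - uses: astral-sh/setup-uv@v5
        with:
          enable-cache: true

      # Install the project's locked dependencies
      - run: uv sync

      # Lint and format checks with Ruff
      - run: uv run ruff check .
      - run: uv run ruff format --check .

      # Run the test suite; a failure here blocks the merge
      - run: uv run pytest
```

With branch protection requiring this job to pass, a red test really does make the pull request unmergeable.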
More Relevant Posts
Day 31/100 – #100DaysOfLeetCode 🚀

🧩 Problem: LeetCode 78 – Subsets (Medium)

🧠 Approach: For each element, choose whether to include or exclude it in the current subset, and explore both possibilities recursively.

💻 Solution:

from typing import List

class Solution:
    def subsetRecur(self, i, arr, res, subset):
        if i == len(arr):
            res.append(list(subset))
            return
        # Include arr[i]
        subset.append(arr[i])
        self.subsetRecur(i + 1, arr, res, subset)
        # Exclude arr[i] (undo the choice)
        subset.pop()
        self.subsetRecur(i + 1, arr, res, subset)

    def subsets(self, nums: List[int]) -> List[List[int]]:
        subset = []
        res = []
        self.subsetRecur(0, nums, res, subset)
        return res

⏱ Time | Space: O(2^n) | O(n) auxiliary (the recursion depth; the output itself holds 2^n subsets)

📌 Key Takeaway: Backtracking works by making a choice, exploring it, and undoing it, which makes it ideal for problems involving subsets, combinations, and permutations.

#leetcode #dsa #development #problemSolving #CodingChallenge
Building small tools is often the fastest way to understand how software behaves under constraints. Recently, I've been working on CHECK, a CLI-based task management tool developed as a hands-on exploration of structured state handling, persistence, and testable application logic.

The project emphasizes fundamentals that tend to emerge as complexity increases: modeling real-world entities as code, validating user input consistently, separating concerns across modules, and ensuring expected behavior through automated tests.

CHECK currently supports features such as JSON-based persistence, regex-driven search, and defensive handling of invalid usage scenarios, all intentionally implemented in a simple, inspectable codebase.

This project has been a valuable exercise in thinking about software not just as "code that works," but as a system that remains understandable, testable, and maintainable as it evolves. I'll continue iterating on it as I deepen my software engineering skill set and explore different design trade-offs.

#softwareengineering #softwaredevelopment #python #cli #taskmanagement #todolist #productivitytools #systemsengineering #cleanarchitecture #testing #learningbybuilding
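A sketch of the kind of structure the post describes: JSON persistence plus regex-driven search plus defensive input handling. This is a hypothetical minimal version; CHECK's actual module layout, names, and API are not shown in the post.

```python
import json
import re
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class Task:
    id: int
    title: str
    done: bool = False

class TaskStore:
    """Tiny task store with JSON persistence and regex-driven search."""

    def __init__(self, path: Path):
        self.path = path
        self.tasks = []
        if path.exists():
            # Rehydrate real-world entities from their JSON representation
            self.tasks = [Task(**t) for t in json.loads(path.read_text())]

    def add(self, title: str) -> Task:
        # Defensive handling of invalid usage: reject blank titles
        if not title.strip():
            raise ValueError("task title must not be empty")
        task = Task(id=len(self.tasks) + 1, title=title.strip())
        self.tasks.append(task)
        self._save()
        return task

    def search(self, pattern: str) -> list:
        rx = re.compile(pattern, re.IGNORECASE)
        return [t for t in self.tasks if rx.search(t.title)]

    def _save(self) -> None:
        self.path.write_text(json.dumps([asdict(t) for t in self.tasks], indent=2))
```

Keeping persistence behind `_save` and validation inside `add` is the separation-of-concerns point: the store stays testable without touching the CLI layer at all.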
SWE-bench Lite — The Fast Lane

Scenarios:
- When teams need quick feedback while building coding agents
- When researchers want cheap, repeatable evaluations
- When iterating daily without running massive benchmarks

Definition: SWE-bench Lite is a 300-task curated subset of the original SWE-bench benchmark, designed to evaluate real software engineering ability faster and cheaper, while still using real GitHub issues and codebases.

Analogy: Full SWE-bench is a full marathon. Verified is a championship race with strict judges. Lite is a time-trial track — shorter, faster, but still demanding skill.

Real-Time Example: An agent is tested on 300 real GitHub issues from Django, scikit-learn, pytest, etc., producing code patches that must pass FAIL_TO_PASS unit tests — but the entire run completes in hours instead of days.

Conditions & Usage:
- Dataset size: 300 tasks (plus a ~23-task dev split)
- Built via heuristic filtering, not human validation
- Removes brittle test patterns (e.g., exact error-message matching)
- Covers 11 of the 12 original Python repositories
- Typically 5–10× faster to evaluate than full SWE-bench

Dos and Don'ts:
- Do use Lite for rapid prototyping and ablation studies
- Do use it for CI-style regression checks on agents
- Don't claim true SOTA using Lite alone
- Don't compare Lite scores directly to Verified scores

Cheat Sheet:
- Lite → 300 tasks
- Verified → 500 human-validated tasks
- Full → 2,294 raw tasks
- Lite difficulty → medium
- Lite goal → speed

Tips & Tricks:
- Run Lite weekly; run Verified monthly
- Track trend improvements, not absolute percentages
- Use Lite to debug agent failures before expensive runs

Memory Trick: "Lite = iterate fast, learn fast"

Questions and Answers:
Q: Is SWE-bench Lite easier than Verified?
A: Yes — it filters out many noisy cases and skews slightly easier.
Q: Why still use Lite in 2026?
A: Because it gives realistic signal at a fraction of the cost and time.

Key learnings:
- SWE-bench Lite enables fast, affordable evaluation
- Uses real repos and real tests, not synthetic tasks
- Not human-verified, but far cleaner than the full set
- Ideal for development loops, not final claims

#SWEBenchLite #CodingAI #AgenticAI #LLMBenchmarks #AIForDevelopers #OpenSourceAI #SoftwareEngineering #FutureOfAI
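The "CI-style regression checks" and "track trends, not percentages" tips can be sketched like this. The helper and the result format are invented for illustration; SWE-bench itself does not ship this function.

```python
def find_regressions(baseline: dict, current: dict) -> list:
    """Compare two evaluation runs (mapping task_id -> resolved bool).

    Returns task IDs the baseline resolved but the current run does not,
    so changes show up per task instead of as one absolute percentage.
    """
    return sorted(
        task_id
        for task_id, resolved in baseline.items()
        if resolved and not current.get(task_id, False)
    )

def resolve_rate(results: dict) -> float:
    """Fraction of tasks resolved in a run."""
    return sum(results.values()) / len(results) if results else 0.0
```

In a CI loop: run Lite on the candidate agent, and fail the check if `find_regressions` is non-empty, even when the headline rate went up.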
https://huesnatch.com/

🔍 Debugging: "My Code" vs "Someone Else's Code"

Debugging your own code feels harder for a weird reason: the brain remembers the intent and auto-fills the gaps. 🧠✨ So issues hide behind assumptions like "this part is fine" and "it worked yesterday."

Debugging someone else's code is different: no context, no attachment, just pure evidence mode. 🕵️♂️📌 Logs, inputs/outputs, edge cases, repeat.

A few habits that level up both:
✅ Reproduce first (don't guess)
✅ Add a tiny failing test before changing logic 🧪
✅ Trace data flow: input → transform → output 🔁
✅ Change one thing at a time (then re-run)
✅ Write the fix and the explanation (why it happened) 📝

Pro tip: When stuck on your own code, read it like a stranger wrote it. Start at the bug report, not the "idea." 😄

💬 What's harder: debugging your own code or inheriting legacy code?
💾 Save this for later
🔁 Repost if it helped
➕ Follow for more practical dev tips + humor

#SoftwareEngineering #Debugging #CleanCode #CodeReview #DeveloperMindset #Programming #Backend #Frontend #Testing #SystemDesign #huesnatch
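The "tiny failing test first" habit, sketched with a hypothetical bug. The function and the bug report are invented for illustration: the test pins the reported input before any logic is touched, then stays as a regression guard.

```python
def slugify(title: str) -> str:
    """Turn a title into a URL slug.

    Hypothetical bug report: consecutive spaces produced "--" in the slug.
    The fix splits on any run of whitespace instead of replacing single
    spaces, so repeated spaces collapse to one separator.
    """
    return "-".join(title.lower().split())

def test_slug_collapses_whitespace():
    # Step 1 of the workflow: reproduce the report as a tiny test.
    # The buggy version ("my  post".replace(" ", "-")) returned "my--post";
    # this assertion failed then, and now documents why the fix exists.
    assert slugify("My  Post") == "my-post"
```

Writing the test before the fix does double duty: it proves you actually reproduced the bug, and the assertion becomes the written explanation of what went wrong.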
Learning of the Day #01
Uncle Enzzo's lesson: Less code, more interaction

The most dangerous phrase in tech isn't a destructive command. It's: "it works on my machine."

A comfortable local environment is an elegant trap. It carries invisible context, pre-resolved dependencies, small tweaks that were never formalized. Everything looks stable until someone else presses "run" somewhere else.

That's when the truth shows up: CI. The pipeline rebuilds everything from scratch. It has no emotional attachment to your setup. If it fails there, local success is irrelevant.

There's a gap between "the code is ready" and "the software is ready." Code is text. Software is context: context of dependencies, of packaging, of environment, of shared understanding.

Bugs are rarely syntax. They are usually misalignment between what you believe the system does and what it actually does outside your laptop.

Less isolated code. More systemic interaction. An issue is alignment, not a task. A PR is technical discussion, not a formality. A lockfile is a contract, not a detail. CI is a referee, not bureaucracy.

Robust projects don't depend on a developer's memory. They depend on deterministic processes.

Is your software predictable in any environment… or just comfortable in yours? Less assumption. More interaction.

#SoftwareEngineering #Python #DevOps #CI #CleanCode #CodeQuality #BackendDevelopment #CleanArchitecture #TechMindset
Another Week, Another Project

This week, I built an AI Git Merge Conflict Resolver, a CLI tool that automatically detects and resolves Git merge conflicts using GitHub Copilot CLI.

Merge conflicts are one of the most frustrating parts of collaborative development. Instead of manually fixing <<<<<<< HEAD blocks, this tool:
✅ Detects conflicted files
✅ Extracts conflict blocks
✅ Uses AI (GitHub Copilot CLI) to generate intelligent resolutions
✅ Provides a preview mode before committing
✅ Auto-commits resolved files with a summary

Goal: Make developer workflows faster, smarter, and less painful.

Tech Stack:
- Python
- Git CLI
- GitHub Copilot CLI
- Subprocess automation

💡 What I Learned
- How Git internally tracks unmerged files
- How to integrate AI into real CLI developer tools
- Handling subprocess calls and CLI error management

If you want to learn more about the project:
Blog: https://lnkd.in/g9-Gejrb
GitHub: https://lnkd.in/g76ccBhR

Feedback is always welcome. I'd love to hear your thoughts or suggestions!

#Python #Git #AI #DeveloperTools #GitHubCopilot #BuildInPublic #MachineLearning #CLI #Hackathon
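The first two steps, detecting conflicted files and extracting conflict blocks, can be sketched like this. A simplified sketch, not the tool's actual code (that is in the linked repo); the function names are mine.

```python
import re
import subprocess

def conflicted_files() -> list:
    """List files Git currently marks as unmerged (conflicted).

    Git tracks unmerged paths in the index; --diff-filter=U selects them.
    """
    out = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=U"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line]

# One conflict hunk: "ours" between <<<<<<< and =======,
# "theirs" between ======= and >>>>>>>
CONFLICT_RE = re.compile(
    r"^<<<<<<< .*?\n(.*?)^=======\n(.*?)^>>>>>>> .*?$",
    re.DOTALL | re.MULTILINE,
)

def extract_conflicts(text: str) -> list:
    """Return (ours, theirs) pairs for every conflict block in a file."""
    return [(m.group(1), m.group(2)) for m in CONFLICT_RE.finditer(text)]
```

From there, each (ours, theirs) pair can be handed to an AI backend for a proposed resolution, previewed, and written back before committing.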
Vibe coded — but not the "prompt and pray" kind.

I didn't start by writing automation code. I started by designing a structured prompt pipeline so the AI could build it consistently:

Phase 0: Rules set
- Defined the contracts the AI must follow: requirements, manual test cases, traceability, naming/IDs
- Set repo-level Copilot rules (so outputs stay consistent and reviewable)
- Enforced "no hardcoded selectors" (selectors live in a single place)

Phase 1: Analyze
- Prompted the AI to explore the app and capture verified flows, routes, validations, and locators

Phase 2: Arrange
- Prompted it to convert findings into clean "source-of-truth" artifacts (docs + context) so the suite stays maintainable

Phase 3: Act
- Prompted it to generate the Playwright (Python) + pytest framework from those artifacts (POM + thin tests + failure artifacts)

Phase 4: Improve
- Prompted it to add reliability upgrades like fallback locator strategies
- Plus AI locator healing: when a locator breaks, the suite generates a healing report to suggest what likely changed and what to try next

Takeaway: AI moves fast — but contracts + structure keep it from moving fast in the wrong direction.

Resources
- GitHub repo: https://lnkd.in/gsUnuDeY
- Course: "AI-Driven Test Automation: Playwright, Selenium, LLMs & More" — Karthik KK

#VibeCoding #Playwright #Python #pytest #TestAutomation #QAEngineering #AIinTesting #GitHubCopilot
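The "selectors live in a single place" contract plus "fallback locator strategies" can be sketched framework-agnostically. A hypothetical helper, not the repo's actual code; `find` stands in for a real Playwright lookup such as checking `page.locator(sel).count() > 0`.

```python
# Central registry: each logical element maps to ordered candidate
# selectors, most specific first. Tests refer to names, never raw selectors.
SELECTORS = {
    "login_button": ["#login", "button[data-test='login']", "text=Sign in"],
}

def resolve(name: str, find) -> str:
    """Return the first candidate selector that `find` can locate.

    `find` is any callable selector -> bool. If every candidate fails,
    raise with the full list, which is exactly the input a healing
    report needs to suggest what likely changed.
    """
    candidates = SELECTORS[name]
    for sel in candidates:
        if find(sel):
            return sel
    raise LookupError(f"{name}: no candidate matched, tried {candidates}")
```

Because tests only ever ask for `"login_button"`, a UI change means editing one registry entry (or letting the healing report propose a new candidate) rather than hunting selectors across the suite.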
I spent 3 weeks refactoring a codebase. Then I built code-surgeon. Now I do it in 3 hours. Here's what happened:

For years, every feature implementation felt the same:
- Hours reading requirements
- Days understanding the codebase
- More time planning than actually coding
- Hoping I didn't miss a dependency
- Discovering breaking changes mid-project

I realized the bottleneck wasn't the coding. It was the planning. So I built code-surgeon.

It's a Claude skill that transforms GitHub issues into comprehensive implementation plans with "surgical prompts" (precise, file-by-file instructions for every change).

What it does:
✓ Analyzes your requirement in seconds
✓ Maps your entire codebase (pattern detection, dependency graphs)
✓ Generates a step-by-step plan with breaking change analysis
✓ Supports 35+ frameworks (React, Django, Express, Rails, Go, Rust...)
✓ Creates surgical prompts ready to hand to Claude or any AI agent
✓ 3 depth modes: QUICK (5 min), STANDARD (15 min), DEEP (30 min)

The impact — engineers using code-surgeon reported:
- 70% less time spent on code planning
- Breaking changes caught before implementation
- Multi-file refactoring in hours instead of days
- Better code reviews (surgical prompts enforce team patterns)

Try it now:
🔗 skills.sh: https://lnkd.in/gvpceKVk
🔗 GitHub: https://lnkd.in/gbvkrzSs
📚 Docs & Examples: https://lnkd.in/gYZ-abFW
⚡ Install: npx skills add baagad-ai/code-surgeon

What's your biggest bottleneck in feature implementation? Is it planning, understanding legacy code, or managing breaking changes? Comment below, I'd love to hear what problem would hit you hardest.

#DevTools #CodingAI #OpenSource #ImplementationPlanning #skills #claude #antigravity #agenticskills
🛠️ Stop Guessing, Start Reading: The "Documentation-First" Approach

We've all been there: spending two hours wrestling with a bug, only to find the solution in a single line of documentation we skipped. Trial and error is a slow teacher. If you want to write cleaner, bug-free code, you need to master the art of documentation-first debugging. Here is my framework for extracting the most value from docs without getting lost in the weeds:

1️⃣ The "Skim & Zoom" Technique
Don't read docs like a novel.
The Skim: Glance at the table of contents and headers to understand the "mental model" of the library.
The Zoom: Once you find your function, deep-dive into the parameter descriptions. Look specifically for default values and edge cases—that's usually where bugs hide.

2️⃣ Master the Search Bar 🔍
Stop scrolling manually. Use precise keywords:
Searching for a specific tool? Use the exact name (e.g., .rename()).
Searching for a solution? Use descriptive actions (e.g., "change column names pandas").
Pro Tip: If the internal search is weak, use a Google site search: site:pandas.pydata.org "rename columns".

3️⃣ Deconstruct the Examples
Code snippets are the "Rosetta Stone" of documentation. When you see an example, don't just copy-paste. Ask yourself:
1. What exactly is the input shape?
2. Which optional parameters are being ignored?
3. What does the returned object look like? (Is it a new copy or a view?)

💡 The Bottom Line
Well-structured documentation isn't just a manual; it's a roadmap. Adopting a documentation-first approach reduces frustration and ensures you make informed decisions before the first line of code is even written.

Are you a "read the manual" developer or a "try it until it works" developer? Let's be honest! 👇

#SoftwareEngineering #CodingTips #CleanCode #Programming #DeveloperProductivity #Python #WebDev
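The "new copy or in-place?" question from point 3 is exactly the kind of default the docs answer in one line. A standard-library illustration (pandas' `rename` behaves analogously, returning a new object unless told otherwise):

```python
data = [3, 1, 2]

# The docs for sorted() say it returns a NEW sorted list...
result = sorted(data)
assert result == [1, 2, 3]
assert data == [3, 1, 2]   # ...and leaves the input untouched

# The docs for list.sort() say it sorts IN PLACE and returns None
returned = data.sort()
assert returned is None
assert data == [1, 2, 3]   # the original was mutated
```

Chaining `data.sort()[0]` is a classic bug that thirty seconds with the parameter/return-value section would have prevented.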
Day 181: Of Loops, Laps, and Mirror Images

Day 181, and the complexity is ramping up! Today was all about optimising linked-list patterns: moving away from "easy-but-heavy" memory usage toward "lean-and-mean" O(1) space complexity.

🕵️♂️ Detecting the Infinite Loop (LeetCode 141)
How do you know if a list goes on forever?
The "Set" Approach: Carry a notebook (a Set) and write down every node you visit. If you see a node twice, you're in a loop. Simple, but it costs O(n) space.
Floyd's Cycle-Finding Algorithm: The "Tortoise and the Hare." Put two pointers on the track. If there's a loop, the fast pointer (the Hare) will eventually "lap" the slow pointer (the Tortoise) and they'll meet. It's O(1)-space magic.

🪞 The Palindrome Puzzle (LeetCode 234)
Checking whether a list reads the same forward and backward (like 1 -> 2 -> 2 -> 1) is trickier than it looks without an array.
The Pro Move:
1. Use slow/fast pointers to find the exact middle.
2. Reverse the entire second half of the list.
3. Walk through both halves simultaneously and compare values.
By "re-wiring" the second half, I can check for symmetry without creating a single extra piece of data. This brings the space complexity down from O(n) to O(1).

💡 Key Takeaway
Coding isn't just about solving the problem; it's about solving it efficiently. Today taught me that the most elegant solutions often involve manipulating the structure you already have rather than building new ones.

#JavaScript #DSA #LeetCode #SoftwareEngineering #WebDev #Algorithms #DataStructures #CodingJourney #Optimization #CleanCode #LinkedList #ProblemSolving #TechSkills #CareerGrowth #ProgrammerLife #EngineeringExcellence
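Floyd's tortoise-and-hare, sketched here in Python (the post's own solutions are in JavaScript; this is the same O(1)-space idea in this page's code language):

```python
class ListNode:
    def __init__(self, val=0):
        self.val = val
        self.next = None

def has_cycle(head) -> bool:
    """Floyd's cycle detection in O(1) extra space.

    Fast moves two steps per iteration, slow moves one; once both are
    inside the cycle, fast closes the gap by one node per iteration,
    so they must meet. With no cycle, fast simply runs off the end.
    """
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```

The same slow/fast pairing gives the middle-finding step of the palindrome trick: when fast reaches the end, slow is at the midpoint.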