Git Errors and Lessons Learned

Day 102. Less theory. More breaking things.

Yesterday I talked about DAGs and snapshots. Today I actually felt them. Went full practical. Here's what hit me:

Error 1: Detached HEAD
Checked out an old commit to inspect something. Forgot to create a branch. Made changes. Committed. Then switched branches and watched those commits disappear into the void. The graph doesn't lie. Floating commits with no branch pointing to them just... drift.

Error 2: Merge conflict on main
Two branches touched the same file. Git couldn't decide whose version wins. The <<<<<<< markers showed up and I actually read them this time instead of panicking. Resolved it manually. Committed the merge. Moved on.

Error 3: reset --hard on the wrong branch
Yeah. I typed the command on the branch I didn't mean to. Work gone. No staged changes. No warning. Lesson? git reflog saves lives (recovery commands below). Commits orphaned by a hard reset still live in the reflog for about 30 days by default before gc prunes them.

Error 4: Pushed to the wrong remote branch
Force of habit. Wrong branch name. Had to git push origin --delete and clean it up.

Not glamorous. But this is what sharpening actually looks like. Day 102. Still here.

#Git #DevOps #100DaysOfCode #LearningInPublic #Infracodebase
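For anyone hitting the same errors, the recovery commands are short. A minimal sketch; the branch names and commit SHA are placeholders, not from my actual repo:

$ git reflog                               # every HEAD move is recorded, even after reset --hard
$ git branch rescue-work a1b2c3d           # placeholder SHA: point a new branch at the "lost" commit
$ git switch -c keep-this-work             # while in detached HEAD: put your new commits on a real branch before leaving
$ git push origin --delete wrong-branch    # placeholder name: remove a branch pushed by mistake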
More Relevant Posts
Developer: “Code works on my machine.”
Production: “Cool… now watch this 💥”

Somewhere between writing code and merging into GitHub, bugs sneak in, security issues hide, and code quality quietly takes a vacation. Manual reviews try their best… but let’s be honest — nobody spots everything (especially before coffee ☕).

🚀 Why Integrating SonarQube with GitHub Matters
In today’s fast-paced development world, writing code is just the beginning — maintaining clean, secure, and reliable code is what truly makes a difference.

🔍 So, why integrate SonarQube with GitHub?
When SonarQube is connected to your GitHub repositories, it automatically analyzes your code with every pull request or commit. This means issues are caught before they reach production.

💡 Problems it solves:
✅ Code Quality Issues — detects bugs, code smells, and duplication early in the development cycle.
🔐 Security Vulnerabilities — identifies potential security risks and helps developers fix them proactively.
📉 Technical Debt — highlights maintainability issues so teams can avoid long-term complications.
🔁 Manual Code Review Overload — reduces dependency on manual reviews by providing automated insights.
🚦 Quality Gates — ensures only code that meets defined standards gets merged.

⚡ Why it’s important:
- Promotes a shift-left approach (fix issues early)
- Improves developer productivity
- Builds confidence in deployments
- Encourages a culture of clean coding

👉 In short, integrating SonarQube with GitHub turns code review into a continuous, automated, and intelligent process.

#CodeQuality #SonarQube #GitHub #DevOps #CleanCode #SoftwareDevelopment
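For concreteness, the per-pull-request analysis is usually just one scanner invocation inside the CI step. A minimal sketch, assuming the sonar-scanner CLI is installed; the project key, server URL, and PR variables are placeholders, and the pull-request parameters depend on your SonarQube edition and version, so verify them against your server's docs:

# placeholders: project key, host URL, and PR variables are examples, not real values
$ sonar-scanner \
    -Dsonar.projectKey=my-app \
    -Dsonar.host.url=https://sonarqube.example.com \
    -Dsonar.token="$SONAR_TOKEN" \
    -Dsonar.pullrequest.key="$PR_NUMBER" \
    -Dsonar.pullrequest.branch="$PR_BRANCH" \
    -Dsonar.pullrequest.base=main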
A founder sent us his codebase last month for a "small feature addition."

The feature took 3 days. Understanding the codebase took 3 weeks.

- 0 tests.
- 14 copies of the same helper function, each slightly different.
- A folder called /old/ with 47,000 lines. Still imported in 9 places.
- A cron running every 15 minutes. Nobody knew what it did. We disabled it. A day later, a Slack channel lit up asking why invoices stopped generating.
- API keys committed to .env. Git history showed the same key there for 14 months.

None of this is unusual. This is what most 3-year-old production codebases look like on the inside.

The previous team had billed serious money over 18 months. Every bug fix spawned two more. The founder blamed the devs. The devs blamed the timeline. Nobody was lying.

The real answer: nobody was paid to say "stop, we spend a week cleaning up before we ship anything else." So nobody did. Tech debt compounded faster than the business.

If your team has been "almost done" with something for three months, it's probably this. A 30-minute audit will usually tell you whether you need different developers or just a week of breathing room.

#AsynxDevs #MVP #CodebaseAudit #Engineering
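One piece of that 30-minute audit you can run yourself: check how long a secret has been sitting in git history. A minimal sketch; the key name and file path are placeholders:

$ git log --all --oneline -- .env                    # every commit that touched .env, on any branch
$ git log --all --oneline -S 'STRIPE_SECRET_KEY'     # commits that added or removed that string (placeholder name)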
Most developers would ignore this. I spent hours fixing it.

While working with 20+ repositories in a corporate environment, I hit this:
👉 fatal: repository not found
👉 Git asking for authentication on EVERY pull
👉 Even with correct access, VPN, and PAT

At first glance, it looked like a basic Git issue. It wasn’t.

🔍 What was actually happening
Git was using multiple credential helpers:
- system-level → manager
- user-level → wincred

Because of this conflict:
- Auth succeeded temporarily ✅
- Credentials were never persisted ❌
- Result → repeated login prompts across all repos

🧠 The real fix (without admin access)
Instead of trying random solutions, I traced the root cause → Git config hierarchy.

git config --global --unset-all credential.helper
git config --global credential.helper store

Then authenticated once using PAT.

🚀 Outcome
✔️ Zero authentication prompts
✔️ Works across all repositories
✔️ Stable setup in restricted enterprise environment

💡 What this taught me
Most engineering problems are not about code. They’re about:
- Understanding systems deeply.
- Debugging under constraints.
- Finding practical solutions when “ideal” tools aren’t allowed.

👨‍💻 Why I share this
I enjoy solving problems that sit at the intersection of:
- Backend systems.
- Developer tooling.
- Real-world constraints.

If you’re building systems where engineers need to think beyond just code, let’s connect 🤝

#backend #softwareengineering #debugging #git #remotework #buildinpublic
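If you suspect the same conflict, you can see exactly which config file sets each helper before changing anything. A small diagnostic sketch:

$ git config --show-origin --get-all credential.helper    # lists every helper and the file that defines it
$ git config --list --show-origin | grep credential       # broader view of all credential-related settings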
A SaaS team had a memory leak. Somewhere in the last 300 commits over six weeks. Too many to inspect manually.

A senior engineer wrote a 10-line test script (sketched below) and ran:

$ git bisect run ./memory_test.sh

Git performed binary search automatically. Checked out the midpoint commit. Ran the script. Narrowed the range. Repeated.

40 minutes later: first bad commit found. A dependency update with a caching layer and no eviction policy.

The manual version you can use right now:

$ git bisect start
$ git bisect bad              # current commit is broken
$ git bisect good v2.3.0      # this version worked

Git checks out the midpoint. You test it. Then:

$ git bisect good   (or bad)

Repeat. In 1,000 commits: found in 10 rounds.

$ git bisect reset            # always run this when done

The only requirement: your commits need to be testable in isolation. This is why atomic commits matter. This is why "WIP" commit messages hurt you months later. Every messy commit today is a future investigation hour.

📚 Chapter 7 of Stop Breaking Git.

#Git #Debugging #SoftwareEngineering #DevOps #BestPractices
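For git bisect run, the script only has to exit 0 for a good commit, non-zero for a bad one, and 125 for a commit it cannot test. A sketch of what memory_test.sh could look like; the build command, binary name, workload flag, and the 500 MB threshold are all made-up placeholders, not details from the post:

#!/usr/bin/env bash
# memory_test.sh -- exit 0 = good commit, non-zero = bad, 125 = skip (untestable)

make build || exit 125               # commit doesn't even build: tell bisect to skip it

./app --run-workload &               # placeholder binary and flag
APP_PID=$!
sleep 60                             # let the suspected leak accumulate

RSS_KB=$(ps -o rss= -p "$APP_PID")   # resident memory in kilobytes
[ -n "$RSS_KB" ] || exit 125         # process died before we could measure: skip
kill "$APP_PID" 2>/dev/null

# good commit only if the process stayed under ~500 MB (placeholder threshold)
[ "$RSS_KB" -lt 512000 ]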
What actually happens between git push and your app going live in production?

Most engineers know CI/CD as a concept. Very few know exactly what fires, when, and why. Here is the complete flow — step by step:

STEP 1 — Code commit
Developer pushes to a feature branch. A webhook fires immediately. Your pipeline wakes up.

STEP 2 — Build
Source code is compiled. Dependencies are resolved. If this fails, nothing else runs.

STEP 3 — Unit tests
Every test in your codebase runs automatically. One failure stops the pipeline cold. No exceptions.

STEP 4 — Static code analysis (SAST)
SonarQube scans for bugs, vulnerabilities, and code smells. A quality gate defines what passes.

STEP 5 — Docker image build
The application and all its dependencies are packaged into an immutable container image. Tagged with the commit SHA (sketch below).

STEP 6 — Image push to registry
The image is pushed to ECR or DockerHub. This is the artifact that will be deployed everywhere.

STEP 7 — Deploy to staging
The new image is automatically deployed to a staging Kubernetes cluster. No manual steps.

STEP 8 — Integration tests
Tests run against the live staging environment. Real HTTP calls, real database, real assertions.

STEP 9 — Approval gate
A human reviews and approves. One click.

STEP 10 — Production deploy
ArgoCD detects the approved image tag in Git and deploys to production. Zero downtime rolling update.

Total time: 8–15 minutes. Manual process equivalent: 2–3 hours.

#DevOps #CICD #Jenkins #Kubernetes #Automation #SoftwareEngineering #CloudEngineering
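Steps 5 and 6 are the part most often hand-waved. A minimal sketch of tagging the image with the commit SHA and pushing it; the registry URL and image name are placeholders:

$ GIT_SHA=$(git rev-parse --short HEAD)
$ docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:$GIT_SHA .
$ docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:$GIT_SHA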
Stop scrolling — your local dev loop is wasting hours, not minutes. You ship features, not manual chores. Yet every day you: - hunt files with slow find, - wrestle with git diffs, - debug Actions by pushing commits, - and run a dozen inefficient CLI steps. Here’s a compact toolkit to cut that friction now. Tools & repos (plug-and-play): - https://github.com/cli/cli — GitHub CLI: open PRs, run checks, and merge from your terminal without the browser detour. - https://lnkd.in/gsjjxvF — act: run GitHub Actions locally to iterate CI fast instead of guessing after pushes. - https://lnkd.in/eAYmxkx — ripgrep (rg): search codebases at native speed; replace slow grep workflows and save minutes per search. - https://lnkd.in/deCEZuAh — fd: human-friendly, blazing-fast alternative to find for quick file discovery in projects. - https://lnkd.in/d9tbxZqw — delta: syntax-highlighted, side-by-side git diffs that make code review and debugging 10x clearer. How I wire them together in 10 minutes: - Use fd + rg to find the failing test and file. - Open the repo with gh issue/pr commands. - Run the exact workflow step locally with act. - Inspect changes with delta before committing. You’ll trade noisy friction for deliberate, fast iterations. What’s the slowest step in your dev loop right now — and which of these would you try first? #DeveloperTools #Automation #GitHub #CLI #DevProductivity #DevOps #OpenSource #Workflow #Tooling #BuildFaster
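Roughly what that wiring looks like as commands. A sketch assuming default configs; the search pattern, PR number, and job name are placeholders:

$ fd -e py test_ | xargs rg -l "test_invoice"    # placeholder pattern: locate the failing test file
$ gh pr checkout 123                             # placeholder PR number
$ gh pr checks                                   # see which CI checks are red
$ act -j test                                    # placeholder job name: run that workflow job locally
$ git -c core.pager=delta diff                   # pipe the diff through delta for readable review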
We type git commit thousands of times. It takes half a second. But what actually happens in the machine during that 500 milliseconds?

Git is arguably the most irreplaceable tool in modern software development, yet the underlying mechanics are rarely explored. It isn't just a simple timeline of code diffs—it operates as a highly optimized, content-addressable file system.

I put together this visual walkthrough of the local Git database to map out exactly how it works under the hood. Think of it as a factory tour of the mechanics we use every single day.

Inside the tour:
🏭 The Base Layer: How raw code is compressed, hashed, and stripped of filenames into pure Blobs.
🏭 The Cryptographic Manifest: How Trees act as the structural anchors for the repository.
🏭 The Branch Myth: Exposing how branches are literally just 41-byte text files.
🏭 The Memory Void: What actually happens to "deleted" code during a catastrophic reset --hard.
🏭 The Compactor: How git gc uses reverse-diff delta compression to shrink gigabytes into kilobytes.

Once you see the elegance of the Directed Acyclic Graph (DAG), it completely changes how you view a simple file save.

#Git #Backend #DevOps #SystemArchitecture #SoftwareEngineering #DeveloperTools #VCS #NotebookLM
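You can poke at these objects yourself in any repo with Git's plumbing commands. A small sketch; the file path and branch name are placeholders, and on some repos the branch lives in .git/packed-refs rather than a loose file:

$ git cat-file -p HEAD                 # the commit object: tree, parent, author, message
$ git cat-file -p 'HEAD^{tree}'        # the tree: filenames mapped to blob and tree hashes
$ git cat-file -t HEAD:README.md       # prints "blob": content only, no filename inside (placeholder path)
$ cat .git/refs/heads/main             # a branch: 40 hex characters plus a newline, 41 bytes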
Git isn’t slow — our understanding of it often is.

Recently came across “High Performance Git” by Ted Nyman, and it reframes Git not as a simple VCS, but as a set of layered systems: a content‑addressed database, a filesystem cache, a graph walker, and a transfer protocol — each with its own performance trade‑offs.

What stood out to me:
- Git slowness usually isn’t accidental — it’s a result of how history traversal, refs, indexes, and packfiles interact at scale
- Features like sparse checkout, commit‑graphs, partial clone, and maintenance aren’t “advanced tricks” anymore — they’re table stakes for large repos (see the sketch below)
- Debugging Git performance requires instrumentation and diagnosis, not guesswork or folklore

If you work with monorepos, CI pipelines, or large teams, this is a reminder that Git performance is a systems problem, not just a developer gripe.

Curious: What’s the most painful Git performance issue you’ve run into at scale — and how did you fix it (or work around it)?

🔗 https://lnkd.in/grY9Xdhw

#Git #DeveloperExperience #DevTools #SoftwareEngineering #Monorepo #CI #Productivity
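Those large-repo features are each one command to turn on. A minimal sketch assuming a reasonably recent Git; the repo URL and directory are placeholders:

$ git clone --filter=blob:none https://example.com/big-monorepo.git   # partial clone: fetch blobs on demand
$ git sparse-checkout set --cone services/billing                     # check out only the paths you work on (placeholder dir)
$ git commit-graph write --reachable                                  # precompute the commit graph for faster traversal
$ git maintenance start                                               # schedule background repack/prefetch tasks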
A team of twelve agreed in a retrospective to adopt GitHub Flow. No more long-lived feature branches. PRs against main. CI required before merge. They documented it in the team wiki.

Two weeks later, a senior developer under pressure pushed directly to main. CI was skipped. A bug was introduced.

A month later: a feature branch had been open for three weeks. Nobody had set a maximum lifetime expectation.

By month's end: back to the old pattern. The wiki said one thing. The repository enforced nothing.

The next sprint, the team enabled branch protection rules:
✔ Direct push to main: disabled for everyone
✔ CI must pass before merge
✔ Minimum 1 reviewer required

Within two weeks: the agreed process was the actual process.

The lesson: A branching strategy documented in a wiki is an aspiration. A branching strategy enforced by repository configuration is a standard.

15 minutes to configure branch protection (setup sketch below). That is the difference between "we agreed to do this" and "we actually do this."

Does your team enforce its Git standards in the configuration, or just in the documentation? Does your main branch have branch protection enabled? Y / N. If N, I will walk through setup.

#Git #TechLeadership #EngineeringCulture #DevOps #SoftwareEngineering
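Those three rules can also be applied from the terminal rather than the settings page. A sketch using the GitHub CLI against the branch-protection REST endpoint; OWNER/REPO and the "ci" check name are placeholders, and the exact fields are worth verifying against the current GitHub API docs:

$ cat > protection.json <<'EOF'
{
  "required_status_checks": { "strict": true, "contexts": ["ci"] },
  "enforce_admins": true,
  "required_pull_request_reviews": { "required_approving_review_count": 1 },
  "restrictions": null
}
EOF
$ gh api -X PUT repos/OWNER/REPO/branches/main/protection --input protection.json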