🚨 "Make the code generic and reusable" is often just a developer's most expensive vanity project.

We've all seen it: a DataService that implements an IDataService… which is only ever implemented by that one DataService. Why? "In the future, users might ask for data to be retrieved from another source." Spoiler: you almost never swap it out. You just added 2x the files to track, 2x the boilerplate to maintain, and a layer of indirection that makes debugging a nightmare for the next person.

In my experience, more projects die from premature abstraction than from messy code. Over-engineering isn't just a design choice, it's a velocity killer.

The real cost of "we might need this in the future" abstractions:
- 'Jump to Definition' tax: spending minutes clicking through multiple interfaces just to find the actual logic.
- Cognitive load: new joiners spend weeks trying to map the 'architecture' instead of delivering features.
- 'Ghost' hierarchy: we maintain complex base classes for scenarios that don't exist yet, while real users wait past deadlines for actual features.

My senior developer YAGNI rules:
✅ The power of concrete: if there is only one implementation, you don't need an interface. Delete it.
✅ Duplicate > abstract: follow the Rule of Three. Don't abstract until you've repeated the logic at least three times.
✅ Interface for behavior, not structure: only abstract when you actually have two different behaviors to switch between.

Building for future requirements is guessing. Building for current requirements is engineering. Be an engineer, not a guesser.

💬 What's the most over-engineered abstraction you've ever had to rip out? Let's share the war stories below. 👇

#SoftwareArchitecture #CleanCode #TypeScript #SeniorEngineer #TechLeadership #ProgrammingTips #OverEngineering
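To make the point concrete, here is a minimal TypeScript sketch (names are hypothetical, echoing the post's example): while there is a single implementation, the interface adds a file and a hop without enabling anything.

```typescript
// Hypothetical names throughout. The interface below has exactly one
// implementation, so it only adds indirection.
interface IDataService {
  getUser(id: number): string;
}

class DataService implements IDataService {
  getUser(id: number): string {
    return `user-${id}`;
  }
}

// The YAGNI version: depend on the concrete class directly. Callers lose
// nothing today, and an interface can always be extracted later if a
// second implementation actually appears.
class UserGreeter {
  constructor(private readonly data: DataService) {}

  greet(id: number): string {
    return `Hello, ${this.data.getUser(id)}`;
  }
}
```

Extracting an interface from a concrete class is a cheap, mechanical refactor; guessing one up front is not.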
Rethink Premature Abstraction in Code
More Relevant Posts
The leaked Claude Code source ended up becoming one of the most interesting documentation experiments I’ve done in a while.

I started with a simple goal: make the codebase easier to learn. That turned into:
- documenting 1,902 source files
- creating 301 directory READMEs
- writing 9 deep-dive architecture guides
- generating ~2,212 total docs
- reaching 100% source file coverage

My biggest takeaway: the hard part was not whether the model could understand the code. The hard part was structuring the work well. Splitting the repo into bounded chunks, running agents in parallel, and checking output between batches made a huge difference.

Also, precision up front matters. I was not specific enough initially about naming conventions for generated docs, and that created unnecessary cleanup later. Small ambiguity early turns into bigger coordination problems once multiple agents are involved.

This project left me thinking that one of the most practical uses for coding agents is not just writing code — it is making complex systems easier to learn.

Check it out on GitHub: https://lnkd.in/e-hdA_gz

#SoftwareEngineering #AI #Documentation #DeveloperTools #ClaudeCode #LLM
How E2E Testing Taught Me More About Architecture and OOP Than I Expected 💡

Sometimes the best lessons hide inside the most repetitive tasks.

A few years ago, I was asked to rebuild the end-to-end test coverage for a large application. The existing tests had been thrown together by someone non-technical: helpers everywhere, inconsistent naming, DRY violations - it was chaos. My teammate and I started by refactoring what we had, trying to preserve the structure.

But the deeper we went, the more obvious it became: our helper functions were piling up, the folder structure made navigation painful, and similar test suites were duplicating logic in slightly different ways. Even when we tried to share utilities, scoping issues and exports turned into a maintenance headache. 🤯

The turning point came in a conversation with a team lead on a sister project. When I laid out our problems, he asked a simple question: “Have you tried the Page Object pattern?”

That question changed everything. Embracing that pattern highlighted the power of good OO design:

🗂️ Wrap data and behavior together so each page manages its own interactions.
⚖️ Keep coupling low and cohesion high by exposing only what matters.
🔁 Reuse logic confidently, without polluting namespaces or repeating yourself.

Once we switched to page objects, our test code became cleaner, easier to maintain, and faster to extend. Coverage moved forward smoothly, and our deployments got more predictable.

Monotonous tasks might feel tedious, but they can sharpen your understanding of architecture if you stay curious. In this case, E2E testing didn’t just improve quality - it reshaped how I think about structuring software. 🚀

P.S. Comic Sans enjoyers will appreciate the image 🤣

#FrontendEngineering #OOP #SoftwareArchitecture #E2ETesting
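For anyone who hasn't met the pattern, here is a minimal TypeScript sketch of a page object. The `Driver` type is a hypothetical stand-in for whatever automation API you use (Playwright, Selenium, etc.); the selectors and page are invented for illustration.

```typescript
// Stand-in for a real browser automation API.
type Driver = {
  fill(selector: string, value: string): void;
  click(selector: string): void;
  text(selector: string): string;
};

// Each page wraps its own selectors and interactions, so test suites
// never touch raw selectors and duplicated helpers disappear.
class LoginPage {
  constructor(private readonly driver: Driver) {}

  login(user: string, pass: string): void {
    this.driver.fill("#username", user);
    this.driver.fill("#password", pass);
    this.driver.click("#submit");
  }

  errorMessage(): string {
    return this.driver.text(".error");
  }
}
```

A test then reads as intent, not mechanics: `new LoginPage(driver).login("alice", "secret")`.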
Simple and Effective Strategies to Avoid Sloppy Code

Over time I've realized that sloppy code rarely comes from lack of intelligence. It usually comes from speed, fatigue, and unclear thinking. Here are a few simple strategies that consistently help me write cleaner code.

1. Slow down before you start typing
Most messy code happens when we start coding before we fully understand the problem. Spend 2–3 minutes thinking about:
- What are the inputs?
- What is the output?
- What edge cases exist?
A short pause saves a lot of refactoring later.

2. Name things like you would explain them to a junior
Bad names create confusing code.
Instead of: let x = getData();
Prefer: const userProfile = fetchUserProfile();
Good names eliminate the need for comments.

3. One function → one responsibility
If a function is:
- fetching data
- transforming data
- updating UI
…it’s probably doing too much. Break it into smaller units.

4. Delete clever code
If code feels too smart, it’s probably hard to maintain. Prefer boring code that is easy to read. Future you will thank present you.

5. Read your code once before committing
A quick 30-second review catches:
- unnecessary variables
- bad naming
- duplicated logic
This simple habit dramatically improves code quality.

6. Think in data flow
Clean code often follows a simple pipeline: Input → Transform → Output. If the flow is easy to follow, the code is easy to maintain.

Great engineers are rarely the ones writing the most code. They are the ones writing the clearest code. Clarity scales. Cleverness doesn’t.

Follow Mayank N. for more such technical insights. :D

#technology #computerscience #coding #softwareengineering #development
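The naming, single-responsibility, and data-flow points can be sketched together in one short TypeScript example (all names below are hypothetical):

```typescript
// Input → Transform → Output, with one responsibility per function
// and names that explain themselves.
type RawUser = { first: string; last: string; active: boolean };

// Transform: pure and trivially testable in isolation.
function toDisplayName(user: RawUser): string {
  return `${user.first} ${user.last}`;
}

// Filter: another single-purpose step.
function onlyActive(users: RawUser[]): RawUser[] {
  return users.filter((u) => u.active);
}

// The composition reads like the data flow it implements.
function activeDisplayNames(users: RawUser[]): string[] {
  return onlyActive(users).map(toDisplayName);
}
```

Each step can be reviewed, named, and tested on its own, which is exactly what a `processStuff()` mega-function prevents.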
📋 Before installing any third-party memory tool for Claude — read this

Anthropic's official memory guide for Claude Code covers everything built in, for free, with zero setup beyond a text file.

Here's what's already available out of the box:

📄 CLAUDE.md — a plain text file you write once. Claude reads it at the start of every session: your stack, conventions, rules, architecture decisions. Commit it to git and the whole team shares it automatically.

🤖 Auto memory — Claude writes its own notes. Build commands, debugging patterns, your preferences. No prompting needed, it just learns as you work.

📁 File hierarchy — project, user, and org-wide scopes. One file for personal preferences, one for the team, one for the whole company. Each level layered cleanly on top of the next.

🗂️ Rules per file type — scope instructions to TypeScript files, API handlers, or any path pattern. The right context loads only when Claude needs it.

Most teams reach for plugins before reading this page. That's the wrong order. Start here first:

📰 https://lnkd.in/gWSBEiPc

#AI #ClaudeCode #Anthropic #DeveloperTools #EngineeringTeams
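As a rough illustration of the CLAUDE.md idea (the contents below are invented for this example, not taken from any real project):

```markdown
# CLAUDE.md (illustrative example; contents are project-specific)

## Stack
- TypeScript, Node 20, pnpm

## Conventions
- Use named exports; no default exports.
- All API handlers live under src/api/ and return typed Result objects.

## Never do this
- Never edit generated files under src/generated/.
- Never commit directly to main.
```

Because it is a plain file in the repo, it is reviewed, versioned, and shared like any other code.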
Why I decided to "complicate" the architecture to simplify the future. 🧠

Whenever I start a new module, there's a temptation: write the logic as quickly as possible and deliver it. But over time, I've learned that rushing at the beginning creates "rigid code" later on.

This week, I focused on applying Clean Architecture and rigorously isolating Use Cases. Why did I do this? As a developer, my goal is not just to "make it work," but to ensure that the code is maintainable. By isolating business logic (such as phone validation or user creation) from external technologies (database, third-party APIs):

1️⃣ I gained speed in testing: I don't need a running database to know if my business rule is correct.
2️⃣ I protected the project: if tomorrow we need to change the infrastructure, the "intelligence" of the software remains intact.
3️⃣ I made maintenance easier: finding a bug or adding a new rule takes minutes, not hours hunting through giant files.

Developing with design patterns might seem like overkill on day one, but it's what allows the system to grow without becoming a headache.

This week's lesson is: less "code that works" and more "engineering that scales".

So, do you prefer the more direct approach, or do you invest time in architecture from day one? 👇

#CleanArchitecture #SoftwareEngineering #TypeScript #WebDev #TechnicalLearning #TechInsights #SoftwareDevelopment
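A minimal TypeScript sketch of the idea, with hypothetical names (ports kept synchronous for brevity; real ones would likely be async): the use case depends only on a repository port, so the business rules run against an in-memory fake with no database.

```typescript
// Port: the use case's only view of the outside world.
interface UserRepository {
  existsByPhone(phone: string): boolean;
  save(user: { name: string; phone: string }): void;
}

class CreateUser {
  constructor(private readonly repo: UserRepository) {}

  execute(name: string, phone: string): void {
    // Business rule: basic phone validation, independent of infrastructure.
    if (!/^\+?\d{8,15}$/.test(phone)) {
      throw new Error("invalid phone");
    }
    if (this.repo.existsByPhone(phone)) {
      throw new Error("phone already registered");
    }
    this.repo.save({ name, phone });
  }
}
```

Swapping Postgres for anything else means writing a new `UserRepository`; `CreateUser` never changes.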
The Claude Code source code leaked yesterday. I spent hours reading all 11 layers of architecture while it was up so you don't have to.

Buried in the thousands of lines of code was a humbling realization: I’ve been using this tool completely wrong. And statistically, you probably are too. Most of us open it, type a prompt, wait for a response, and type another. Here is the reality: Claude Code is not a chat assistant with terminal access. It is an agent orchestration platform.

After digging through the repo, here are the 3 most critical insights that will immediately change how you engineer:

1. Your CLAUDE.md is re-read every single turn
Most developers leave this blank or use 200 characters. You are allocated 40,000. Put your architecture decisions, naming conventions, and "never do this" rules here. This is the highest-leverage configuration in the codebase to make the AI understand your specific repo.

2. Five agents cost the same as one
When Claude forks a subagent, it creates a byte-identical copy of the parent context. The API caches this. You can spin up 5 agents simultaneously (one for a security audit, one refactoring, one testing) and share the cache. Using it single-threaded is a massive waste of its capability.

3. There are 25+ hidden Lifecycle Hooks
You can intercept the pipeline at will. Imagine automatically attaching your latest test results or recent git diffs to every prompt without typing a single word. That is the power of the UserPromptSubmit hook.

The developers getting 10x output aren't writing magically better prompts. They are configuring, parallelizing, and hooking into the architecture. Stop starting from scratch every session. Use --continue. Build your context.

Have you set up your local CLAUDE.md file yet, or are you still relying on manual, zero-shot prompting?

--
Post inspired by various X articles during yesterday's havoc.
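For reference, lifecycle hooks such as UserPromptSubmit are configured in Claude Code's settings.json. The snippet below is a sketch of the documented shape, written from memory; verify the exact schema against Anthropic's hooks documentation before relying on it.

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          { "type": "command", "command": "git diff --stat HEAD" }
        ]
      }
    ]
  }
}
```

With something like this in place, every prompt carries your latest diff context without you typing a word.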
I kept having the same annoying experience with Claude Code: every new session felt like working with a smart engineer with amnesia. The model could reason about the code, but it kept spending too much time rediscovering the same repository structure, dependencies, conventions, and previous decisions. After a while, this started to feel less like a prompting problem and more like a workflow-state problem.

At some point I stopped asking: “how do I write a better prompt?” The better question was: what should survive between stateless agent sessions?

That question became Quoin, a small tool I’ve been building around Claude Code. The core idea is to move from a prompt-centric workflow to an artifact-centric workflow. Instead of relying on one huge CLAUDE.md or a long conversation history, Quoin keeps lightweight workflow state in files: architecture notes, plans, critic responses, reviews, memory, cost snapshots, and a documentation cache.

The most useful part so far has been the codebase knowledge cache. In my own workflow, it reduced input tokens by around 47%, mostly by cutting repeated orientation. This is obviously not a universal benchmark — just one workflow and one setup — but it was enough to make the direction feel worth exploring.

What I find more interesting is not the token reduction itself, but what it suggests about agent workflows. A lot of coding-agent workflows today are still very human-language-first. We ask models to write long plans, long summaries, long reviews, and long explanations. They often look impressive, but many are too verbose to be operationally useful. They become another object the model has to reread, compress, reinterpret, and eventually forget.

I’m increasingly convinced that some workflow artifacts should be more machine-first: structured state, constraints, file maps, dependency notes, checklists, decisions, and failure modes. Human-readable views are still important, but probably only where human judgment is actually needed.
I wrote the longer build log here: https://lnkd.in/dXE4eYcW And the repo is here: https://lnkd.in/dEP_Vim8 Would be very interested in feedback from people using Claude Code, Codex, Cursor, or similar tools on larger or multi-repo projects. The question I’m trying to answer is not “can the model code?” It can. The harder question is: how do we make the next session smarter than the previous one?
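Purely as an illustration of "machine-first" state (this is an invented sketch, not Quoin's actual format), such an artifact might look less like prose and more like:

```json
{
  "module": "billing",
  "entryPoints": ["src/billing/invoice.ts"],
  "constraints": ["no direct DB access outside repository classes"],
  "decisions": ["2024-05: switched tax rounding to banker's rounding"],
  "knownFailureModes": ["invoice totals drift if currency conversion runs twice"]
}
```

The point is that an agent can reload this in a few hundred tokens instead of re-deriving it every session.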
GO 1.23 PUSHES CALLBACK-BASED ITERATORS — AND IT'S A CONTROVERSIAL CHOICE

The Go standard library has had callback-style iteration for a while — sync.Map, fs.WalkDir, and friends. Go 1.23 essentially formalizes this pattern via iter.Seq.

But there's a real tension here. Go is an intentionally imperative, explicit language. Developers expect to own the control flow — not hand it off to a callback and hope everything behaves. That's why the explicit iterator object pattern feels far more natural in Go:

.Iter() → for it.Next() → .Key() / .Value()

This isn't a new idea — it's already proven itself in the stdlib:
- bufio.Scanner
- sql.Rows
Both are simple, predictable, and free of hidden behavior.

Compare that to the callback approach, where:
- Control flow becomes non-linear
- defer behaves unexpectedly
- Early returns turn into a puzzle
- Panics can get swallowed silently

Many teams deliberately choose the iterator object pattern as their internal standard for exactly these reasons. One example: Solod — a Go subset that compiles to C — uses iterator objects by design. And honestly, it feels much closer to the spirit of Go.

The real question isn't which approach is "newer" — it's which one keeps your code honest: tight execution control, or flexible composition through callbacks?
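The debate is Go-specific, but the control-flow trade-off isn't; here is a TypeScript rendering of the two styles for contrast (names are hypothetical, and the `defer`/panic concerns from the post have no direct TS analogue):

```typescript
// Explicit iterator-object style: the caller owns the loop and can
// break, return, or throw without surprises.
class PairIter<K, V> {
  private i = -1;
  constructor(private readonly entries: [K, V][]) {}
  next(): boolean { return ++this.i < this.entries.length; }
  key(): K { return this.entries[this.i][0]; }
  value(): V { return this.entries[this.i][1]; }
}

// Callback style: control flow is inverted, and early exit needs a
// sentinel return value instead of a plain `break`.
function forEachPair<K, V>(
  entries: [K, V][],
  fn: (k: K, v: V) => boolean // return false to stop iterating
): void {
  for (const [k, v] of entries) {
    if (!fn(k, v)) return;
  }
}
```

With the iterator object, "stop early" is just `break`; with the callback, it is a protocol between you and the library.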
13+ Coding Theory Tools in One Dashboard! 🧑‍🎓

This is the first of two posts about my latest project. Today, I’m sharing the results; in the next one, I’ll dive into the DevOps infrastructure that keeps it running with zero manual effort.

I’ve built an interactive Coding Theory Toolbox featuring 13+ algorithms, including:
- Error correction: Elias code, companion codes, Hamming logic.
- Data compression: Huffman, Shannon-Fano, entropy analysis.
- Data encoding: BCD (with custom weights), Gray code, and more.

Whether it's visualising bit-flips in a parity matrix or calculating information entropy, I wanted to turn dry theory into a hands-on developer experience.

✨ Check it out live: https://lnkd.in/dhnhzzG8
💻 Explore the code: https://lnkd.in/deHuVVku

Stay tuned for Part 2, where I’ll share how I automated the infrastructure!

#SoftwareEngineering #CodingTheory #OpenSource #WebDev
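As a taste of the entropy-analysis side, here is a short TypeScript sketch of Shannon entropy in bits per symbol (an independent illustration, not the toolbox's actual code):

```typescript
// Shannon entropy H = -sum(p * log2 p) over the symbol frequencies,
// measured in bits per symbol.
function shannonEntropy(symbols: string[]): number {
  const counts = new Map<string, number>();
  for (const s of symbols) counts.set(s, (counts.get(s) ?? 0) + 1);
  let h = 0;
  for (const c of counts.values()) {
    const p = c / symbols.length;
    h -= p * Math.log2(p);
  }
  return h;
}
```

A uniform two-symbol stream gives 1 bit per symbol; a uniform four-symbol stream gives 2, which is the lower bound any lossless compressor can approach.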
When Code Is Blind – Why Metrics See More Than the Eye

Imagine a developer who cannot see the code. For this person, the visual structure – the indentation, the colour coding, the elegant arrangement of brackets – is irrelevant. What matters is the logical depth, the complexity of dependencies and the predictability of the data flow.

It is precisely this perspective that reveals a radical truth about legacy code: often, ‘healthy’ code is perceived as such simply because it looks visually appealing. Yet behind a clean surface, deep technical debt may be lurking, which only becomes visible through quantitative analysis.

This is where SciTools’ Understand comes in. Whilst the human eye quickly tires when analysing millions of lines of legacy code, Understand provides an objective, data-driven diagnosis. It translates the code into measurable metrics that are independent of the visual representation:

• Cyclomatic complexity: identifies branching paths that are difficult for any developer – sighted or otherwise – to test and maintain.
• Coupling and cohesion: highlights how heavily modules depend on one another, often where no direct visual connection is apparent.
• Code metrics over time: tracks how the ‘health’ of the code has evolved over the years, long before a critical error occurs.

The practical approach

Instead of planning a massive refactoring, Understand allows you to take a targeted approach:
1. Create a baseline – measure the current state of the codebase
2. Identify hotspots – where is the risk highest?
3. Targeted improvements – don’t tackle everything at once; address the most critical areas first
4. Track progress – measure after every sprint: are the metrics moving in the right direction?

Key takeaway: legacy code is not a fate – it is a state that can be quantified and systematically improved. The first step is not refactoring, but measurement. For legacy systems, this approach is essential.
Visual refactoring alone is not enough to stabilise the underlying architecture. A deep analysis using tools such as Understand forces teams to focus on the actual structure, not just the surface. The lesson is clear: code health cannot be determined simply by looking at it. It requires a measurement that goes deeper than what appears on the screen.

Free trial: www.emenda.com/trial

#LegacyCode #SoftwareArchitecture #CodeQuality #ScitoolsUnderstand #DeveloperTools #Refactoring #TechDebt
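To show why the metric ignores visual appearance, here is a deliberately crude TypeScript sketch that approximates cyclomatic complexity as 1 plus the count of branch tokens. Real tools like Understand build a proper control-flow graph; this toy counter only illustrates that pretty formatting does not change the number.

```typescript
// Crude approximation: 1 + number of branching keywords/operators.
// Whitespace, indentation, and colour play no role in the result.
function approxCyclomaticComplexity(source: string): number {
  const branches = source.match(/\b(if|for|while|case|catch)\b|&&|\|\||\?/g);
  return 1 + (branches?.length ?? 0);
}
```

Reformatting a function never lowers this number; only removing decision points does.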
A common justification for these 'interfaces' is mocking for Unit Tests. But if our architecture is driven solely by a test runner, it's a workaround, not a design. Modern frameworks (Jest, Jasmine, Mockito) handle concrete classes perfectly. We should be building for our users, not our tools. Is "mocking" a valid excuse for such architecture, or are we just over-complicating simple implementations?
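A plain TypeScript sketch of the point, with hypothetical names: a concrete class can be stubbed by subclassing, so no interface is needed for testability. Frameworks like Jest (`jest.spyOn`, `jest.mock`) automate the same idea.

```typescript
// Concrete class, no interface.
class ReportService {
  fetchReport(id: number): string {
    return `report-${id}`; // imagine a real HTTP call here
  }
}

class Dashboard {
  constructor(private readonly reports: ReportService) {}
  headline(id: number): string {
    return this.reports.fetchReport(id).toUpperCase();
  }
}

// Test double: subclass and override. Nothing here required an
// IReportService to exist.
class StubReportService extends ReportService {
  override fetchReport(_id: number): string {
    return "stubbed";
  }
}
```

The test injects `new StubReportService()` and asserts on `Dashboard` behavior, exactly as it would with an interface-based mock.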