Don’t break your code during refactoring - there’s a better way.

One of my go-to refactoring techniques is Parallel Change. It’s the same concept used in road construction: instead of blocking an entire street until the work is done, you build a detour to keep traffic flowing. Similarly, with Parallel Change, your code continues to function while changes are being implemented.

If you’re new to this technique, start small. Practice with simple examples or katas to understand how it works. As you gain confidence, apply it to your day-to-day work - it’s a great way to develop the habit of keeping your code functional throughout the process.

When dealing with modernization or legacy projects, this method proves its value even more. It eliminates the headache of fixing broken, non-compiling spaghetti code, allowing you to commit anytime and pause your work without worry.

Mastering Parallel Change can make refactoring smoother, safer, and far less stressful. Give it a try - you’ll never want to go back to dealing with broken code.
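For a concrete picture, here is a minimal sketch of the expand/migrate/contract rhythm behind Parallel Change, in Python. The `Grid`/`Point` example is hypothetical, invented for illustration; the point is that every intermediate commit compiles and passes tests.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: int
    y: int

class Grid:
    def __init__(self) -> None:
        self._cells: dict[tuple[int, int], str] = {}

    # Step 1 - EXPAND: introduce the new interface next to the old one.
    def cell_at_point(self, point: Point) -> str:
        return self._cells.get((point.x, point.y), " ")

    # The old interface keeps working by delegating to the new one,
    # so the code stays green the whole time - the "detour".
    def cell_at(self, x: int, y: int) -> str:
        return self.cell_at_point(Point(x, y))

# Step 2 - MIGRATE: move callers one by one from cell_at(x, y)
# to cell_at_point(Point(x, y)), running the tests after each move.

# Step 3 - CONTRACT: once no caller uses cell_at, delete it.
```

Because each step is independently shippable, you can commit or pause after any of them without leaving the codebase broken.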
How to Modify Existing Code Confidently
Explore top LinkedIn content from expert professionals.
Summary
Modifying existing code confidently means making changes to software without breaking its functionality or introducing errors, often by maintaining clarity and testing as you go. This approach helps you update, improve, or fix code while keeping it reliable and easy to work with.
- Start with structure: Organize your code so that related logic is grouped together, making it easier to find, debug, and update as needed.
- Test as you change: Write and run thorough tests before and after you modify code to catch mistakes early and stay sure about how your changes impact the system.
- Plan your updates: Outline your steps and clarify your goals before editing, so each change serves a clear purpose and can be reviewed confidently.
I'm a Software Engineer at AWS, and here are 18 lessons I learned about refactoring code over the last 7 years in 60 seconds. It took me a lot of mistakes to learn these, so you don't have to:

1/ Never assume how code behaves → verify it with tests before changing anything
2/ Refactor in small, reversible steps → big rewrites break things
3/ Don't change too much at once → reduce the blast radius
4/ Use AI as a refactoring partner → set guardrails, verify with tests, and iterate in small steps
5/ Write tests before refactors → they protect behaviour, not implementations
6/ Keep it simple (KISS) → most complexity is accidental
7/ Fix design problems, not symptoms → good architecture prevents bugs
8/ Keep your code DRY → duplication multiplies risk
9/ Performance matters → refactoring isn't just structure, it's behaviour at scale
10/ Legacy code isn't scary → changing it blindly is
11/ Know your goal before refactoring → clarity beats activity
12/ Readable code beats clever code → readable code is easy to maintain in production
13/ Favour composition over inheritance → inheritance adds more complexity
14/ Patterns aren't always your friend → context matters more than theory
15/ Code is for humans → future readers are part of your system
16/ Refactoring is a habit → it's how systems stay healthy over time and avoid "broken windows"
17/ Messy code is a liability → technical debt compounds quietly
18/ Refactor the code you touch most → optimise for where teams spend time

P.S. What else would you add?

---

🔖 Save this for the next time you're about to "just clean it up"
➕ Follow Abdirahman Jama for practical software engineering tips.

#softwareengineering
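Lesson 1 deserves a concrete example. One common way to "verify behaviour before changing anything" is a characterization test: record what the code does today, then refactor against that record. A minimal sketch in Python, where `legacy_price` is a hypothetical function standing in for the code under change:

```python
import pytest

def legacy_price(quantity: int) -> float:
    """Hypothetical legacy function we want to refactor."""
    price = quantity * 9.99
    if quantity > 10:
        price = price * 0.9  # undocumented bulk discount
    return price

# Characterization tests: they pin down what the code DOES today,
# not what we think it should do. Green tests = safe to refactor.
@pytest.mark.parametrize("quantity, expected", [
    (0, 0.0),
    (1, 9.99),
    (11, pytest.approx(98.90, abs=0.01)),  # the discount kicks in here
])
def test_legacy_price_current_behaviour(quantity, expected):
    assert legacy_price(quantity) == expected
```

Once these pass, any refactoring step that turns them red has changed behaviour, not just structure.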
-
A single CLAUDE.md file just hit 15K GitHub stars (derived from Karpathy's coding rules).

Andrej Karpathy pointed out that LLMs make wrong assumptions, overcomplicate code, and touch things they shouldn't. Predictable mistakes, every single time. forrestchang took those observations and turned them into four behavioral principles inside one markdown file you drop into any Claude Code project.

Here's what the repo actually solves:

1/ Think Before Coding
LLMs love to pick an interpretation silently and just run with it. This principle forces Claude to state assumptions explicitly, present multiple interpretations when ambiguity exists, and push back when a simpler approach is available. No more guessing on your behalf.

2/ Simplicity First
If your AI assistant writes 1,000 lines when 100 would do... that's not help. This blocks speculative features, prevents abstractions for single-use code, and kills "flexibility" or "configurability" that nobody requested. The test: would a senior engineer call this overcomplicated? If yes, simplify.

3/ Surgical Changes
When editing existing code, Claude won't "improve" adjacent code or reformat things that aren't broken. It matches existing style even if it would do things differently. If your changes create orphaned imports or dead functions, clean those up. But pre-existing dead code? Flag it, don't delete it.

4/ Goal-Driven Execution
Instead of vague instructions like "add validation," it transforms tasks into verifiable goals: "write tests for invalid inputs, then make them pass." Every multi-step task gets a plan with verification checkpoints. Step, verify, repeat.

The install takes seconds. Works as a Claude Code plugin or a per-project CLAUDE.md that merges with your existing rules. You know it's working when diffs get smaller, clarifying questions come before implementation, and PRs stop carrying drive-by refactors.

Context engineering for AI coding is becoming its own discipline. This repo is proof that the best fix for LLM behavior isn't a better model... it's better instructions.

Link in the first comment.
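To make the shape of such a file concrete, here is an illustrative sketch of what a CLAUDE.md built on these four principles could look like. This is an assumption-laden mock-up, not the repo's actual text:

```markdown
# CLAUDE.md - behavioural guardrails (illustrative sketch, not the repo's text)

## Think Before Coding
- State your assumptions explicitly before writing any code.
- If the request is ambiguous, list the plausible interpretations and ask.
- Push back when a simpler approach would do.

## Simplicity First
- No speculative features; no abstractions for single-use code.
- Test: would a senior engineer call this overcomplicated? If yes, simplify.

## Surgical Changes
- Touch only the lines the task requires; match the existing style.
- Clean up orphaned imports or dead functions your change creates.
- Flag pre-existing dead code; do not delete it.

## Goal-Driven Execution
- Turn vague tasks into verifiable goals: write the failing test, then make it pass.
- For multi-step work, plan with verification checkpoints: step, verify, repeat.
```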
-
🚨 When transformation logic is spread all over the repository, it becomes a nightmare to modify, debug, and test. This scattered approach leads to duplicated code, inconsistencies, and a significant increase in maintenance time. Developers waste precious hours searching for where transformations occur, leading to frustration and decreased productivity.

🔮 Imagine having a single place to check for each column's transformation logic - everything is colocated and organized. This setup makes it quick to debug, simple to modify, and easy to maintain. No more digging through multiple files or functions; you know exactly where to go to understand or change how data is transformed.

🔧 The solution is to create one function per column and write extensive tests for each function. 👇

1. One Function Per Column: By encapsulating all transformation logic for a specific column into a single function, you achieve modularity and clarity. Each function becomes the authoritative source for how a column is transformed, making it easy to locate and update logic without unintended side effects elsewhere in the codebase.

2. Extensive Tests for Each Function: Writing thorough tests ensures that each transformation works as intended and continues to do so as the code evolves. Tests help catch bugs early, provide documentation for how the function should behave, and give you confidence when making changes.

By organizing your code with dedicated functions and supporting them with robust tests, you create a codebase that's easier to work with, more reliable, and ready to scale.

---

Transform your codebase into a well-organized, efficient machine. Embrace modular functions and comprehensive testing for faster development and happier developers.

#CodeQuality #SoftwareEngineering #BestPractices #CleanCode #Testing #dataengineering
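As a sketch of what "one function per column" can look like in practice, here is a small Python/pandas example. The column names and transformation rules are invented for illustration:

```python
import pandas as pd

# One function per column: each function is the single authoritative
# place where that column's transformation logic lives.

def transform_email(raw: pd.Series) -> pd.Series:
    """Normalize email addresses: trim whitespace, lowercase."""
    return raw.str.strip().str.lower()

def transform_signup_date(raw: pd.Series) -> pd.Series:
    """Parse date strings into timestamps; bad values become NaT."""
    return pd.to_datetime(raw, errors="coerce")

def transform_users(df: pd.DataFrame) -> pd.DataFrame:
    """Compose the per-column functions into the full table transform."""
    out = df.copy()
    out["email"] = transform_email(df["email"])
    out["signup_date"] = transform_signup_date(df["signup_date"])
    return out

# Each column function gets its own focused tests:
def test_transform_email_lowercases_and_trims():
    raw = pd.Series(["  Alice@Example.COM "])
    assert transform_email(raw).tolist() == ["alice@example.com"]
```

Debugging a bad `email` value now means opening exactly one function and one test file, not grepping the repository.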
-
The “pls fix” loop is not a feature. It is a trap.

I love tools like Cursor, Lovable, and Replit. The meme nails the feeling. You type a prompt, watch code appear, and get a semi-working app. Not perfect? Prompt again. New UI. New function. Another dopamine hit.

That is the danger. This loop feels like progress, but each prompt sneaks in a bit of debt. Patterns drift. Assumptions change. Now you are crawling in the sand asking “pls fix” while your architecture gasps for air.

How I avoid it:
- Start with a plan
- Add structure and context
- Then follow the code line by line

If the output breaks your plan, do not ship it. Edit the plan, then re-prompt with context. Do not let the tool set your architecture.

I also add guardrails so Cursor slows down and thinks before it floods my repo. Here is a simple rule file you can drop into your next Cursor project.

```
---
description: Ask For Edit Permission
globs: ["**/*.ts", "**/*.tsx"]
alwaysApply: true
---

Before you are allowed to edit ANY code, you must complete the following:

1. Recommend a few ideas for how to solve the issue, and rate each recommendation by your confidence. Briefly explain the reasoning and why the user might choose each option.
2. Ask the user up to 10 clarifying questions which help avoid vagueness, account for edge cases, and avoid assumptions.
3. Complete a development checklist which outlines how the fix or feature will be implemented.
4. Ask for explicit permission to edit code, and only proceed when you receive a response of "yes you may proceed".
```

This tiny speed bump kills the dopamine loop. It forces options, questions, and a checklist before any edits.

Where this really shines is when you pair it with context-first planning. I learned the hard way that AI needs context, not just prompts. That’s why I built Precursor, a planning IDE that helps you think before you code. It transforms messy thoughts into structured plans and guardrails so when you do prompt Cursor, the output aligns with your architecture. The result: less chaos, more clarity, and apps that actually scale.

🚨 Truth Time: Do you ACTUALLY review every AI generated line of code?

#AIEngineering #Cursor #SoftwareDesign #TechnicalDebt #DevProductivity
-
Replacing an old feature with a new one sounds simple. It rarely is.

Most teams try to “just swap it out”. That’s how you end up with broken builds, messy rollbacks, and reviewers drowning in 2,000-line PRs.

Here’s a cleaner way to do it. Do it in phases. 👇

PR 1: Rename the old feature to old-x. Mark it deprecated.
PR 2: Create the new feature under the original name.
PR 3+: Migrate one surface area at a time. Small. Controlled. Boring.

✅ Why this works:
- Each PR is focused and easy to review.
- PR 1 is 100% safe. It’s just a type-safe rename.
- PR 2 is also safe. You’re only adding code.
- You free up the correct name immediately, so the new feature doesn’t launch with a temporary or awkward label.
- You never have to rename things later.
- And your team can migrate gradually without a “big bang” release.

This approach does something underrated: 💡 It protects review quality.

Big rewrites look impressive. Small migrations ship. Most engineering pain isn’t about writing code. It’s about managing change. Ship change in slices, not shocks.

Most refactors fail because the change is too big. Make it boring and you’ll make it successful.

---

👋 Join 29,000+ software engineers who learn how to write better code: https://thetshaped.dev/

---

💾 Save this for later.
♻ Repost to help others find it.
➕ Follow Petar Ivanov + turn on notifications.
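In code, the first two phases can be as small as this. A Python sketch with a hypothetical `export_report` feature (the names and bodies are made up to show the shape of each PR):

```python
import warnings

# PR 1: rename the old implementation and mark it deprecated.
# A type-safe rename keeps every existing caller working.
def export_report_old(data: list[dict]) -> str:
    warnings.warn(
        "export_report_old is deprecated; migrate to export_report",
        DeprecationWarning,
        stacklevel=2,
    )
    return "\n".join(str(row) for row in data)  # legacy behaviour, unchanged

# PR 2: the new feature takes over the original, correct name.
# This PR only adds code - nothing existing is modified.
def export_report(data: list[dict]) -> str:
    if not data:
        return ""
    header = ",".join(data[0].keys())
    rows = (",".join(str(v) for v in row.values()) for row in data)
    return "\n".join([header, *rows])

# PR 3+: migrate call sites from export_report_old to export_report,
# one surface area per PR, then delete export_report_old.
```

Each diff is small enough to review in minutes, and the deprecation warning surfaces any call site the migration missed.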
-
“Just add a switch statement.” That’s not scalable. It’s technical debt. The Open/Closed Principle exists for a reason. 👉

Let’s talk about something many Salesforce Developers overlook when building scalable features: extensibility.

You’ve built a feature. It works. Business loves it. A month later, a new requirement arrives - and you’re back inside the same class, adding more if/else/switch logic. What began as a clean solution becomes a bowl of spaghetti.

The root cause? Your code was designed to be changed - not extended.

Here’s why: The Open/Closed Principle (from the SOLID principles) states that software should be open for extension but closed for modification. It helps avoid ripple effects every time a new requirement shows up.

Instead of hard-coding every case, imagine building behaviour through abstraction: Strategy patterns, interfaces, dynamic flows, handler registries, and metadata-driven decisions.

Key benefits of using OCP in Apex:
✅ Clean Separation: Responsibilities are isolated
✅ Lower Risk: Legacy functionality remains untouched
✅ Fewer Merge Conflicts: No need to modify existing logic
✅ Testability: Each extension can be tested independently
✅ Reusability: Same abstractions support future use cases
✅ Easy Scalability: Add new features without touching old code

The reality: We often treat Apex classes as one-size-fits-all containers. But without extensibility, we trade short-term wins for long-term maintenance nightmares.

Best practices for OCP in Apex:
🔹 Avoid business logic in conditionals - delegate behaviour
🔹 Use metadata or object config to influence behaviour
🔹 Embrace the Strategy or Factory pattern where needed
🔹 Design classes to depend on abstractions
🔹 Think extension before modification

Ready to level up? Here's what you need to focus on:
→ Learn the Open/Closed Principle inside-out
→ Reflect on where you're violating it today
→ Refactor your next feature using abstraction
→ Study common extension strategies in Apex
→ Share and document reusable patterns with your team

What’s your biggest challenge in applying the Open/Closed Principle in Apex? Do you usually extend or modify existing logic in your org? 👇 Let’s discuss in the comments below.
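The post targets Apex, but the handler-registry idea is language-agnostic. A minimal sketch in Python (the discount types are invented for illustration): new behaviour arrives by registering a new handler, never by editing the dispatch code.

```python
from abc import ABC, abstractmethod

class DiscountHandler(ABC):
    """Abstraction the dispatcher depends on (open for extension)."""

    @abstractmethod
    def apply(self, amount: float) -> float:
        """Return the discounted amount."""

class SeasonalDiscount(DiscountHandler):
    def apply(self, amount: float) -> float:
        return amount * 0.90  # 10% off

class LoyaltyDiscount(DiscountHandler):
    def apply(self, amount: float) -> float:
        return max(amount - 5.0, 0.0)  # flat reward

# Registry: supporting a new case means adding an entry here (or via
# config/metadata), not editing a growing switch statement.
HANDLERS: dict[str, DiscountHandler] = {
    "seasonal": SeasonalDiscount(),
    "loyalty": LoyaltyDiscount(),
}

def apply_discount(kind: str, amount: float) -> float:
    # Closed for modification: this function never changes when a
    # new discount type is introduced.
    handler = HANDLERS.get(kind)
    return handler.apply(amount) if handler else amount
```

Each handler can be tested in isolation, and legacy handlers stay untouched when a new one ships - exactly the benefits listed above.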
-
Even after twenty years of coding professionally, I constantly find refinements and new patterns. In just the last few months, I figured out how to write tests that make cleaning up dark-launch feature flags graceful:

1. Add a context where the feature flag is “off”.
2. Copy-paste the existing tests under that new context.
3. From the outer context, remove any tests that should no longer pass once the feature flag launches.
4. Initialize the feature flag to “on” for all other tests.
5. Go ahead and start implementation by adding your first test for the new behavior in the top-level context.

Now, as long as the tests are green and you don’t change the pre-existing tests, you won’t break the existing contract when the feature flag rolls out. If the flag rolls out, the cleanup for the feature flag is now graceful:

1. Delete the test context where the feature flag is “off”.
2. Delete the conditional logic (conveniently, that will be the bits of the code the tests no longer run).
3. Delete the method that checks the feature flag.

This way, even your test cleanup can be TDDed!

If you instead need to remove the new logic and a simple revert is insufficient, there are a few extra steps:

1. Delete all the tests for the unwanted behavior that should never happen again.
2. Initialize the feature flag to false for the entire test suite.
3. Inline the now-redundant “when TheFeatureFlag is off” context.
4. Remove any duplicate tests.
5. Delete the conditional logic.
6. Delete the code that checks the feature flag.

The problem with nesting both the old and new behavior under their own contexts is that it produces diffs with a bunch of indentation changes, making it harder for readers to verify the change. Additionally, it means that unexpected changes to other parts of the contract could go unnoticed, because the unit’s other tests are only run in one of the two states. In this case, using copy-paste leads to more graceful code, rather than less.
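A sketch of the structure in pytest terms, where "context" maps to a test class. The flag and behaviour here are hypothetical, invented to show the shape:

```python
import pytest

FLAGS = {"new_rounding": True}  # flag initialized "on" for all other tests

def total(prices: list[float]) -> float:
    if FLAGS["new_rounding"]:
        return round(sum(prices), 2)  # new behaviour behind the flag
    return sum(prices)                # old behaviour

# Top-level context runs with the flag on; new tests for the
# new behavior are added here.
def test_total_rounds_to_cents():
    assert total([0.1, 0.2]) == 0.3

# Copy-pasted context with the flag "off": it preserves the
# pre-existing contract, and gets deleted wholesale (step 1 of
# cleanup) once the flag rolls out.
class TestWithNewRoundingOff:
    @pytest.fixture(autouse=True)
    def flag_off(self):
        FLAGS["new_rounding"] = False
        yield
        FLAGS["new_rounding"] = True

    def test_total_sums_prices(self):
        assert total([1.0, 2.0]) == 3.0
```

When the flag launches, deleting `TestWithNewRoundingOff` and the `if FLAGS[...]` branch is the whole cleanup, and the surviving top-level tests prove the contract held.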
-
AI code assistants are evolving fast, but most engineering teams still treat them like magic wands.

Recently, Vals AI tested popular models to evaluate their performance on prompt-to-feature coding. The best performer, GPT‑5.1, only built accurate features 24.6% of the time. That’s a red flag if your workflow looks like: Prompt → PR → Production.

What’s safer? A contract-first, test-gated development process. Here’s how it works:

→ Start with a contract. Before any code is written, define the spec: inputs, outputs, error states, constraints, edge cases. The goal is clarity - what the feature must do, and what must never happen.

→ Turn that contract into tests. Write unit and integration tests first. If the prompt is vague, the AI’s job isn’t to code; it’s to surface missing questions and propose tests that clarify intent.

→ Limit AI to diff-bounded changes. Don’t ask it to generate an entire feature. Ask it to patch small, reviewable parts of existing code. Smaller diffs reduce risk and make hallucinations easier to spot.

→ Automate the gates. CI should enforce contract tests, linters, SAST, type checks, and secret scans. If changes touch data, auth, or external calls, require a short security note: what data moves, what trust boundary is crossed, and how failure is handled.

→ Keep human ownership. No code gets merged without someone verifying that the intent is fulfilled. The reviewer must check that the tests pass, the contract is honored, and no new behavior is introduced by accident. The author must be able to explain the change, not blame the model.

This approach turns vibe coding from “prompt and hope” into an auditable, controlled workflow. Speed is still there; AI drafts fast. But correctness is enforced up front. When the model is wrong 75% of the time, your development process has to catch "wrong" early, loudly, and cheaply.

How is your team approaching this shift? Are you gating AI-written code behind contracts yet? Let’s discuss.
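"Turn that contract into tests" can be taken literally. A minimal Python sketch, where the `parse_age` spec is invented for illustration - the tests encode the contract and exist before any implementation (AI-written or not) is accepted:

```python
import pytest

# Contract for a hypothetical parse_age(raw: str) -> int:
#   - accepts decimal strings with surrounding whitespace
#   - rejects negatives and non-numeric input with ValueError
#   - never returns an age above 150 (constraint from the spec)

def parse_age(raw: str) -> int:
    value = int(raw.strip())
    if value < 0 or value > 150:
        raise ValueError(f"age out of range: {value}")
    return value

def test_accepts_padded_decimal_strings():
    assert parse_age(" 42 ") == 42

@pytest.mark.parametrize("bad", ["-1", "abc", "151", ""])
def test_rejects_out_of_contract_input(bad):
    with pytest.raises(ValueError):
        parse_age(bad)
```

If the assistant's patch makes these tests pass without weakening them, the contract is honored; if the prompt was vague, the missing test cases are where that vagueness surfaces first.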
-
How to strangle a legacy system into submission. Step-by-step instructions...

A strangling-fig strategy is one way to break a massive code base into autonomous pieces. This strategy is based on the idea of reimplementing specific business capabilities outside of the existing code base, yet in a backward-compatible way, and gradually stopping use of the old codebase over time.

I recommend starting with a read-only capability, something like product inquiry or a pricing engine. Subscribe this new capability to relevant state changes in the old system (using either Change Data Capture or messaging) and copy the required state locally into a database of choice.

Plug the new capability into the user interface in such a way that both the old queries and the new ones run in parallel. Don't show the results of the new queries yet. Swallow and report any exceptions that occur in the new queries. The end user should not notice any difference at this point.

Also build data quality controls into the system, which compare the results from the old queries to the new. Report any differences to your monitoring system. Investigate and correct any data quality issues you find until at least all differences are explained. It will never match 100%, as there will be bugs in the old system that you don't want to fix.

Once the data of the new system has sufficient quality, use a gradual release mechanism, such as contextual feature flags, to show the data of the new queries to a larger subset of the users. Keep the old queries running and keep comparing the results for quite some time.

Next you move on to a sibling capability, e.g. any of the management processes that feed into (or off of) the new query model - for example, product definition or the pricing policy. Build a completely new capability to support the selected process, and make sure any events occurring in this capability get projected into both the recently built query model and the old system. Do not create a new big ball of mud by putting the first and second capability back together - keep them apart!

At this point you can consider building a data quality report on the old system's database to detect any differences between data entered via the old route vs the soon-to-be-released process.

Now plug the new management capability into the user interface next to the old one, and gradually release it. Monitor the quality of the output in the database; roll back and fix if needed.

Repeat these steps until all business capabilities are supported by new, autonomous software. Then start the decommissioning process of the old codebase. I recommend doing this gradually as well: use a contextual feature flag to gradually disable the old queries for more and more people until you have proof that no one is calling them anymore. Then delete the related code.

I've recently used this strategy to replace a very large online reservation system without any downtime.
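The parallel-run step - run both queries, show only the old result, swallow and report new-side exceptions, compare outputs - looks roughly like this. A Python sketch with hypothetical, stubbed query functions:

```python
import logging

logger = logging.getLogger("strangler.parallel_run")

def old_product_inquiry(product_id: str) -> dict:
    # Existing code path in the legacy system (stubbed for the sketch).
    return {"id": product_id, "price": 9.99}

def new_product_inquiry(product_id: str) -> dict:
    # Reimplemented capability reading its own local store (stubbed).
    return {"id": product_id, "price": 9.99}

def product_inquiry(product_id: str) -> dict:
    old_result = old_product_inquiry(product_id)  # users still see this
    try:
        new_result = new_product_inquiry(product_id)
        if new_result != old_result:
            # Data quality control: report differences to monitoring
            # and investigate until every mismatch is explained.
            logger.warning("mismatch for %s: old=%r new=%r",
                           product_id, old_result, new_result)
    except Exception:
        # Swallow and report: the end user must notice no difference.
        logger.exception("new query failed for %s", product_id)
    return old_result
```

Once the mismatch rate drops to only explained differences, the `return old_result` line is where a contextual feature flag would start serving `new_result` to a growing subset of users.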