Best Programming Practices for Clean Code

Explore top LinkedIn content from expert professionals.

  • View profile for Cole Medin

    Technology Leader and Entrepreneur | AI Educator & Content Creator | Founder of Dynamous AI

    8,765 followers

    Karpathy's LLM Knowledge Bases post went viral this week, and rightfully so. The idea is simple: raw documents go in, an LLM processes them into a structured wiki, your agent queries that wiki at runtime. No fancy RAG pipeline, no vector database. Just compiled knowledge your agent can navigate.

    Everyone is applying this to external data: docs, papers, research articles. I went a different direction. The raw material in my version isn't articles from the web. It's Claude Code session logs. Every time I work on the codebase, hooks automatically capture what got built, what decisions were made, what didn't work and why. A daily flush script compiles those logs into wiki articles in my Obsidian vault. When I start a new session, the agent searches that wiki before writing a single line of code.

    The result feels different from a good CLAUDE.md. It's not just static documentation! It's a living record of every architectural decision, every "we tried X and it broke because Y." Institutional memory, but searchable.

    The loop compounds quickly. Ask a question, the agent finds a relevant wiki article from three weeks ago, gives a better answer, and that answer eventually feeds back into the wiki. The longer you use it, the more context the agent has about your codebase specifically (not codebases in general, yours).

    Setup is one prompt into Claude Code: hooks, daily flush script, wiki structure, all generated automatically.

    Karpathy's insight was "stop RAG-ing raw documents, start compiling them." Most developers are losing context between every session. All that institutional knowledge evaporates. Compiling your session logs applies the same idea one level closer to home.

    I just posted a full breakdown on YouTube with the complete architecture walkthrough and a live demo of setting up the whole system. Link to my GitHub repo in the replies too!
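The hooks and flush script are generated by Claude Code, so the exact shape will vary. As a rough illustration only, a daily flush might look like this minimal Python sketch; the file layout, the `flush_session_logs` name, and the plain-text log format are all hypothetical, not the author's actual setup:

```python
from datetime import date
from pathlib import Path

def flush_session_logs(log_dir: Path, vault_dir: Path) -> Path:
    """Compile the day's captured session logs into one markdown wiki article.

    Hypothetical sketch: real hooks would capture richer structure
    (decisions, failures, diffs) than plain .log text files.
    """
    vault_dir.mkdir(parents=True, exist_ok=True)
    article = vault_dir / f"{date.today().isoformat()}-session-notes.md"
    sections = []
    for log_file in sorted(log_dir.glob("*.log")):
        body = log_file.read_text().strip()
        if body:  # skip empty captures
            sections.append(f"## {log_file.stem}\n\n{body}")
    article.write_text("# Session notes\n\n" + "\n\n".join(sections) + "\n")
    return article
```

The agent then searches the vault like any other set of markdown notes, so no vector database is involved.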

  • View profile for Rajya Vardhan Mishra

    Engineering Leader @ Google | Mentored 300+ Software Engineers | Building High-Performance Teams | Tech Speaker | Led $1B+ programs | Cornell University | Lifelong Learner | My Views != Employer’s Views

    114,161 followers

    I am an Engineering Manager working at Google with almost 20 years of experience. If I could sit down with a Jr. Software Engineer, here are 11 pieces of advice, learned through my own experience, that I would give them:

    1// If your app only serves around 10 users, a single server and a basic REST API will do the job. But if you’re handling 10 million requests a day, you need to start thinking about load balancers, autoscaling, and rate limiting.
    2// If only one developer is building features, you can skip the ceremonies and just ship and test manually. But if you have 10 developers pushing code daily, it’s time to invest in CI/CD pipelines, multiple testing layers, and feature flags.
    3// If a bit of downtime just breaks a single page, adding a banner and moving on is usually enough. But if downtime kills a key business flow, redundancy, health checks, and graceful fallbacks are absolutely necessary.
    4// If you’re just consuming APIs, make sure you know how to handle errors like 400s and 500s. If you’re building APIs for others, you need to version them, document everything, test thoroughly, and set up proper monitoring.
    5// If your product can tolerate a few seconds of lag, always pick code clarity over squeezing out a little more performance. But if users are waiting on every click, profiling, caching, and edge delivery need to become a part of your daily work.
    6// If your data easily fits in RAM, keep things simple and store it in memory using maps. But if your data spans terabytes, you have to start thinking about indexing, partitioning, and optimizing for disk access patterns.
    7// If you’re coding alone, poor naming might just annoy you. But in a growing team, bad names become a ticking time bomb for everyone.
    8// If you’re only fixing bugs once a week, basic logs and console prints are probably enough. But when you’re running production systems, you need structured logs, tracing, real-time alerts, and dashboards.
    9// If you’re up against tight deadlines, write the simplest code that gets things working. But if the code is meant to last, focus on readability, thorough testing, and making it easy to change in the future.
    10// If you’re working alone, “it works on my machine” might be good enough. But in a real team, reproducible builds and shared development setups are the bare minimum.
    11// If your app is new, move fast and don’t worry too much about cleaning up right away. But once your app is stuck in maintenance hell, you’ll pay the price for every rushed decision you made in the past.

    People think software engineering is just about building things. It’s really about:
    – Knowing when not to build
    – Being okay with deleting good code
    – Balancing tradeoffs without always having all the data

    The best engineers don’t just ship fast. They build systems that are safe to move fast on top of.
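The jump in point 8 from console prints to structured logs can be shown with a small sketch built on Python's standard `logging` module. The `JsonFormatter` class and the field names (`order_id`, `latency_ms`) are illustrative, not a prescribed schema:

```python
import json
import logging
import sys
import time

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so dashboards can index fields."""
    def format(self, record):
        payload = {
            "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(record.created)),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        payload.update(getattr(record, "fields", {}))  # structured extras
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Searchable fields instead of a free-text print:
log.info("payment accepted", extra={"fields": {"order_id": "A-1", "latency_ms": 42}})
```

Because every line is machine-parseable, the same output feeds tracing, alerting, and dashboards without regex scraping.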

  • View profile for Jamil Farshchi

    Equifax CTO • UKG Board Member • FBI Strategic Advisor • LinkedIn Top Voice in Innovation and Technology

    44,772 followers

    AI didn't create a new problem. It put a price tag on an old one.

    Every company has a Dave. Nine years in. Knows where the bodies are buried. Knows which service breaks if you breathe on it wrong. Dave IS the documentation. But AI can't ask Dave.

    We've learned this lesson three times now. Security: you can't protect what you can't see. Cloud: you can't migrate what you don't understand. AI: you can't automate what you haven't documented. Same lesson. Same boring work nobody ever did. Three times.

    Every piece of knowledge in Dave's head instead of the repo is context your AI tools will never have. Without it, they guess. Confidently. At scale.

    And it's not just code. It’s the helpdesk KB article nobody has touched since 2019. It’s the IR runbook you promised yourself you'd write after the last 2am P1... but forgot to.

    The AI isn't failing. We're giving it garbage, but expecting gold. Coding tools. Contact center. Workflows. Decision logic. Same pattern: undocumented, outdated, contradictory, tribal. It’s like doing a new-hire eval with no training… and blaming the new hire for poor performance.

    The open-source community figured it out: 60,000+ projects ship standardized context files so every AI tool knows how to work in that codebase. No tool selection pit fights. No governance pitfalls.

    Here's the thing: the documentation security teams requested, the architecture maps cloud needed, and the context AI requires? It’s the same work. Not similar. The same. Close that gap and everything compounds. Security gets visibility. AI performs. New engineers ramp faster. And you stop being one Dave-retirement away from a knowledge crisis.

    Documentation debt is now performance debt. The teams pulling ahead right now aren't the ones with the best AI tools. They're the ones that finally wrote stuff down.

    #TheBoringWork #CTO #Cybersecurity #EngineeringLeadership

  • No, you won't be vibe coding your way to production. Not if you prioritise quality, safety, security, and long-term maintainability at scale.

    Recently coined by OpenAI co-founder Andrej Karpathy, "vibe coding" describes an AI-coding approach where developers focus on iterative prompt refinement to generate the desired output, with minimal concern for how the LLM-generated code is implemented.

    At Canva, our assessment — based on extensive and ongoing evaluation of AI coding assistants — is that these tools must be carefully supervised by skilled engineers, particularly for production tasks. Engineers need to guide, assess, correct, and ultimately own the output as if they had written every line themselves. Our experimentation consistently reveals errors in tool-generated code ranging from superficial (style inconsistencies) to dangerous (incorrect, insecure, or non-performant code).

    Our engineering culture is built on code ownership and peer review. Rather than challenging these principles, our adoption of AI coding assistants has reinforced their importance. We've implemented a strict "human in the loop" approach that maintains rigorous peer review and meaningful ownership of AI-generated code.

    Vibe coding presents significant risks for production engineering:
    - Short-term: introduction of defects and security vulnerabilities
    - Medium to long-term: compromised maintainability, increased technical debt, and reduced system understandability

    From a cultural perspective, vibe coding directly undermines peer review processes. Generating vast amounts of code from single prompts effectively DoS-attacks reviewers, overwhelming their capacity for meaningful assessment.

    Currently we see one narrow use case where vibe coding is exciting: spikes, proofs of concept, and prototypes. These are always throwaway code. LLM-assisted generation offers enormous value in rapidly testing and validating ideas with implementations we will ultimately discard.

    With rapidly expanding LLM capabilities and context windows, we continuously reassess our trust in LLM output. However, we maintain that skilled engineers play a critical role in guiding, assessing, and owning tool output as an immutable principle of sound software engineering.

  • View profile for Brij kishore Pandey

    AI Architect & Engineer | AI Strategist

    720,748 followers

    Clean code isn't just about readability — it's about creating maintainable, scalable solutions that stand the test of time. When we prioritize readability, simplicity, and thoughtful architecture, we're not just making our lives easier; we're creating value for our teams and organizations.

    A few principles that have made the most significant difference in my work over the years:
    • Meaningful naming that reveals intent
    • Functions that do one thing exceptionally well
    • Tests that serve as documentation and safety nets
    • Consistent formatting that reduces cognitive load

    The greatest insight I've gained is that clean code is fundamentally an act of communication — with future developers, our teammates, and even our future selves. The time invested upfront pays dividends during maintenance, debugging, and onboarding.

    What clean code practices have transformed your development experience? I'd love to hear about the principles that guide your work.

    Image Credit - Keivan Damirchi
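The first two principles (intent-revealing names, single-purpose functions) can be shown with a small before/after sketch in Python; the pricing example and every name in it are invented for illustration:

```python
# Before: the name hides the intent and one function mixes two concerns.
def proc(d):
    t = sum(i["p"] * i["q"] for i in d)
    return t * 0.9 if t > 100 else t

# After: intent-revealing names, one job per function.
def order_subtotal(line_items):
    return sum(item["price"] * item["quantity"] for item in line_items)

def apply_bulk_discount(subtotal, threshold=100, discount=0.10):
    return subtotal * (1 - discount) if subtotal > threshold else subtotal
```

Both versions compute the same thing, but the second one can be read, tested, and changed (say, a new discount rule) one piece at a time.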

  • View profile for Rahul Agarwal

    Staff ML Engineer | Meta, Roku, Walmart | 1:1 @ topmate.io/MLwhiz

    45,182 followers

    A Few Lessons from Deploying and Using LLMs in Production

    Deploying LLMs can feel like hiring a hyperactive genius intern—they dazzle users while potentially draining your API budget. Here are some insights I’ve gathered:

    1. “Cheap” is a lie you tell yourself: Cloud costs per call may seem low, but the overall expense of an LLM-based system can skyrocket. Fixes:
    - Cache repetitive queries: users ask the same thing at least 100x/day.
    - Gatekeep: use cheap classifiers (BERT) to filter “easy” requests. Let LLMs handle only the complex 10% and your current systems handle the remaining 90%.
    - Quantize your models: shrink LLMs to run on cheaper hardware without massive accuracy drops.
    - Asynchronously build your caches: pre-generate common responses before they’re requested, or gracefully fail the first time a query comes in and cache the answer for the next time.

    2. Guard against model hallucinations: Sometimes models express answers with such confidence that distinguishing fact from fiction becomes challenging, even for human reviewers. Fixes:
    - Use RAG: just a fancy way of saying you provide the model the knowledge it requires in the prompt itself, by querying some database based on semantic matches with the query.
    - Guardrails: validate outputs using regex or cross-encoders to establish a clear decision boundary between the query and the LLM’s response.

    3. The best LLM is often a discriminative model: You don’t always need a full LLM. Consider knowledge distillation: use a large LLM to label your data, then train a smaller discriminative model that performs similarly at a much lower cost.

    4. It's not about the model, it's about the data it was trained on: A smaller LLM might struggle with specialized domain data—that’s normal. Fine-tune your model on your specific data set, starting with parameter-efficient methods (like LoRA or Adapters) and using synthetic data generation to bootstrap training.

    5. Prompts are the new features: Version them, run A/B tests, and continuously refine using online experiments. Consider bandit algorithms to automatically promote the best-performing variants.

    What do you think? Have I missed anything? I’d love to hear your “I survived LLM prod” stories in the comments!
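The caching and gatekeeping fixes from point 1 can be sketched in a few lines of Python. Here `faq_lookup`, `call_llm`, and the word-count classifier are stand-ins for real systems (an FAQ index, an LLM API, a small BERT model), not recommended heuristics:

```python
from functools import lru_cache

def faq_lookup(query: str) -> str:
    # Stand-in for an existing cheap system (FAQ search, rules engine, ...).
    return f"[faq] {query}"

def call_llm(query: str) -> str:
    # Stand-in for the expensive LLM API call.
    return f"[llm] {query}"

def classify_difficulty(query: str) -> str:
    # Stand-in for a cheap gatekeeper classifier such as a small BERT model;
    # the word-count rule is only a placeholder, not a real routing policy.
    return "easy" if len(query.split()) < 8 else "hard"

@lru_cache(maxsize=10_000)  # repeated queries never pay for a second call
def answer(query: str) -> str:
    if classify_difficulty(query) == "easy":
        return faq_lookup(query)
    return call_llm(query)  # only the hard tail reaches the LLM
```

In a real system the cache would live in Redis or similar (so it survives restarts and is shared across workers), but the shape is the same: check the cache, then the gatekeeper, and only then spend money.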

  • View profile for Murray Robinson

    Removing barriers and building capability to achieve results

    13,231 followers

    As a client project manager, I consistently found that offshore software development teams from major providers like Infosys, Accenture, IBM, and others delivered software that failed a third of our UAT tests after the provider's independent dedicated QA teams had passed it. And when we got a fix back, it failed at the same rate, meaning some features cycled through Dev/QA/UAT ten times before they worked.

    I got to know some of the onshore technical leaders from these companies well enough for them to tell me confidentially that we were getting such poor quality because the offshore teams were full of junior developers who didn't know what they were doing and didn't use any modern software engineering practices like Test-Driven Development. And their dedicated QA teams couldn't prevent these quality issues because they were full of junior testers who didn't know what they were doing, didn't automate tests, and were ordered to test and pass everything quickly to avoid falling behind schedule. So poor development and QA practices were built into the system development process, and independent QA teams didn't fix it.

    Independent dedicated QA teams are an outdated and costly approach to quality. It's like a car factory that consistently produces defect-ridden vehicles only to disassemble and fix them later. Instead of testing and fixing features at the end, we should build quality into the process from the start. Modern engineering teams do this by working in cross-functional teams that use test-driven development to define testable requirements and continuously review, test, and integrate their work. This allows them to catch and address issues early, resulting in faster, more efficient, and higher-quality development.

    In modern engineering teams, QA specialists are quality champions. Their expertise strengthens the team’s ability to build robust systems, ensuring quality is integral to how the product is built from the outset. The old model, where testing is done after development, belongs in the past. Today, quality is everyone’s responsibility — not through role dilution but through shared accountability, collaboration, and modern engineering practices.
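The test-first workflow described above can be shown in miniature. In this Python sketch the free-shipping rule is an invented example; the point is the order of operations, with the test written before the code it pins down:

```python
# Step 1 in TDD: write the test first. It states the requirement
# ("orders of $50 or more ship free") before any implementation exists,
# and it fails until the code below is written.
def test_free_shipping_threshold():
    assert qualifies_for_free_shipping(50.00) is True
    assert qualifies_for_free_shipping(49.99) is False

# Step 2: write the simplest implementation that makes the test pass.
def qualifies_for_free_shipping(order_total: float, threshold: float = 50.00) -> bool:
    return order_total >= threshold

# Step 3: run the test; it now passes, and it stays in the suite as a
# permanent, executable statement of the requirement.
test_free_shipping_threshold()
```

Scaled up, this is what "testable requirements" means: the UAT criteria a separate QA team would check at the end are encoded as automated tests before development starts.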

  • View profile for Munna PraWiN

    Author, AI as a Partner | Product & Digital Health Leader | Delivering Tailored, Scalable Solutions for Startups 🇵🇸🕊🇺🇦

    30,658 followers

    High-quality code makes your work short-lived. Poorly written code ensures the company will always need your help. 😜

    Funny — yet many people still follow this mindset. Here’s the hard truth: across my career, from freshers to senior leaders, I’ve seen professionals who deliberately complicate work, avoid documentation, refuse to share knowledge, and quietly build a dependency around themselves. It’s not incompetence — it’s strategy. A strategy that slows teams down, breeds silos, and creates a dangerous single point of failure. And while it may offer short-term “job security,” it kills long-term team health, innovation, and trust.

    For leaders, these situations are the most challenging because the person often looks productive on the surface. But behind the scenes, the team becomes fragile, and delivery risks multiply. In engineering, we avoid single points of failure in systems. We should avoid them in people too.

    💡 Hard-Hitting Tips for Leaders to Fix This

    1️⃣ Make knowledge sharing non-negotiable: mandate documentation, code reviews, and walkthroughs. If knowledge lives only in someone’s head, that’s a risk — not a strength.
    2️⃣ Remove dependency incentives: reward collaboration, not silo-building. Make team outcomes matter more than individual heroics.
    3️⃣ Rotate responsibilities: let others touch the “critical” areas. If someone resists, that’s a red flag — not loyalty.
    4️⃣ Build a culture where transparency is expected: open communication, shared ownership, and regular alignments reduce the power of hidden information.
    5️⃣ Address the behaviour early: silence is approval. The longer you let it grow, the harder it becomes to fix.
    6️⃣ Make it safe for others to speak: often the team knows who the blocker is — but they need psychological safety to raise concerns.
    7️⃣ Lead by example: leaders who share knowledge freely create teams that do the same.

    Healthy teams grow when knowledge flows. Strong leaders rise when they dismantle silos. And real progress happens only when success is shared — not hoarded.

    #Leadership #TeamWork #EngineeringCulture #TechLeadership #TeamDynamics #OrgCulture #KnowledgeSharing #GrowthMindset #PeopleManagement #LeadershipTips #CriticalResource #SoftwareEngineering #MunnaPrawin #BUMI #SmartLife

  • View profile for Milan Jovanović

    Practical .NET and Software Architecture Tips | Microsoft MVP

    276,614 followers

    Still dealing with anemic domain models? Let’s fix that.

    In many legacy C# codebases, you’ll find services that do everything: pricing, validation, stock checks. All while your entities are just data bags. It works... until it doesn’t.

    Imagine this instead:
    1. Start by pushing one business rule into your domain class (stock check, discount logic, or credit limit).
    2. Encapsulate internal collections and hide constructors to protect invariants.
    3. Wrap domain operations behind expressive methods like SendInvitation(...) or Order.Create(...).
    4. The application layer then becomes pure orchestration: load the entity, call its behavior, save.

    With each small refactor, your domain becomes richer, your tests cleaner, and your code more resilient. No big rewrite needed.

    Want to see the full step-by-step refactor? Check out this article: https://lnkd.in/erhP4tNs

    Every pattern and tool has a purpose. It's up to you to understand when to use it.

    P.S. If you want a structured and detailed (multi-hour) guide to applying DDD in practice, I think you'll enjoy this: https://lnkd.in/eMyRkwcK
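The four steps above target C#, but the same shape can be sketched in Python with a hypothetical Order entity (Python can only hide a constructor by convention, unlike C#'s access modifiers):

```python
class Order:
    """Rich domain entity: the invariants live inside the class."""

    def __init__(self, customer_id):
        # By convention callers go through Order.create; Python cannot
        # truly hide a constructor the way C# private/internal can.
        self._customer_id = customer_id
        self._lines = []  # encapsulated collection, never handed out directly

    @classmethod
    def create(cls, customer_id):
        # Factory method: the one sanctioned way to build a valid Order.
        if not customer_id:
            raise ValueError("an order needs a customer")
        return cls(customer_id)

    def add_line(self, sku, quantity):
        # Business rule pushed into the domain class, not a service.
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self._lines.append((sku, quantity))

    @property
    def lines(self):
        return tuple(self._lines)  # read-only view protects the invariant

# Application layer as pure orchestration: load, call behavior, save.
order = Order.create("customer-42")
order.add_line("SKU-1", 2)
```

The service layer shrinks to loading the entity, calling one expressive method, and persisting it; the rules it used to hold now live next to the data they guard.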

  • View profile for Kasra Jadid Haghighi

    Senior software developer & architect | Follow me If you want to enjoy life as a software developer

    230,749 followers

    Best Practices for Writing Clean and Maintainable Code

    One of the worst headaches is trying to understand and work with poorly written code, especially when the logic isn’t clear. Writing clean, maintainable, and testable code—and adhering to design patterns and principles—is a must in today’s fast-paced development environment. Here are a few strategies to help you achieve this:

    1. Choose Meaningful Names: Opt for descriptive names for your variables, functions, and classes to make your code more intuitive and accessible.
    2. Maintain Consistent Naming Conventions: Stick to a uniform naming style (camelCase, snake_case, etc.) across your project for consistency and clarity.
    3. Embrace Modularity: Break down complex tasks into smaller, reusable modules or functions. This makes both debugging and testing more manageable.
    4. Comment and Document Wisely: Even if your code is clear, thoughtful comments and documentation can provide helpful context, especially for new team members.
    5. Simplicity Over Complexity: Keep your code straightforward to enhance readability and reduce the likelihood of bugs.
    6. Leverage Version Control: Utilize tools like Git to manage changes, collaborate seamlessly, and maintain a history of your code.
    7. Refactor Regularly: Continuously review and refine your code to remove redundancies and improve structure without altering functionality.
    8. Follow SOLID Principles & Design Patterns: Applying SOLID principles and well-established design patterns ensures your code is scalable, adaptable, and easy to extend over time.
    9. Test Your Code: Write unit and integration tests to ensure reliability and make future maintenance easier.

    Incorporating these tips into your development routine will lead to code that’s easier to understand, collaborate on, and improve.

    #CleanCode #SoftwareEngineering #CodingBestPractices #CodeQuality #DevTips
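As one concrete instance of point 8, the open/closed principle can be sketched as a small exporter registry in Python (the `EXPORTERS` table and format functions are invented for illustration): new formats are added by registering a function, not by editing `export`:

```python
import csv
import io
import json

def export_csv(rows):
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

def export_json(rows):
    return json.dumps(rows)

# Adding a format means registering one more function here,
# leaving the export() dispatcher closed to modification.
EXPORTERS = {"csv": export_csv, "json": export_json}

def export(rows, fmt):
    exporter = EXPORTERS.get(fmt)
    if exporter is None:
        raise ValueError(f"unknown format: {fmt}")
    return exporter(rows)
```

The dispatcher never grows an if/elif chain, so existing, tested code stays untouched when a new format (say, XML) is needed.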
