Common Anti-Patterns in Software Development


Summary

Common anti-patterns in software development are recurring mistakes or poor practices that can make projects unreliable, hard to maintain, or cause them to fail over time. These anti-patterns appear everywhere, from everyday coding to microservices, data pipelines, and web APIs, and often go unnoticed until they lead to bigger issues.

  • Spot duplication early: Regularly review your codebase to catch repeated logic, utilities, or configuration, and consolidate them to simplify maintenance.
  • Keep boundaries clear: Define clear responsibilities and ownership for services, pipelines, and APIs so that problems don’t fall through the cracks and everyone knows who is accountable.
  • Organize and validate: Use proper input validation, configuration management, and monitoring tools to ensure your software is secure, reliable, and easy to troubleshoot.
Summarized by AI based on LinkedIn member posts
  • Raman Walia — Software Engineer at Meta

    10 AI coding anti-patterns every software engineer should understand (written after generating and reviewing over 100k LOC with different models over the past year). AI coding agents are fast, but they make the same categories of mistakes over and over. Ten patterns that recur:

    [1] Duplication — AI does not search your codebase before writing code. If your shared utility lives three directories away, it does not exist as far as the agent is concerned. You end up with four implementations of the same thing in the same week.
    [2] Abstraction bypass — Even when shared infrastructure exists, the agent reaches for the raw library instead. It will use httpx.AsyncClient directly when your project has a BaseHTTPClient wrapper sitting right there with logging, retries, and auth baked in.
    [3] Error handling gaps — AI loves the happy path: bare except clauses that swallow everything, missing finally blocks, catch-all handlers that log and move on when the correct behavior is to propagate. Error handling gaps show up nearly 2x more often in AI code.
    [4] Type safety violations — When the agent cannot figure out the correct type, it reaches for `any` and moves on. The code compiles, the linter passes, and three weeks later you get a runtime error that nobody can trace.
    [5] Security anti-patterns — SQL string interpolation instead of parameterized queries, hardcoded secrets in source files, missing input validation on API endpoints. An estimated 36-40% of AI-generated code contains at least one security vulnerability.
    [6] Dead code and over-engineering — AI generates defensively: unused imports, abstractions with a single implementation, configuration systems for values that never change. It builds for hypothetical scenarios nobody asked for.
    [7] Debugging residue — AI agents work in a try-fail-retry loop and leave the old files behind. You end up with auth.py, auth_v2.py, and auth_new.py in the same directory. The agent works forward and never cleans up.
    [8] Async misuse — Blocking calls inside async functions, missing awaits on coroutines, synchronous I/O in event loops. These bugs pass linting and type checking and only surface under load, when it is too late.
    [9] Deprecated API usage — AI models are trained on historical code and do not distinguish current from deprecated. You will see datetime.utcnow(), deprecated since Python 3.12; pkg_resources instead of importlib; and React class components instead of hooks.
    [10] Fake test coverage — AI produces test suites that hit high coverage numbers and pass CI, but the tests validate the AI's own assumptions, not your intent. They mock so heavily they test nothing real, and they snapshot whatever the agent generated as "correct."

    The good news is that these mistakes are predictable, which means they are preventable. Linters, pre-commit hooks, and targeted code review catch most of them before they reach production.
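    Patterns [5] and [9] are easy to show concretely. A minimal sketch (using Python's standard library; the malicious input string is a made-up example) contrasting the deprecated naive timestamp with an aware one, and SQL string interpolation with a parameterized query:

    ```python
    import sqlite3
    from datetime import datetime, timezone

    # Pattern [9]: datetime.utcnow() is deprecated since Python 3.12 and
    # returns a *naive* datetime. Prefer an aware UTC timestamp:
    now = datetime.now(timezone.utc)
    assert now.tzinfo is not None  # aware, unlike utcnow()

    # Pattern [5]: never interpolate user input into SQL.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    user_input = "alice' OR '1'='1"  # hypothetical malicious input

    # Bad:  f"SELECT * FROM users WHERE name = '{user_input}'" would match every row.
    # Good: a parameterized query treats the input as data, not as SQL.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(len(rows))  # 0 — the injection attempt matches nothing
    ```

    The parameterized form is what a reviewer should insist on whenever generated code builds queries with f-strings or concatenation.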

  • Rocky Bhatia — Architect @ Adobe, GenAI & Systems at Scale

    Microservices don’t fail because they’re complex; they fail because teams repeat the same patterns without noticing. The architecture looks great on diagrams, but the real issues show up in production: tight coupling, retry storms, missing timeouts, shared databases, and services nobody actually owns. This breakdown highlights 16 anti-patterns that quietly slow teams down, and how to fix them before they turn into outages. A quick look at what goes wrong:
    ‣ Distributed monoliths create hidden coupling even when services look separate.
    ‣ Oversplitting turns every feature into its own service and adds chaos.
    ‣ Wrong boundaries appear when services follow technical layers instead of real business domains.
    ‣ Shared databases make deployments ripple across teams.
    ‣ Synchronous chains increase latency and failure risk.
    ‣ Chatty services multiply internal calls and hurt performance.
    ‣ God gateways absorb business logic and become new monoliths.
    ‣ No versioning breaks clients overnight.
    ‣ Retry storms turn small outages into large ones.
    ‣ Missing timeouts block threads and trigger cascading failures.
    ‣ Ignoring observability makes debugging nearly impossible.
    ‣ No ownership leaves services drifting with no accountability.
    ‣ Manual deployments slow down releases and increase risk.
    ‣ No consistency strategy breaks distributed workflows.
    ‣ Security as an afterthought exposes internal APIs to misuse.
    Microservices don’t need more services; they need better discipline. Fix the anti-patterns early, and the architecture finally works the way it was meant to.
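    Retry storms and missing timeouts are the two items above with the most mechanical fix: bound the retries and de-synchronize the clients. A minimal sketch (the `flaky` dependency and all parameter values are illustrative, not from any particular library):

    ```python
    import random
    import time

    def call_with_backoff(fn, max_attempts=4, base_delay=0.1, max_delay=2.0):
        """Retry a flaky call with capped exponential backoff and jitter.

        Bounded attempts and jittered delays are what prevent a retry
        storm: without them, every client hammers a struggling service
        in lockstep and turns a blip into an outage.
        """
        for attempt in range(max_attempts):
            try:
                return fn()
            except ConnectionError:
                if attempt == max_attempts - 1:
                    raise  # give up: propagate instead of retrying forever
                delay = min(max_delay, base_delay * 2 ** attempt)
                time.sleep(delay * random.uniform(0.5, 1.0))  # jitter de-syncs clients

    # Hypothetical flaky dependency: fails twice, then succeeds.
    calls = {"n": 0}
    def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise ConnectionError("upstream unavailable")
        return "ok"

    print(call_with_backoff(flaky))  # ok
    ```

    In production the same shape usually comes from a library or a service mesh policy rather than hand-rolled code, but the knobs (attempt cap, backoff, jitter) are the same.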

  • Umair Ahmad — Senior Data & Technology Leader, Omni-Retail Commerce Architect

    Microservices Anti-Patterns That Quietly Break Modern Systems
    Most microservices failures do not begin with outages. They begin with design choices that look harmless at first, until scale exposes them.
    • Tightly coupled services — Boundaries are weak, teams lose deployment independence, and one change starts impacting everything else.
    • Distributed monolith — The system looks distributed on paper. In reality, services cannot evolve or deploy without depending on one another.
    • No API versioning — Even a small contract update can disrupt consumers. Backward compatibility protects trust across services.
    • Too many microservices — Over-splitting creates operational drag. More services do not always mean better architecture.
    • Ignoring data consistency — Without a clear consistency strategy, transactions become unreliable. This is where Sagas and eventual consistency matter.
    • Synchronous dependency chains — Too many blocking calls create fragile service flows. One slowdown can trigger cascading failures.
    • No fault isolation — A single failing component should not take down the rest of the platform. Isolation patterns improve resilience.
    • Chatty communication — Excessive service-to-service calls increase latency fast. Coarse-grained APIs and async messaging reduce noise.
    • Lack of observability — When logging, tracing, and metrics are weak, failures become harder to detect and fix.
    • Shared database — When multiple services use one database, ownership becomes blurry. Independent data boundaries preserve autonomy.
    • Hardcoded configuration — If every config change needs a redeployment, agility suffers. Externalized configuration supports faster adaptation.
    Microservices are powerful, but only when architecture decisions support clarity, resilience, and scale.
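    The hardcoded-configuration item is the simplest to demonstrate. A minimal sketch of externalized configuration, reading from the environment with a safe fallback (the `SERVICE_TIMEOUT_SECONDS` variable name is invented for the example):

    ```python
    import os

    # Anti-pattern: a hardcoded constant means every tuning change is a redeploy.
    # TIMEOUT_SECONDS = 30

    def load_timeout(default=30):
        """Read the timeout from the environment so operators can tune it
        per environment without a code change; fall back to the default
        when the variable is missing or malformed rather than crashing."""
        raw = os.environ.get("SERVICE_TIMEOUT_SECONDS")  # hypothetical variable name
        try:
            return int(raw) if raw is not None else default
        except ValueError:
            return default

    os.environ["SERVICE_TIMEOUT_SECONDS"] = "5"
    print(load_timeout())  # 5
    ```

    Real systems usually push this further (config services, mounted secrets, feature flags), but the principle is the same: behavior changes should not require a rebuild.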

  • Arunkumar Palanisamy — Senior Data Engineer | AWS, Snowflake, Spark, Kafka, Python, SQL

    These aren't hypothetical. I have seen every one of them across messaging systems, data pipelines, and integration platforms.
    10 pipeline anti-patterns that look fine in dev and break in production:
    1. Blind appends without deduplication — The pipeline runs twice; now you have duplicates. No dedup key, no idempotency, just data you can't trust.
    2. No dead-letter strategy — Failed messages disappear or block the queue. Nobody knows what failed or why.
    3. Treating schemas as stable — A source renames a column. Your pipeline runs "successfully," producing wrong results for days.
    4. Unbounded retries — One bad message triggers infinite retries. A transient failure becomes a retry storm consuming all your resources.
    5. Monitoring only errors, not lag — Zero errors reported while consumer lag grows silently. Dashboards show yesterday's data.
    6. Hardcoded connections everywhere — One environment change breaks twelve pipelines. No config management, no abstraction.
    7. No contract between producer and consumer — Both sides assume the other will not change. When one does, the breakage is silent.
    8. Backfilling by rerunning everything — A one-day fix triggers a full historical reprocess, and production compute competes with the backfill.
    9. One pipeline doing everything — Ingestion, transformation, validation, and delivery in one job. When it fails, everything fails.
    10. No ownership for the boundary — The pipeline works. The data is wrong. Nobody owns the space between systems where problems live.
    These anti-patterns do not announce themselves; they accumulate. Anti-patterns are rarely exotic: they're the defaults we never challenged. Which of these is quietly running in your system right now?
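    The fix for anti-pattern 1 is to key every write on a stable identifier so a re-run is a no-op. A minimal sketch using a dict as a stand-in for the target table (`event_id` and the batch contents are invented for the example):

    ```python
    def upsert_events(table, events, key="event_id"):
        """Merge new events into `table`, keyed on a dedup id.

        A pipeline that re-runs would blindly append the same rows twice;
        keying every write on a stable id makes the run idempotent, so
        replaying the same input leaves the table in the same state.
        """
        for event in events:
            table[event[key]] = event  # last write wins per key
        return table

    batch = [
        {"event_id": "e1", "amount": 10},
        {"event_id": "e2", "amount": 20},
    ]

    table = {}
    upsert_events(table, batch)
    upsert_events(table, batch)  # accidental re-run: no duplicates
    print(len(table))  # 2
    ```

    In a warehouse the same idea is a MERGE (or INSERT ... ON CONFLICT) on the dedup key instead of a plain INSERT; the point is that the write path, not a cleanup job, enforces uniqueness.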

  • Interview Conversation — Role: RTE. Topic: RTE anti-patterns.
    👴 Interviewer: "Can you explain some common RTE anti-patterns and how you avoid them?"
    🧑 Candidate: "I make sure to follow the SAFe framework and ensure all teams stick to the processes."
    👴 Interviewer: "Let’s dig deeper. Imagine you’re an RTE, and teams on the ART are becoming disengaged because they feel overly controlled by the process. Additionally, dependency management is creating delays, and leaders are questioning the value delivered. How would you address this?"
    🧑 Candidate: "I’d ask the teams to follow the process better and ensure dependencies are tracked."
    What a skilled RTE should have answered:
    Anti-patterns can disrupt the flow and outcomes of an Agile Release Train (ART), and recognizing them early is key to effective leadership.
    ✍️ One common anti-pattern is over-controlling teams instead of empowering them. If teams feel micromanaged, they lose motivation and creativity. In such situations, I would shift focus to enabling teams by providing the tools and autonomy to resolve blockers, while stepping back to act as a servant leader.
    ✍️ Another anti-pattern is treating PI Planning as a ceremonial event. It’s critical to ensure PI Planning is a collaboration space, not a checklist activity. For example, I’ve facilitated breakout sessions where teams openly challenge timelines, ensuring dependencies and risks are genuinely addressed instead of glossed over.
    ✍️ Finally, mismanaging metrics is a major anti-pattern. Metrics like predictability or feature completion should not be used to assign blame but to identify improvement areas. For example, I once introduced a "Learning from Metrics" workshop to highlight trends and foster a no-blame culture, which transformed leadership conversations into constructive dialogues.
    By avoiding these anti-patterns, an RTE can drive alignment, improve team engagement, and deliver consistent value across the ART.
    💡 Key takeaway: A great RTE doesn’t just enforce processes; they foster collaboration, empower teams, and create a culture of continuous improvement.

  • Anton Martyniuk — .NET Software Architect | Microsoft MVP

    12 Web API Anti-Patterns That Are Silently Killing Your .NET Apps
    In my 12 years of experience, I have reviewed dozens of codebases. Most teams don't even realize they have these problems until something breaks in production. Here is the full list of anti-patterns and how to fix them:
    1. Fat controllers ❌ Business logic, validation, and data access all inside controllers. ✅ Move logic into services, handlers, or MediatR pipelines.
    2. No input validation ❌ Accepting raw user input without any checks. ✅ Use FluentValidation or DataAnnotations to validate every request.
    3. Returning raw exceptions ❌ Exposing stack traces and internal errors to API clients. ✅ Use global exception handling and return ProblemDetails.
    4. Blocking async with .Result or .Wait() ❌ This ties up thread pool threads and destroys scalability under load. ✅ Use async/await all the way down.
    5. Ignoring CancellationTokens ❌ Wasting server resources on requests the client has already abandoned. ✅ Pass a CancellationToken through every async endpoint and query.
    6. No pagination ❌ Returning entire database tables in a single response. ✅ Add pagination, filtering, and sorting on every collection endpoint.
    7. Wrong HTTP status codes ❌ Returning 200 OK for everything, even errors. ✅ Use proper codes: 400, 404, 409, 422, 500.
    8. Over-fetching data ❌ Querying all columns and joins when you only need a few fields. ✅ Use projections with Select() and return only what the client needs.
    9. Returning EF entities as API responses ❌ Exposing your database models directly to the client. ✅ Map to DTOs or response models to control serialization.
    10. No rate limiting ❌ Leaving your API wide open to abuse and DDoS. ✅ Use the built-in rate limiting middleware in ASP.NET Core.
    11. No observability ❌ Zero visibility into what is happening inside your API. ✅ Add structured logging, distributed tracing, and metrics with OpenTelemetry.
    12. No idempotency on mutating endpoints ❌ Retries create duplicate records and unwanted side effects. ✅ Use idempotency keys on POST operations.
    👉 You don't need to fix all 12 at once. Pick the top 3 that hurt your project the most and fix those first. Which of these anti-patterns have you seen the most in your projects, and how did you fix them?
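    The idempotency-key idea in point 12 is language-agnostic, so here is a minimal sketch in Python rather than C# (the handler, store, and key names are all invented for the example; a real service would keep the key-to-response store in something shared like Redis, with a TTL):

    ```python
    # In-memory store of responses already produced per idempotency key.
    _responses = {}

    def create_order(idempotency_key, payload):
        """Handle a POST idempotently: replaying the same key returns the
        original result instead of creating a duplicate record."""
        if idempotency_key in _responses:
            return _responses[idempotency_key]
        order = {"id": len(_responses) + 1, **payload}  # stand-in for real creation
        _responses[idempotency_key] = order
        return order

    first = create_order("key-123", {"item": "book"})
    retry = create_order("key-123", {"item": "book"})  # client retry after a timeout
    print(first == retry)  # True — the retry did not create a duplicate order
    ```

    The client generates the key (typically a UUID per logical operation) and resends it unchanged on retries; the server's only job is to remember what it already did for that key.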

  • A favorite anti-pattern of mine is individual work assignments vs. team ownership of the work. The argument for the former is "it holds people accountable!" That's not a strawman; that's literally one of the arguments. Another is less patronizing: "I can point to work each person has done at review time."
    Our goal is not to optimize for HR. Our goal is to fulfill business goals. When you assign work individually, everyone looks only at their own work. If someone is struggling or is unexpectedly out, the team loses visibility and that work is not delivered. HR is happy, but again, that's not our goal.
    The team owns the team's work. You can mob, pair, or work on small changes as individuals, but the team is accountable and should prioritize things appropriately. Our highest priority is finishing things that have been started. If you document your process flow from refining work to delivery, the closer work is to production, the higher its priority. For example, if someone needs code reviewed, that is a higher priority than coding. If someone is working on something already, ask if you can help them finish before you start new work. Started work is a higher priority than unstarted work.
    Looking busy isn't the goal. It's much better to finish 80% of the things we planned than to have everything 80% complete.

  • Neha Bhargava — Senior Software Engineer | JavaScript, React, Angular, Node

    10 React Antipatterns Every Web Developer Should Know
    React is quite friendly and has a low learning curve, but as your projects grow, you can run into problems. These patterns will help you solve them, and may even help you in interviews.
    1. Mixing state across components — Storing the same state in multiple components causes redundancy and inconsistencies, which leads to bugs and unpredictable behavior. Lift state up to a common parent or use global state management like Redux or the Context API.
    2. Overusing inline functions — Defining functions directly in JSX leads to unnecessary re-renders and degrades performance. Define functions outside the render scope or use `useCallback` to memoize them.
    3. Using side effects in render — Running side effects during render can cause infinite loops or inconsistent state, making the UI unresponsive. Use `useEffect` to handle side effects correctly.
    4. Complex state logic in components — Complex logic inside component state makes the component hard to manage and prone to bugs. Extract it into custom hooks or utility functions.
    5. Directly mutating state — Mutating state directly, instead of going through the state updater, causes React to miss updates, so stale or incorrect data is displayed. Always use the state updater function to modify state.
    6. Ignoring key props in lists — Missing unique `key` props for list items leads to misidentification and improper updates, causing elements to render out of order. Provide a unique `key` for each list item.
    7. Excessive re-renders — Frequent state or prop updates trigger unnecessary re-renders and slow down your application. Use `React.memo` and `useMemo` to optimize rendering.
    8. Misusing `useEffect` dependencies — Incorrect `useEffect` dependencies can cause effects to run too often or not at all, leading to performance issues or missed updates. List all dependencies correctly to ensure proper effect execution.
    9. Deeply nested components — Excessive nesting makes code harder to read and manage, and increases the risk of bugs and tight coupling. Break components down into smaller, reusable ones.
    10. Ignoring prop drilling issues — Drilling props through many layers makes data flow hard to manage and complicates maintenance and scaling. Use context or a state management library to avoid excessive prop drilling.

  • Joe Devon — Founder: A11y Audits, #GAAD | Podcaster | Public Speaker

    I'm running an experiment to see if giving LLMs personal context about disability produces better accessibility outcomes than traditional role-based instructions. Instead of the typical accessibility system prompts, I'm going to swap in the following prompt for the next week or two to see if it works better. I'm using XML tags because LLMs (especially Claude) like structured prompts and data:

    <prompt>
    <role>
    You are a colorblind Senior Software Engineer specializing in web accessibility and code quality. Before software engineering, you worked in visual design, which gave you a strong foundation in accessible color systems, contrast ratios, and visual hierarchy.
    </role>
    <family_context>
    * Your father is blind - Uses JAWS and NVDA daily
    * Your mother is Deaf - Relies on captions and visual alternatives
    * Your sister has quadriplegia - Uses Dragon NaturallySpeaking and adaptive switches
    * Your daughter has cerebral palsy - Uses eye-gaze tracking and needs extended interaction times
    You've watched them use the web for years. You know what works and what doesn't.
    </family_context>
    <engineering_principles>
    * Semantic HTML over ARIA workarounds
    * Native elements over custom widgets
    * Keyboard accessibility is mandatory
    * Test with actual assistive technology and users with disabilities
    * Remove real barriers, not just pass audits
    * If it's redundant for AT, don't add it
    </engineering_principles>
    <anti_patterns>
    Common mistakes you catch and fix:
    * `<div role="button">` instead of `<button>`
    * Adding ARIA roles to semantic elements (e.g., `<nav role="navigation">`)
    * Focus indicators removed for aesthetics
    * Click targets under 44×44px
    * Color as the only indicator
    * Keyboard traps in modals
    * Auto-play without controls
    </anti_patterns>
    </prompt>

    Try it out if you like and give me feedback on improving the prompt. What has worked well for you?
