How to Overcome AI-Driven Coding Challenges

Explore top LinkedIn content from expert professionals.

Summary

AI-driven coding challenges refer to the unique obstacles programmers encounter when using artificial intelligence tools to write or debug code, requiring new strategies to stay in control and ensure quality. Overcoming these challenges means learning how to communicate clearly with AI, review its outputs carefully, and use it for the right tasks while maintaining human oversight.

  • Clarify your goals: Be specific about what you want from the AI, outlining your requirements and constraints so it generates useful, targeted code or suggestions.
  • Review and verify: Always check the code that the AI provides, run tests, and make sure it aligns with your standards before integrating it into your projects.
  • Use AI wisely: Let AI handle routine or repetitive tasks like boilerplate code and documentation, but keep critical design decisions and problem-solving in your own hands.
Summarized by AI based on LinkedIn member posts
  • Shrey Shah

    AI @ Microsoft | I teach harness engineering | Cursor Ambassador | V0 Ambassador

    16,881 followers

    After spending 1,000+ hours coding with AI in Cursor, here's what I learned:
    1️⃣ Treat AI like your forgetful genius friend: brilliant, but always needing reminders of your goals.
    2️⃣ Context rules everything. Regularly reset, condense, and document your sessions. Your efficiency skyrockets when context is clear.
    3️⃣ Start by sharing your vision. AI can read code but not minds; clarity upfront saves countless revisions.
    4️⃣ Premium models pay off. Gemini 2.5 Pro (1M tokens) or Claude 4 Sonnet are worth every penny when tackling tough problems.
    5️⃣ Brief AI as you would onboard a junior dev: clearly explain architecture, constraints, and goals upfront.
    6️⃣ Leverage rules files as your hidden superpower. Preset your coding patterns and workflows so you start smart every time (see the sketch after this post).
    7️⃣ Collaborate with AI first. Discuss and validate ideas before writing any code; it dramatically reduces wasted effort.
    8️⃣ Keep everything documented. Markdown-based project logs make complex tasks manageable and ensure seamless handovers.
    9️⃣ Watch your context window closely. Past the halfway point, productivity dips; stay sharp with quick resets and concise summaries.
    🔟 Version-control your rules. Team-wide knowledge-sharing ensures consistent quality and rapid onboarding.
    If these insights help you level up, ♻️ reshare to boost someone else's AI coding skills today!
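
    A minimal sketch of the kind of rules file point 6️⃣ describes. Cursor can load project rules from a .cursorrules file (or the .cursor/rules directory); the specific rules below are illustrative assumptions, not the author's actual setup.

```text
# .cursorrules (illustrative example, not the author's actual rules)

## Workflow
- State a short plan before editing; wait for approval on multi-file changes.
- Keep context lean: summarize long sessions into docs/worklog.md and reference it instead of re-pasting history.

## Coding patterns
- Follow the existing layout: handlers in api/, business logic in services/, tests in tests/.
- Any behavior change ships with a test; run the suite before declaring the task done.
```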

  • Ado Kukic

    Community, Claude, Code

    11,907 followers

    I've been using AI coding tools for a while now & it feels like every 3 months the paradigm shifts. Anyone remember putting "You are an elite software engineer..." at the beginning of your prompts, or manually providing context? The latest paradigm is Agent Driven Development, & here are some tips that have helped me get good at taming LLMs to generate high-quality code.
    1. Clear & focused prompting
    ❌ "Add some animations to make the UI super sleek"
    ✅ "Add smooth fade-in & fade-out animations to the modal dialog using the motion library"
    Regardless of what you ask, the LLM will try to be helpful. The less it has to infer, the better your result will be.
    2. Keep it simple, stupid
    ❌ Add a new page to manage user settings, also replace the footer menu from the bottom of the page to the sidebar, right now endless scrolling is making it unreachable & also ensure the mobile view works, right now there is weird overlap
    ✅ Add a new page to manage user settings, ensure only editable settings can be changed.
    Trying to have the LLM do too many things at once is a recipe for bad code generation. One-shotting multiple tasks has a higher chance of introducing bad code.
    3. Don't argue
    ❌ No, that's not what I wanted, I need it to use the std library, not this random package, this is the 4th time you've failed me!
    ✅ Instead of using package xyz, can you recreate the functionality using the standard library?
    When the LLM fails to provide high-quality code, the problem is most likely the prompt. If the initial prompt is not good, follow-on prompts will just make a bigger mess. I will usually allow one follow-up to try to get back on track, & if it's still off base, I will undo all the changes & start over. It may seem counterintuitive, but it will save you a ton of time overall.
    4. Embrace agentic coding
    AI coding assistants have access to a ton of different tools, can do a ton of reasoning on their own, & don't require nearly as much hand-holding. You may feel like a babysitter instead of a programmer. Your role as a dev becomes much more fun when you can focus on the bigger picture and let the AI take the reins writing the code.
    5. Verify
    With this new ADD paradigm, a single prompt may result in many files being edited. Verify that the code generated is what you actually want. Many AI tools will now auto-run tests to ensure that the code they generated is good.
    6. Send options, thx
    I had a boss who would always ask for multiple options & often email saying "send options, thx". With agentic coding, it's easy to ask for multiple implementations of the same feature. Whether it's UI or data models, asking for a 2nd or 10th opinion can spark new ideas on how to tackle the task at hand & an opportunity to learn.
    7. Have fun
    I love coding, been doing it since I was 10. I've done OOP & functional programming, SQL & NoSQL, PHP, Go, Rust, & I've never had more fun or been more creative than coding with AI. Coding is evolving, have fun & let's ship some crazy stuff!

  • Arvind Telharkar

    Software Development Engineer at AWS Healthcare AI | Healthcare AI Infrastructure | Applied AI | Agentic AI | Computer Science | Artificial Intelligence | Software development

    20,961 followers

    AI won't replace engineers. But engineers who ship 5x faster & safer will replace those who don't. I've been shipping code with AI assistance at AWS since 2024, but it took me a few weeks to figure out how to actually use AI tools without fighting them. Most of what made the difference isn't in any tutorial. It's the judgment you build by doing. Here's what worked for me:
    1. Take the lead.
    • AI doesn't know your codebase, your team's conventions, or why that weird helper function exists. You do. Act like the tech lead in the conversation.
    • Scope your asks tightly. "Write a function that takes a list of user IDs and returns a map of user ID to last login timestamp" works. "Help me build the auth flow" gets you garbage.
    • When it gives you code, ask it to explain the tradeoffs.
    2. Use it for the boring & redundant things first.
    • Unit tests are the easiest win. Give it your function, tell it the edge cases you care about, and let it generate the test scaffolding (see the sketch after this post).
    • Boilerplate like mappers, config files, CI scripts. Things that take 30 minutes but need zero creativity.
    • Regex is where AI shines. Describe what you want to match and it hands you a working pattern in seconds.
    • Documentation too. Feed it your code, ask for inline comments or a README draft. You'll still edit it, but the first draft is free.
    3. Know when to stop prompting and start coding.
    • AI hallucinates confidently. It will tell you a method exists when it doesn't. It will invent API parameters. Trust but verify.
    • Some problems are genuinely hard. Race conditions, complex state management, weird legacy interactions. AI can't reason about your system the way you can.
    • Use AI to get 60-70% of the way there fast, then take over. The last 30% is where your judgment matters.
    4. Build your own prompt library.
    • Always include language, framework, and constraints. "Write this in Python <desired-version>, no external dependencies, needs to run in Lambda" gets you usable code. "Write this in Python" gets you a mess.
    • Context is everything. Paste the relevant types, the function signature, the error message. The more AI knows, the less you fix.
    • Over time, you'll develop intuition for what AI is good at and what it's bad at. That intuition is the core skill.
    AI tools are multipliers. If your fundamentals are weak, they multiply confusion. If your fundamentals are strong, they multiply speed & output. Learn to work with them; it will give you a ton of ROI.
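
    A sketch of what point 2 looks like in practice: hand the AI a small function plus the edge cases you care about, and let it produce the scaffolding. parse_duration and its edge cases are made-up examples, not from the post.

```python
# Hypothetical function handed to the AI, plus the edge cases named in the prompt:
# empty string, missing unit, unknown unit, zero.
def parse_duration(text: str) -> int:
    """Parse strings like '5m' or '2h' into seconds."""
    units = {"s": 1, "m": 60, "h": 3600}
    if not text or text[-1] not in units or not text[:-1].isdigit():
        raise ValueError(f"bad duration: {text!r}")
    return int(text[:-1]) * units[text[-1]]

# The kind of test scaffolding the AI can generate from that prompt.
import pytest

@pytest.mark.parametrize("bad", ["", "15", "10d", "m"])
def test_rejects_malformed_input(bad):
    with pytest.raises(ValueError):
        parse_duration(bad)

def test_parses_minutes_and_hours():
    assert parse_duration("5m") == 300
    assert parse_duration("2h") == 7200

def test_zero_is_allowed():
    assert parse_duration("0s") == 0
```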

  • Puneet Patwari

    Principal Software Engineer @Atlassian | Ex-Sr. Engineer @Microsoft | Sharing insights on SW Engineering, Career Growth & Interview Preparation

    67,748 followers

    3 months ago, Meta launched their new AI-enabled coding round and made it part of the standard loop. In the last 10 weeks, I have helped 2 Senior Engineers clear Meta's loops, and if you are preparing for it, these are the three big things I want you to remember:
    [1] Use AI like a junior pair, not as your replacement
    - You own the solution. Let the model handle boilerplate, parsing, and test scaffolding, but you decide the approach.
    - Ask for small chunks of code, not giant files. Smaller pieces are easier to understand and fix.
    - Review everything like a PR. Check types, edge cases, and error paths before you trust any suggestion.
    [2] Train on the exact scenarios
    - Practice on a CoderPad-style setup or tools like Cursor, not only on LeetCode. Get used to AI in the editor.
    - Rehearse three things: building a small feature, extending an unfamiliar multi-file codebase, and debugging failing tests.
    - For each task, list edge cases on paper first. Then write or edit unit tests, and only then touch the main code (a small illustration follows this post).
    [3] Pipeline your workflow, or the 60 minutes will vanish
    - While the AI is generating code, you prepare the next prompt, think about edge cases, or scan other files.
    - When tests run, read the logs and mark suspicious areas so you know exactly where to look next.
    - Learn to tighten your prompts so you get focused, high-signal answers instead of walls of verbose output. People who can get more done with fewer, sharper prompts will have a real advantage as this format spreads across the industry.
    - Keep talking at a calm pace. Share what you are checking and why, instead of going silent or narrating every keystroke.
    Use AI tools in your daily work and side projects now, not just one week before the interview. If you can stay in control of the problem, use the model for speed, and verify everything with tests, you will be in a much better position for Meta's new round, and for similar rounds that other companies may roll out next.
    P.S.: Say Hi on Twitter: https://lnkd.in/g9H82Q98
    P.P.S.: Feel free to reach out to me if you're preparing for a switch, want to chat about interview preparation, or how to move to the next level in your career: https://lnkd.in/guttEuU7
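
    A compact illustration of the order recommended in [2]: edge cases noted first, tests second, implementation last. merge_intervals is a hypothetical practice task, not an actual Meta question.

```python
# Edge cases listed before writing anything:
#   - empty input
#   - a single interval
#   - touching intervals (end == next start) should merge

# Tests written next; they define the contract the implementation must satisfy.
def test_empty_input():
    assert merge_intervals([]) == []

def test_single_interval():
    assert merge_intervals([(1, 3)]) == [(1, 3)]

def test_touching_intervals_merge():
    assert merge_intervals([(1, 2), (2, 4)]) == [(1, 4)]

# Main code touched only once the tests exist (and the AI can be asked to fill this in).
def merge_intervals(intervals):
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(i) for i in merged]
```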

  • Esco Obong

    Sr SWE @ Airbnb | Follow for LLMs, LeetCode + System Design & Career Growth (ex-Uber)

    37,478 followers

    If you find yourself constantly refactoring AI-generated code, you are skipping the most important step: The Conversation. Here's the workflow that gives me high-quality code on the first write to disk:
    1. Start with a conversation, not code
    • Explain the problem to the LLM in detail.
    • Tell it explicitly: "Propose an approach first. Show alternatives. Do not write code until I approve."
    • Review the proposal, poke holes in it, iterate, then let it generate code. Treat it like a cognitive power tool, not an autocomplete.
    2. Pick models that actually follow instructions
    • In my experience, the GPT-5 high variant with Codex is the best at respecting constraints and following "do not code yet" style directives.
    • Claude Sonnet 4.5 and Claude Opus 4 are solid runners-up.
    • Many other models tend to ignore "do not code" and sneak in extra stuff you never asked for.
    3. Set your coding standards once
    • Have something like a CLAUDE.md or AGENTS.md file (or equivalent system prompt) that defines:
      • Coding style
      • Architecture preferences
      • Clean code principles
    This becomes the reusable "engineering brain" the model loads every time, so it writes high-quality code by default (a sketch follows this post).
    4. Control your context size
    • Don't let the thread get bloated.
    • Use commands like /compact (or your tool's equivalent) frequently.
    • Long, noisy context = degraded output quality.
    This workflow has made my coding sessions faster and more predictable, and it has dramatically reduced the amount of refactoring I need to do, because all the guidance is given up front.
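
    A minimal sketch of the standards file described in point 3. The section names and rules are assumptions; the post only says it should cover coding style, architecture preferences, and clean-code principles.

```markdown
<!-- AGENTS.md / CLAUDE.md (hypothetical contents) -->

## Coding style
- Python 3.11, type hints everywhere, formatted with ruff.

## Architecture preferences
- Thin route handlers; business logic lives in services/; no circular imports.

## Clean code principles
- Functions under ~40 lines; delete dead code instead of commenting it out.
- Propose an approach and wait for approval before writing any code.
```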

  • Anshul Sao

    Building Praxis | Co-founder & CTO @ Facets

    4,709 followers

    One of the biggest challenges with using AI coding tools like Aider and Cursor in brownfield projects is the time lost in setting context. Every time a new developer (or even an AI assistant) joins the project, they have to figure out which files are needed for a particular task and how they connect. We tried something simple, and it made a huge difference.
    📌 Instead of letting AI generate code and moving on, we ask it to document what each file does once a task is completed. We commit this to a context.yaml file alongside the code (a sketch of the idea follows this post). The next person—or AI tool—that needs to work on it has instant context. No more digging through files trying to understand what's happening.
    📌 Another small but effective hack: saving useful AI prompts as part of the codebase. If we find a great prompt for generating Swagger docs, writing a new API, or refactoring legacy code, we commit it in a /prompts/ folder. It's like leaving behind a playbook that speeds up future work.
    📌 The best part? Now you can ask the AI agent which files to include for a given task. Instead of scanning the entire codebase, the AI can use the context.yaml to suggest the right files. AI in collaboration is much more powerful than individual capabilities.
    These small changes have saved us hours of effort. AI is great at writing code, but it's even better when we help it understand the project. How do you manage context when using AI in brownfield projects? I'd love to hear what's working for you. 👇
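
    A sketch of what such a context.yaml might look like. The structure and file names are hypothetical; the post only says each file's purpose is documented and committed alongside the code, with reusable prompts kept in /prompts/.

```yaml
# context.yaml (hypothetical structure)
files:
  api/users.py: REST endpoints for user CRUD; relies on services/auth.py for token checks
  services/auth.py: JWT validation and session helpers; used by every api/ module
  pipelines/ingest.py: nightly ingest from S3 into the warehouse
prompts:
  swagger_docs: prompts/generate_swagger.md        # prompt that worked well for API docs
  refactor_legacy: prompts/refactor_legacy.md      # prompt for cleaning up legacy modules
```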

  • Sarthak Rastogi

    AI engineer | Posts on agents + advanced RAG | Experienced in LLM research, ML engineering, Software Engineering

    25,251 followers

    AI-generated code isn't just for weekend projects and vibe-coding. Airbnb just did an LLM-driven code migration that took just 6 weeks of engineering time instead of the estimated 1.5 years.
    - They kicked off the migration by breaking the process down into a series of automated validation and refactor steps. This state-machine-like approach moved each file through stages, letting the pipeline handle files while also keeping track of progress.
    - They built in retry loops to improve success rates. Each time a file encountered an error, the system retried the validation and prompted the LLM with updated context and errors. This brute-force method allowed many simple-to-medium complexity files to be fixed (see the sketch after this post).
    - To handle more complex files, they significantly increased the context fed into the prompts. Each prompt drew from a lot of related files and examples, so the LLM had the best chance of understanding the specific patterns and requirements needed for the migration.
    - After reaching a 75% success rate, the team took a systematic approach to tackle the remaining 900 files. They introduced a system that commented on the migration status, allowing them to identify common pitfalls and refine their scripts accordingly.
    - Using a "sample, tune, and sweep" strategy, they iteratively improved their scripts over four days, pushing the success rate from 75% to 97%. This let them significantly reduce the remaining workload while making sure that thorough test coverage remained intact.
    Link to the blog post from Airbnb: https://lnkd.in/gPmYFQAP
    #AI #LLMs #GenAI
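
    The retry loop described above, sketched in Python. This is not Airbnb's actual pipeline: call_model is a placeholder for whatever LLM client you use, and the validation here is only a syntax check standing in for their real test and lint stages.

```python
# Illustrative sketch of a per-file "validate, then retry with the errors" stage.
def call_model(prompt: str) -> str:
    """Placeholder: send the prompt to your LLM client and return the rewritten file."""
    raise NotImplementedError("wire this up to your model of choice")

def validate(source: str) -> str | None:
    """Return an error message if the migrated file is invalid, else None."""
    try:
        compile(source, "<migrated>", "exec")   # cheap stand-in for running the real tests
        return None
    except SyntaxError as exc:
        return str(exc)

def migrate_file(original: str, related_context: str, max_retries: int = 5) -> str | None:
    prompt = f"Migrate this file.\n\nRelated context:\n{related_context}\n\nFile:\n{original}"
    last_error = None
    for _ in range(max_retries):
        attempt = prompt if last_error is None else f"{prompt}\n\nPrevious attempt failed with:\n{last_error}"
        candidate = call_model(attempt)
        last_error = validate(candidate)
        if last_error is None:
            return candidate            # passes validation; move on to the next file
    return None                         # leave for the "sample, tune, and sweep" pass
```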

  • Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    41,884 followers

    Most AI coders (Cursor, Claude Code, etc.) still skip the simplest path to reliable software: make the model fail first. Test-driven development turns an LLM into a self-correcting coder. Here's the cycle I use with Claude (works for Gemini or o3 too):
    (1) Write failing tests – "generate unit tests for foo.py covering logged-out users; don't touch implementation." (A sketch follows this post.)
    (2) Confirm the red bar – run the suite, watch it fail, commit the tests.
    (3) Iterate to green – instruct the coding model to "update foo.py until all tests pass. Tests stay frozen!" The AI agent then writes, runs, tweaks, and repeats.
    (4) Verify + commit – once the suite is green, push the code and open a PR with context-rich commit messages.
    Why this works:
    -> Tests act as a concrete target, slashing hallucinations
    -> Iterative feedback lets the coding agent self-correct instead of over-fitting a one-shot response
    -> You finish with executable specs, cleaner diffs, and auditable history
    I've cut debugging time in half since adopting this loop. If you're agentic-coding without TDD, you're leaving reliability and velocity on the table. This and a dozen more tips for developers building with AI are in my latest AI Tidbits post: https://lnkd.in/gTydCV9b
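
    Step (1) as a concrete sketch. foo.py comes from the post, but get_dashboard and its expected logged-out behavior are assumptions invented for illustration.

```python
# test_foo.py (hypothetical): failing tests committed before any implementation work.
from foo import get_dashboard   # assumed function; the post only names foo.py

def test_logged_out_user_is_redirected_to_login():
    response = get_dashboard(user=None)
    assert response.status_code == 302
    assert response.headers["Location"] == "/login"

def test_logged_out_user_sees_no_private_data():
    response = get_dashboard(user=None)
    assert "account_balance" not in response.body
```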

  • Adrian Brudaru

    Open source pipelines - dlthub.com

    14,026 followers

    The hard part of AI-assisted coding isn't generation. It's recovery. In data engineering, long-term success depends less on how fast you build a pipeline and more on how quickly you can recover when it breaks. Successful LLM-powered coding over the long term is about how fast you can debug, verify, and course-correct after the model gives you something wrong but confident.
    The actual loop looks like this:
    - Inspect every unknown function or API call (one way to mechanize this is sketched after this post)
    - Interrogate the model's assumptions
    - Roll back, try a different branch
    - Test and validate manually
    - Only then: commit and move forward
    Today, this recovery loop is mostly manual. Split tabs, copied prompts, intuition. Everyone's optimizing for generation speed, but reliability lives in the recovery path. That's where today's tools still fall short.
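
    One way to mechanize the "inspect every unknown function or API call" step, as a rough Python sketch; the json examples are arbitrary.

```python
import importlib
import inspect

def call_exists(module_name: str, attr_path: str) -> bool:
    """Check that a model-suggested call actually exists in the installed library."""
    obj = importlib.import_module(module_name)
    for part in attr_path.split("."):
        if not hasattr(obj, part):
            return False
        obj = getattr(obj, part)
    return callable(obj) or not inspect.ismodule(obj)

print(call_exists("json", "dumps"))      # True: a real function
print(call_exists("json", "to_string"))  # False: a hallucinated method
```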
