We know LLMs can substantially improve developer productivity, but the outcomes are not consistent. An extensive research review uncovers specific lessons on how best to use LLMs to amplify developer outcomes.

💡 Leverage LLMs for Improved Productivity. LLMs enable programmers to accomplish tasks faster, with studies reporting up to a 30% reduction in task completion times for routine coding activities. In one study, users completed 20% more tasks with LLM assistance than with manual coding alone. However, these gains vary with task complexity and user expertise; for complex tasks, the time spent understanding LLM responses can offset productivity improvements. Tailored training can help users maximize these advantages.

🧠 Encourage Prompt Experimentation for Better Outputs. LLMs respond variably to phrasing and context, with studies showing that elaborated prompts led to 50% higher response accuracy than single-shot queries. For instance, users who refined prompts by breaking tasks into subtasks achieved superior outputs in 68% of cases. Organizations can build libraries of optimized prompts to standardize and enhance LLM usage across teams.

🔍 Balance LLM Use with Manual Effort. A hybrid approach, blending LLM responses with manual coding, improved solution quality in 75% of observed cases. For example, users often relied on LLMs for repetitive debugging tasks while manually reviewing complex algorithmic code. This strategy not only reduces cognitive load but also helps maintain the accuracy and reliability of final outputs.

📊 Tailor Metrics to Evaluate Human-AI Synergy. Metrics such as task completion rates, error counts, and code review times reveal the tangible impacts of LLMs. Studies found that LLM-assisted teams completed 25% more projects with 40% fewer errors than teams using traditional methods. Pre- and post-test evaluations of users' learning showed a 30% improvement in conceptual understanding when LLMs were used effectively, highlighting the need for consistent performance benchmarking.

🚧 Mitigate Security Risks in LLM Use. LLMs can inadvertently generate insecure code; in one study, 20% of outputs contained vulnerabilities such as unchecked user inputs. However, when paired with automated code review tools, error rates dropped by 35%. To reduce risks, developers should combine LLMs with rigorous testing protocols and ensure their prompts explicitly address security considerations.

💡 Rethink Learning with LLMs. While LLMs improved learning outcomes by 32% on tasks requiring code comprehension, they sometimes hindered the development of manual coding skills: in some studies, post-LLM groups performed worse on syntax-based assessments. Educators can mitigate this by integrating LLMs into assignments that focus on problem-solving while requiring manual coding for foundational skills, ensuring balanced learning trajectories.

Link to paper in comments.
How LLMs Improve Coding Tasks
Summary
Large language models (LLMs) use artificial intelligence to assist with coding tasks by generating, reviewing, and improving code in response to prompts from users. These tools can speed up routine development work, help catch errors, and support collaboration, but still require human guidance and oversight for accuracy and quality.
- Iterative improvement: Try refining prompts or asking for code reviews from LLMs multiple times to get more robust and optimized solutions.
- Combine human review: Always pair LLM-generated code with manual checks and thorough testing to catch subtle bugs or security issues that AI might miss.
- Document and share: Use LLMs to produce clear documentation, summarize changes, and facilitate communication among team members to keep projects organized and onboarding easy.
I’ve been building and managing data systems at Amazon for the last 8 years. Now that AI is everywhere, the way we work as data engineers is changing fast. Here are 5 real ways I (and many in the industry) use LLMs to work smarter every day as a Senior Data Engineer:

1. Code Review and Refactoring. LLMs help break down complex pull requests into simple summaries, making it easier to review changes across big codebases. They can also identify anti-patterns in PySpark, SQL, and Airflow code, helping you catch bugs or risky logic before it lands in prod. If you’re refactoring old code, LLMs can point out where your abstractions are weak or naming is inconsistent, so your codebase stays cleaner as it grows.

2. Debugging Data Pipelines. When Spark jobs fail or SQL breaks in production, LLMs help translate ugly error logs into plain English. They can suggest troubleshooting steps or highlight what part of the pipeline to inspect next, helping you zero in on root causes faster. If you’re stuck on a recurring error, LLMs can propose code-level changes or optimizations you might have missed.

3. Documentation and Knowledge Sharing. Turning notebooks, scripts, or undocumented DAGs into clear internal docs is much easier with LLMs. They can help structure your explanations, highlight the “why” behind key design choices, and make onboarding or handover notes quick to produce. Keeping platform wikis and technical documentation up to date becomes much less of a chore.

4. Data Modeling and Architecture Decisions. When you’re designing schemas, deciding on partitioning, or picking between technologies (like Delta, Iceberg, or Hudi), LLMs can offer quick pros and cons, highlight trade-offs, and provide code samples. If you need to visualize a pipeline or architecture, LLMs can help you draft Mermaid or PlantUML diagrams for clearer communication with stakeholders.

5. Cross-Team Communication. When collaborating with PMs, analytics, or infra teams, LLMs help you draft clear, focused updates, whether it’s a Slack message, an email, or a JIRA comment. They’re useful for summarizing complex issues, outlining next steps, or translating technical decisions into language that business partners understand.

LLMs won’t replace data engineers, but they’re rapidly raising the bar for what you can deliver each week. Start by picking one recurring pain point in your workflow, then see how an LLM can speed it up. This is the new table stakes for staying sharp as a data engineer.
-
𝗖𝗮𝗻 𝗟𝗟𝗠𝘀 𝘄𝗿𝗶𝘁𝗲 𝗯𝗲𝘁𝘁𝗲𝗿 𝗰𝗼𝗱𝗲 𝗶𝗳 𝘆𝗼𝘂 𝗸𝗲𝗲𝗽 𝗮𝘀𝗸𝗶𝗻𝗴 𝘁𝗵𝗲𝗺 𝘁𝗼 “𝗪𝗥𝗜𝗧𝗘 𝗕𝗘𝗧𝗧𝗘𝗥 𝗖𝗢𝗗𝗘"?

💡 𝗧𝗵𝗲 𝘀𝗵𝗼𝗿𝘁 𝗮𝗻𝘀𝘄𝗲𝗿: 𝗬𝗘𝗦!

Interesting experiment by Max Woolf. He gave Claude 3.5 Sonnet a Python challenge: generate 1 million random integers and find the smallest and largest numbers with a digit sum of 30. The goal? Optimize the code over multiple iterations 𝗯𝘆 𝘀𝗶𝗺𝗽𝗹𝘆 𝗮𝘀𝗸𝗶𝗻𝗴 𝗶𝘁 𝘁𝗼 “𝘄𝗿𝗶𝘁𝗲 𝗯𝗲𝘁𝘁𝗲𝗿 𝗰𝗼𝗱𝗲.”

𝗥𝗲𝘀𝘂𝗹𝘁𝘀:
1️⃣ Initial implementation: basic and functional, but slow (657 ms).
2️⃣ Optimized iteration: precomputes digit sums, adds parallelism → 2.7x faster.
3️⃣ Enterprise overengineering: adds multiprocessing, rich metrics, and JIT optimization → 100x faster!

𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀:
- Iterative prompting works: performance improved significantly with each iteration of "write better code".
- LLMs introduce clever optimizations (e.g., vectorization, JIT compilation), but also subtle bugs that require human review.
- Over time, the LLM started adding unnecessary “enterprise” features, a comical case of scope creep in generated code.

𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆: LLMs can significantly improve code performance with simple prompts, but they’re not perfect. The experiment showed that while LLMs can suggest great optimizations, they also miss the mark or add unnecessary complexity without clear guidance. This is where human oversight comes in. Subtle errors? Misaligned logic? That’s why code specifications and test-driven development are critical when using LLMs.

So, next time you’re stuck, just try: “𝘄𝗿𝗶𝘁𝗲 𝗯𝗲𝘁𝘁𝗲𝗿 𝗰𝗼𝗱𝗲.” You might be surprised at what it delivers. 😜

Here is the experiment: https://lnkd.in/d4gBsk-y
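For reference, a minimal zero-shot-style baseline for that challenge might look like the sketch below. This is an illustrative reconstruction, not Claude's actual output from the experiment, and the 1–100,000 value range is an assumption:

```python
import random

def digit_sum(n: int) -> int:
    """Sum of the decimal digits of a non-negative integer."""
    return sum(int(d) for d in str(n))

def find_extremes(nums, target=30):
    """Return (smallest, largest) of the numbers whose digits sum to `target`,
    or None if no number qualifies."""
    matches = [n for n in nums if digit_sum(n) == target]
    if not matches:
        return None
    return min(matches), max(matches)

if __name__ == "__main__":
    nums = [random.randint(1, 100_000) for _ in range(1_000_000)]
    print(find_extremes(nums))
```

The "precompute digit sums" iteration essentially replaces the per-element string conversion with a lookup table over the bounded value range, which is where much of the reported speedup comes from.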
-
In the last few months, I have explored LLM-based code generation, comparing zero-shot prompting to multiple types of agentic approaches. The approach you choose can make all the difference in the quality of the generated code.

Zero-shot vs. agentic approaches: what's the difference?

⭐ Zero-shot code generation is straightforward: you provide a prompt, and the LLM generates code in a single pass. This can be useful for simple tasks but often results in basic code that may miss nuances, optimizations, or specific requirements.

⭐ An agentic approach takes it further by leveraging LLMs in an iterative loop. Here, different agents are tasked with improving the code based on specific guidelines, like performance optimization, consistency, and error handling, ensuring a higher-quality, more robust output.

Let’s look at a quick zero-shot example, a basic file management function. Below is a simple function that appends text to a file:

```python
def append_to_file(file_path, text_to_append):
    try:
        with open(file_path, 'a') as file:
            file.write(text_to_append + '\n')
        print("Text successfully appended to the file.")
    except Exception as e:
        print(f"An error occurred: {e}")
```

This is an OK start, but it’s basic: it lacks validation, proper error handling, thread safety, and consistency across different use cases.

Using an agentic approach, we have a Developer Lead Agent that coordinates a team of agents: the Developer Agent generates code and passes it to a Code Review Agent, which checks for potential issues or missing best practices; the lead then coordinates improvements with a Performance Agent to optimize the code for speed; at the same time, a Security Agent ensures it’s safe from vulnerabilities; finally, a Team Standards Agent refines it to adhere to team standards. This process can be iterated any number of times until the Code Review Agent has no further suggestions.
The resulting code will evolve to handle multiple threads, manage file locks across processes, batch writes to reduce I/O, and align with coding standards. Through this agentic process, we move from basic functionality to a more sophisticated, production-ready solution. An agentic approach reflects how we can harness the power of LLMs iteratively, bringing human-like collaboration and review processes to code generation. It’s not just about writing code; it's about continuously improving it to meet evolving requirements, ensuring consistency, quality, and performance. How are you using LLMs in your development workflows? Let's discuss!
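To make the contrast concrete, here is a sketch of the kind of hardened version such an agent loop might converge on. This is an illustrative assumption rather than actual output of the described system, and the cross-process file locking and batched writes mentioned above are omitted for brevity:

```python
import os
import threading

# Process-wide lock so concurrent threads do not interleave their writes.
_append_lock = threading.Lock()

def append_to_file(file_path: str, text_to_append: str) -> None:
    """Append a line of text to a file, with input validation and thread safety."""
    if not isinstance(file_path, str) or not file_path:
        raise ValueError("file_path must be a non-empty string")
    if not isinstance(text_to_append, str):
        raise TypeError("text_to_append must be a string")

    # Fail fast with a clear error instead of a generic printout.
    directory = os.path.dirname(os.path.abspath(file_path))
    if not os.path.isdir(directory):
        raise FileNotFoundError(f"Directory does not exist: {directory}")

    with _append_lock:
        with open(file_path, "a", encoding="utf-8") as f:
            f.write(text_to_append + "\n")
```

Note the design shift: raising exceptions instead of printing lets callers decide how to handle failures, which is exactly the kind of issue a Code Review Agent would flag in the zero-shot version.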
-
LLMs make a TON of mistakes, but with (1) good documentation, (2) good code review, and (3) the best models available, you can flawlessly accomplish very large changes, FASTER and BETTER than a human. Here’s a real example.

At Formation, we have Session Studio: our live session environment. It’s a real-time system with video, audio, chat, reactions, slides, hand-raising, polls, collaborative coding pads… the works. We recently changed the definitions of participant roles. It was a deep permission and behavior refactor across a complex, real-time surface area with dozens of flags and conditional checks. The kind of change that’s easy to partially ship and quietly break production.

Here’s how I used AI to pull it off:
1. Full system audit: Codex generated a ~1,300-line audit of the entire current state, covering every permission path, flag, edge case, and role interaction.
2. Proposed redesign: Codex then wrote a second document detailing every change required to support the new role definitions.
3. Engineering plan: using "plan mode" first, Claude merged both documents into a structured engineering spec with clear implementation phases.
4. "Adversarial" iteration: Claude and Codex iterated on the docs, flagging inconsistencies, ambiguities, and decisions that required human judgment. I acted as editor-in-chief, resolving tradeoffs and clarifying intent.
5. Phased execution (8 phases): for each phase, Claude implemented, Codex reviewed, Claude fixed… repeat until clean, then a final Claude review.

Total time: ~24 hours of async back-and-forth. The key insight: LLMs are unreliable in isolation. They’re extremely powerful inside a system of documentation, review, and phased execution.
-
I work at Airbnb, where I write 99% of my code with LLMs. One thing you need to understand is that they only write shit code if you let them.

When you’re building high-quality production software, writing code is always the 𝗹𝗮𝘀𝘁 𝘀𝘁𝗲𝗽. Your first step is to understand the problem that needs to be solved. Then ideate solutions, consider alternatives, explore tradeoffs, and refine your exploration into a concrete plan. Even as you implement the plan task by task, you should not be coding in a stream of consciousness; that leads to bad code design. You should be considering the architecture of the code and its abstractions, and coming up with a clean way to write it. Only after all this upfront design and planning work do you start manually typing code with your fingers.

That last step no longer needs to be done manually. Whenever I think of coding, I immediately reach for an LLM, because I use it like a power tool. A carpenter does not leave their power drill on the table when they need to drive a screw. Why would you not use an LLM to execute on your plan? You are in the driver's seat, providing direct technical guidance at every step. 𝗬𝗼𝘂𝗿 𝗲𝘅𝗽𝗲𝗿𝗶𝗲𝗻𝗰𝗲 𝗮𝗻𝗱 𝘀𝗸𝗶𝗹𝗹 𝗹𝗲𝘃𝗲𝗹 𝗱𝗶𝗿𝗲𝗰𝘁𝗹𝘆 𝗶𝗺𝗽𝗮𝗰𝘁 𝗵𝗼𝘄 𝗴𝗼𝗼𝗱 𝘁𝗵𝗲 𝘀𝗼𝗹𝘂𝘁𝗶𝗼𝗻 𝗶𝘀. No, this is not slower than doing it without LLMs.

You should also use LLMs as power tools for research, planning, and architecture. This will get you even higher-quality software than without them. It allows you to go far beyond due diligence and truly explore, analyze, and refine your design fully before a single line of code is written.

I use the following workflow to research, design, and plan the feature I want to build in the form of a conversation, which then gets converted into a formal spec that the LLM can implement task by task:
1. Explain the problem to the LLM.
2. Give it your ideas for the initial solution.
3. Tell it explicitly: “Propose an approach first. Show alternatives to my solution, highlight tradeoffs. Do not write code until I approve.”
4. Review the proposal, poke holes in it, iterate.
5. Tell it to write the plan to disk as a spec so you can hand it off to another session later.
6. Lastly, let it generate code.

This is an excerpt from my article “Writing High Quality Production Code With LLMs Is A Solved Problem”; full article here on LinkedIn → https://lnkd.in/d3v-i9iK
-
One AI coding hack that helped me 15x my development output: using design docs with the LLM.

Whenever I’m starting a more involved task, I have the LLM first fill in the content of a design doc template. This happens before a single line of code is written. The motivation is to have the LLM show me it understands the task, create a blueprint for what it needs to do, and work through that plan systematically.

As the LLM is filling in the template, we go back and forth clarifying its assumptions and implementation details. The LLM is the enthusiastic intern; I’m the manager with the context. Again, no code written yet.

Then, when the doc is filled in to my satisfaction with an enumerated list of every subtask, I ask the LLM to complete one task at a time. I tell it to pause after each subtask is completed for review. It fixes things I don’t like. Then, when it’s done, it moves on to the next subtask. Repeat until done.

Is it vibe coding? Nope. Does it take a lot more time at the beginning? Yes. But the outcome: I’ve successfully built complex machine learning pipelines that run in production in 4 hours. Building a similar system took 60 hours in 2021 (a 15x speedup). Hallucinations have gone down. I feel more in control of the development process while still benefiting from the LLM’s raw speed. None of this would have been possible with a sexy 1-prompt-everything-magically-appears workflow.

How do you get started using LLMs like this? @skylar_b_payne has a really thorough design template: https://lnkd.in/ewK_haJN

You can also use shorter ones. The trick is to guide the LLM toward understanding the task, enumerating the subtasks, and then completing each subtask methodically.

Using this approach is how I really unlocked the power of coding LLMs.
-
Large Language Models (LLMs) possess vast capabilities that extend far beyond conversational AI, and companies are actively exploring their potential. In a recent tech blog, engineers at Faire share how they’re leveraging LLMs to automate key aspects of code reviews, unlocking new ways to enhance developer productivity. At Faire, code reviews are an essential part of the development process. While some aspects require deep project context, many follow standard best practices that do not. These include enforcing clear titles and descriptions, ensuring sufficient test coverage, adhering to style guides, and detecting backward-incompatible changes. LLMs are particularly well-suited for handling these routine review tasks. With access to relevant pull request data—such as metadata, diffs, build logs, and test coverage reports—LLMs can efficiently flag potential issues, suggest improvements, and even automate fixes for simple problems. To facilitate this, the team leveraged an internally developed LLM orchestrator service called Fairey to streamline AI-powered code reviews. Fairey processes chat-based requests by breaking them down into structured steps, such as calling an LLM model, retrieving necessary context, and executing functions. It integrates seamlessly with OpenAI’s Assistants API, allowing engineers to fine-tune assistant behavior and incorporate capabilities like Retrieval-Augmented Generation (RAG). This approach enhances accuracy, ensures context awareness, and makes AI-driven reviews genuinely useful to developers. By applying LLMs in code reviews, Faire demonstrates how AI can enhance developer workflows, boosting efficiency while maintaining high code quality. As companies continue exploring AI applications beyond chat, tools like Fairey provide a glimpse into the future of intelligent software development. 
#Machinelearning #Artificialintelligence #AI #LLM #codereview #Productivity #SnacksWeeklyonDataScience – – – Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts: -- Spotify: https://lnkd.in/gKgaMvbh -- Apple Podcast: https://lnkd.in/gj6aPBBY -- Youtube: https://lnkd.in/gcwPeBmR https://lnkd.in/deaMsxZy
-
AI coding LLMs and tools are improving rapidly. There is a massive amount of value and velocity teams can unlock by using them correctly. One reminder I recently shared internally at Productboard that’s worth repeating more broadly 👇

It’s critical to start with a strong product specification. Spend the first 1–2 hours iterating on the spec definition to ensure all requirements are clear and there are no surprises mid-implementation. A few practical tips on how to do that:
🔹 Paste (or even better, pull via MCP) the specs you got from your PM into a Markdown file.
🔹 Ask Claude: “Ask me any questions needed to make sure you deeply understand the feature we will be building.” You might get 40–60 questions back; ideally use something like WhisperFlow so you don’t spend the next two hours just answering them.
🔹 Ask Claude: “Propose three very different approaches to building this feature and explain their pros and cons in terms of complexity, maintainability, and user value.” Then iterate toward the approach that makes the most sense.
🔹 Ask Claude: “Research the codebase, put together an implementation plan for this feature, and come back with additional product questions that need to be answered before implementation.”

Context engineering is just as critical. A few tips there:
🔹 Use a “Research → Plan → Implement” staged flow, fully wiping the context window between each stage instead of relying on automatic compaction.
🔹 Spend significant time reading, reviewing, and adjusting the outputs of each stage.
🔹 Use research sub-agents heavily; you may need to explicitly prompt for this depending on the tool and LLM you’re using.

When it comes to implementation quality:
🔹 Make sure you truly understand every line of code you push into a PR.
🔹 Having the agent walk you through the changes and explain non-obvious parts (especially around libraries or frameworks) is often a great idea.

Tooling matters more than ever:
🔹 Make sure you deeply understand the features and tricks of the coding tools you use; not easy when tools like Claude Code and Cursor ship updates almost daily.
🔹 Invest in AI tooling configuration in your repos.
🔹 Invest in better linters; the best teams are often doubling the number of linter rules compared to pre-AI days, giving agents fast and precise feedback.
🔹 Constantly update your AGENTS.md / CLAUDE.md files as you notice behaviors that should be adjusted; top teams update these almost daily.

And finally:
🔹 Share your tips and tricks with colleagues.

How are you and your teams approaching AI-assisted coding today? What practices have made the biggest difference for you so far?