How Developers Can Use AI Agents

Explore top LinkedIn content from expert professionals.

Summary

AI agents are intelligent software tools that assist developers in coding, testing, and managing projects by automating repetitive tasks and providing contextual guidance. Using AI agents, developers can streamline workflows, reduce manual effort, and collaborate more efficiently, especially when handling complex or legacy codebases.

  • Document project context: Store documentation and useful prompts in dedicated files within your codebase so both humans and AI agents can easily understand the project structure and requirements.
  • Iterate with testing: Start coding by writing tests first and let the AI agent update your code until all tests pass, which helps catch errors early and keeps your software reliable.
  • Build and monitor: Deploy AI agents for tasks like prototyping or advising, then track their performance and make improvements based on real user interactions and feedback.
Summarized by AI based on LinkedIn member posts
  • Eric Ma

    Together with my teammates, we solve biological problems with network science, deep learning and Bayesian methods.

    8,285 followers

    Agent-assisted coding transformed my workflow. Most folks aren’t getting the full value from coding agents—mainly because there’s not much knowledge sharing yet. After months of experimenting, here’s what’s worked for me: a few patterns that consistently boost my productivity and code quality. Iterating 2-3 times on a detailed plan with my AI assistant before writing any code has saved me countless hours of rework.
    • Start with a detailed plan—work with your AI to outline implementation, testing, and documentation before coding. Iterate on this plan until it’s crystal clear.
    • Ask your agent to write docs and tests first. This sets clear requirements and leads to better code.
    • Create an "AGENTS.md" file in your repo. It’s the AI’s onboarding guide—store all project-specific instructions there for consistent results (a sketch follows below).
    • Control the agent’s pace. Ask it to walk you through changes step by step, so you’re never overwhelmed by a massive diff.
    • Let agents use CLI tools directly, and encourage them to write temporary scripts to validate their own code. This saves time and reduces context switching.
    • Build your own productivity tools—custom scripts, aliases, and hooks compound efficiency over time.
    If you’re exploring agent-assisted programming, I’d love to hear your experiences! Check out my full write-up for more actionable tips: https://lnkd.in/eSZStXUe What’s one pattern or tool that’s made your AI-assisted coding more productive? #ai #programming #productivity #softwaredevelopment #automation
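    The post names AGENTS.md but doesn’t show its contents. As a hedged illustration, here is the shape such a file might take; everything below (the stack, commands, and rules) is a hypothetical example, not from the post:

    ```markdown
    # AGENTS.md - project instructions for coding agents (hypothetical example)

    ## Project overview
    Flask API with a PostgreSQL backend; business logic lives in app/services/.

    ## Conventions
    - Run `pytest -q` before proposing any diff; all tests must pass.
    - Never hand-edit files under migrations/; use the migration tool.
    - Every new endpoint needs a docstring and a unit test under tests/.

    ## Workflow
    1. Propose a plan (implementation, tests, docs) and wait for approval.
    2. Walk through changes step by step; keep each diff small.
    ```

    Because the file is committed, every agent session starts from the same ground rules instead of rediscovering them.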

  • Anshul Sao

    Building Praxis | Co-founder & CTO @ Facets

    4,706 followers

    One of the biggest challenges with using AI coding tools like Aider and Cursor in brownfield projects is the time lost in setting context. Every time a new developer (or even an AI assistant) joins the project, they have to figure out which files are needed for a particular task and how they connect. We tried something simple, and it made a huge difference.
    📌 Instead of letting AI generate code and moving on, we ask it to document what each file does once a task is completed. We commit this to a context.yaml file alongside the code (a sketch follows below). The next person—or AI tool—that needs to work on it has instant context. No more digging through files trying to understand what’s happening.
    📌 Another small but effective hack: saving useful AI prompts as part of the codebase. If we find a great prompt for generating Swagger docs, writing a new API, or refactoring legacy code, we commit it in a /prompts/ folder. It’s like leaving behind a playbook that speeds up future work.
    📌 The best part? Now you can ask the AI agent which files to include for a given task. Instead of scanning the entire codebase, the AI can use the context.yaml to suggest the right files. AI working in collaboration with us is far more powerful than either working alone.
    These small changes have saved us hours of effort. AI is great at writing code, but it’s even better when we help it understand the project. How do you manage context when using AI in brownfield projects? I'd love to hear what’s working for you. 👇
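    The post doesn’t specify a schema for context.yaml; a minimal sketch along the lines it describes might look like this (all file names and fields below are invented for illustration):

    ```yaml
    # context.yaml - per-file summaries committed alongside the code (illustrative)
    files:
      api/billing.py:
        purpose: Creates invoices and applies discount rules.
        depends_on: [api/customers.py, lib/tax.py]
      lib/tax.py:
        purpose: Region-specific tax calculation helpers.
        depends_on: []
    prompts:
      swagger_docs: prompts/generate-swagger.md
      legacy_refactor: prompts/refactor-legacy-module.md
    ```

    An agent asked “which files do I need to change billing?” can then answer from this index instead of scanning the whole repo.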

  • Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    41,883 followers

    Most AI coders (Cursor, Claude Code, etc.) still skip the simplest path to reliable software: make the model fail first. Test-driven development turns an LLM into a self-correcting coder. Here’s the cycle I use with Claude (works for Gemini or o3 too):
    (1) Write failing tests – “generate unit tests for foo.py covering logged-out users; don’t touch implementation.” (A sketch of such a test file follows below.)
    (2) Confirm the red bar – run the suite, watch it fail, commit the tests.
    (3) Iterate to green – instruct the coding model to “update foo.py until all tests pass. Tests stay frozen!” The AI agent then writes, runs, tweaks, and repeats.
    (4) Verify + commit – once the suite is green, push the code and open a PR with context-rich commit messages.
    Why this works:
    -> Tests act as a concrete target, slashing hallucinations
    -> Iterative feedback lets the coding agent self-correct instead of over-fitting a one-shot response
    -> You finish with executable specs, cleaner diffs, and auditable history
    I’ve cut debugging time in half since adopting this loop. If you’re agentic-coding without TDD, you’re leaving reliability and velocity on the table. This and a dozen more tips for developers building with AI in my latest AI Tidbits post https://lnkd.in/gTydCV9b
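    To make step (1) concrete, here is the shape of a failing-first test file you might commit before the agent touches the implementation. foo.py comes from the post; the response API and the specific behaviors tested are invented for illustration:

    ```python
    # test_foo.py - committed while still failing (the "red bar" of step 2)
    from foo import get_dashboard  # implementation stays frozen; only tests exist so far


    def test_logged_out_user_is_redirected():
        # Hypothetical contract: anonymous users get a redirect, not data.
        response = get_dashboard(user=None)
        assert response.status_code == 302
        assert response.location == "/login"


    def test_logged_out_user_sees_no_private_data():
        response = get_dashboard(user=None)
        assert b"account balance" not in response.data
    ```

    The instruction in step (3) then becomes unambiguous: “update foo.py until all tests pass; tests stay frozen.”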

  • Shalini Goyal

    Executive Director @ JP Morgan | Ex-Amazon || Professor @ Zigurat || Speaker, Author || TechWomen100 Award Finalist

    119,876 followers

    How to Build Your First AI Agent - Step-by-Step
    Creating an AI agent might sound complex, but by breaking it down into structured steps, you can go from idea to a fully functional agent that solves real problems. Whether you’re building for customer service, research, or automation, following these stages ensures your agent is accurate, useful, and adaptable.
    1. Define the Agent’s Purpose: Start with clarity. Identify the problem your agent will solve, who will use it, and what kind of inputs and outputs it should handle. This step sets the foundation for everything else.
    2. Select Input Sources: Decide what kind of data your agent will use - text, voice, API calls, or a mix. Connect it to databases, CRMs, or external APIs, and determine how real-time the data needs to be.
    3. Data Preparation & Preprocessing: Clean and format your data so it’s ready for your chosen AI model. This might mean tokenizing text, normalizing values, or structuring raw inputs.
    4. Choose the Right Model: Pick the AI engine that powers your agent - an LLM like GPT-4, Claude, or Gemini. Choose between hosted APIs or custom deployments, ensuring it supports your needs like reasoning, retrieval, or chat.
    5. Design the Agent Architecture: Decide how your agent will operate using decision trees, planners, or tool-driven flows. Use frameworks like LangChain, CrewAI, or AutoGen to connect tools, memory, and prompts efficiently.
    6. Craft Prompts & Toolchains: Write effective, structured prompts, integrate with APIs, search tools, or calculators, and test until your outputs are accurate and reliable.
    7. Test & Validate: Run simulations with varied user inputs, check accuracy, and find weaknesses like edge cases or inconsistent answers.
    8. Deploy the Agent: Host your agent on cloud services (Vercel, AWS, Hugging Face) and add a frontend like a chat interface or voice UI. Ensure logging is in place for performance tracking.
    9. Monitor & Improve: Watch how users interact with your agent. Track accuracy, latency, and errors. Refine prompts or retrain models when needed.
    10. Enable Continuous Learning: Let your agent evolve. Feed it real usage data, update tools and APIs, and fine-tune models to handle new scenarios over time.
    Ready to bring your first AI agent to life? Start small, experiment, and iterate - your first version doesn’t have to be perfect. The key is to build, test, and keep improving. (A minimal sketch of steps 4-6 follows below.)
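    As a minimal sketch of steps 4-6 (model, architecture, toolchain), here is a framework-free agent loop in Python. call_llm is a stub standing in for whichever hosted API you pick, and both tools are toy examples:

    ```python
    import json


    def call_llm(messages: list[dict]) -> str:
        """Stub for a hosted LLM API (GPT-4, Claude, Gemini, ...)."""
        raise NotImplementedError("wire this to your provider's chat endpoint")


    # Toy tools for illustration; real agents need vetted, sandboxed tools.
    TOOLS = {
        "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only: never eval untrusted input
        "lookup_order": lambda order_id: json.dumps({"id": order_id, "status": "shipped"}),
    }

    SYSTEM = (
        "You are a customer-service agent. To use a tool, reply with JSON "
        '{"tool": "<name>", "input": "<string>"}; otherwise answer in plain text.'
    )


    def run_agent(user_msg: str, max_steps: int = 5) -> str:
        messages = [{"role": "system", "content": SYSTEM},
                    {"role": "user", "content": user_msg}]
        for _ in range(max_steps):  # hard cap keeps a confused agent from looping forever
            reply = call_llm(messages)
            try:
                call = json.loads(reply)
                result = TOOLS[call["tool"]](call["input"])
            except (ValueError, KeyError, TypeError):
                return reply  # not a tool call: treat it as the final answer
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": f"Tool result: {result}"})
        return "Stopped: step limit reached."
    ```

    Frameworks like LangChain or CrewAI wrap this same loop; the step cap and the explicit tool registry are exactly what you exercise during step 7’s testing.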

  • Shahed Islam

    I Help Small & Mid-Size Companies Implement AI Without the Overwhelm | CEO @ SJ Innovation | CollabAI · BuildYourAI

    13,463 followers

    𝐂𝐚𝐧 𝐀𝐈 𝐫𝐞𝐚𝐥𝐥𝐲 𝐛𝐮𝐢𝐥𝐝 𝐬𝐭𝐚𝐛𝐥𝐞 𝐬𝐨𝐟𝐭𝐰𝐚𝐫𝐞… 𝐨𝐫 𝐝𝐨 𝐲𝐨𝐮 𝐬𝐭𝐢𝐥𝐥 𝐧𝐞𝐞𝐝 𝐝𝐞𝐯𝐞𝐥𝐨𝐩𝐞𝐫𝐬? 𝐀𝐟𝐭𝐞𝐫 𝟑 𝐩𝐫𝐨𝐣𝐞𝐜𝐭𝐬, 𝐡𝐞𝐫𝐞’𝐬 𝐦𝐲 𝐚𝐧𝐬𝐰𝐞𝐫.
    The last few weeks have been crazy. As an entrepreneur, I always push myself—and this time I pushed AI. Our team built and converted 100+ applications on Lovable just to see if it could really replace developers. I even took it further, building a full-fledged internal management system for tasks and projects. The results? Impressive, but eye-opening.
    → Prototypes are easy. With Lovable or any AI coding tool, you can spin up working software in hours. No design team, no endless meetings.
    → Stability is hard. When apps get complex, you hit slow queries, bugs, and scaling issues. AI alone can’t fix them. That’s where real developers shine.
    → AI + Devs = 50% cost savings. By combining AI agentic frameworks (like n8n + CollabAI) with developer oversight, we’ve cut project delivery time and cost in half.
    Along the way, I had to learn new tools—Lovable, Supabase, GitHub, CodeX, code merge, and many others. That hands-on learning showed me something important: AI coding assistants amplify developers, but they don’t replace them.
    So where does this leave us? The new tech stack for building fast looks like this:
    → AI agents as product managers and advisors
    → Lovable (or similar) for quick prototyping
    → Supabase for backend support
    → GitHub + CodeX to stabilize and debug
    → Agentic frameworks to reduce costs and accelerate workflows
    At SJ Innovation, our goal is simple: save clients money and deliver faster. And right now, AI helps us deliver projects 50% faster than before. I’m excited to see where the next 6 months take us. Because the question isn’t “Will AI replace developers?” It’s “How fast can we adapt to building with AI, together?”

  • Sandeep Uttamchandani, Ph.D.

    Enterprise AI Executive | Scaling AI from Pilot to P&L | Strategy, Products, Governance & Ops | PhD in AI Expert Systems

    6,334 followers

    𝗛𝗼𝘄 𝘁𝗼 𝗔𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗠𝗮𝗸𝗲 𝗔𝗜 𝗖𝗼𝗱𝗶𝗻𝗴 𝗔𝗴𝗲𝗻𝘁𝘀 𝗗𝗲𝗹𝗶𝘃𝗲𝗿 𝗣𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝘃𝗶𝘁𝘆 𝗳𝗼𝗿 𝗕𝗿𝗼𝘄𝗻𝗳𝗶𝗲𝗹𝗱 𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗣𝗿𝗼𝗷𝗲𝗰𝘁𝘀
    Every vendor demo shows pristine greenfield examples. Every enterprise deals with brownfield codebases developed over years. 𝗜'𝘃𝗲 𝗹𝗶𝘃𝗲𝗱 𝘁𝗵𝗶𝘀 𝗱𝗶𝘀𝗰𝗼𝗻𝗻𝗲𝗰𝘁 𝗳𝗶𝗿𝘀𝘁𝗵𝗮𝗻𝗱: A 2025 METR study tracked experienced developers using AI coding agents on enterprise systems. The result? 𝟭𝟵% 𝘀𝗹𝗼𝘄𝗲𝗿 𝗰𝗼𝗺𝗽𝗹𝗲𝘁𝗶𝗼𝗻 𝘁𝗶𝗺𝗲𝘀. Meanwhile, the same developers see 30-55% productivity gains on new projects.
    𝗛𝗲𝗿𝗲'𝘀 𝘄𝗵𝗮𝘁 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝘄𝗼𝗿𝗸𝘀: Stop treating AI agents like fancy autocomplete. Start treating them like new hires who need proper onboarding.
    𝗪𝗵𝗮𝘁 𝘄𝗲 𝗹𝗲𝗮𝗿𝗻𝗲𝗱 𝘁𝗵𝗲 𝗵𝗮𝗿𝗱 𝘄𝗮𝘆: Automate context engineering to make AI coding agents effective for brownfield projects. Before your AI writes a single line of code, it needs to understand:
    • Your business logic (the stuff that's only in Sarah's head from 2018)
    • Architectural constraints (why that function can't be touched)
    • Dependencies (what breaks when you change this module)
    𝗧𝗵𝗿𝗲𝗲-𝗽𝗵𝗮𝘀𝗲 𝗮𝗽𝗽𝗿𝗼𝗮𝗰𝗵 𝘁𝗵𝗮𝘁'𝘀 𝘄𝗼𝗿𝗸𝗶𝗻𝗴:
    𝗣𝗵𝗮𝘀𝗲 𝟭: 𝗔𝗜 𝗮𝘀 "𝗰𝗼𝗱𝗲 𝗮𝗿𝗰𝗵𝗮𝗲𝗼𝗹𝗼𝗴𝗶𝘀𝘁" - map dependencies, identify hotspots, generate missing documentation (a small sketch of this follows below)
    𝗣𝗵𝗮𝘀𝗲 𝟮: 𝗕𝘂𝗶𝗹𝗱 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 𝗽𝗶𝗽𝗲𝗹𝗶𝗻𝗲𝘀 - continuous documentation, automated requirements extraction, living knowledge base
    𝗣𝗵𝗮𝘀𝗲 𝟯: 𝗖𝗜/𝗖𝗗 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 - trust but verify everything, automated validation, self-correcting loops
    𝗥𝗲𝗮𝗹 𝗿𝗲𝘀𝘂𝗹𝘁: One team went from 19% slower to 25% faster in 90 days.
    𝗧𝗵𝗲 𝗵𝗮𝗿𝗱 𝘁𝗿𝘂𝘁𝗵: This isn't about deploying more agents. It's about making your legacy systems AI-ready.
    𝗥𝗲𝗮𝗹𝗶𝘁𝘆 𝗰𝗵𝗲𝗰𝗸 𝗾𝘂𝗲𝘀𝘁𝗶𝗼𝗻: AI is generating tons of code for your teams. But how many of you are actually measuring what percentage makes it into your final deployed codebase?
    #EnterpriseAI #TechLeadership #AIStrategy #AICoding
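    Phase 1 is the most mechanical part, and much of it can be scripted. As a hedged sketch, here is one way to start the archaeology: mapping Python import dependencies with the standard ast module (the scanned path and output format are illustrative):

    ```python
    import ast
    from pathlib import Path


    def map_imports(repo_root: str) -> dict[str, list[str]]:
        """Build a file -> imported-modules map as raw material for agent context."""
        deps = {}
        for path in Path(repo_root).rglob("*.py"):
            try:
                tree = ast.parse(path.read_text(encoding="utf-8"))
            except (SyntaxError, UnicodeDecodeError):
                continue  # legacy files that don't even parse are hotspots worth flagging
            imports = []
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    imports.extend(alias.name for alias in node.names)
                elif isinstance(node, ast.ImportFrom) and node.module:
                    imports.append(node.module)
            deps[str(path)] = sorted(set(imports))
        return deps


    if __name__ == "__main__":
        for file, imported in map_imports("src").items():  # "src" is a placeholder path
            print(file, "->", ", ".join(imported) or "(no imports)")
    ```

    Feeding a map like this into the agent's context is one concrete form of the "proper onboarding" the post argues for.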

  • Jason Allen

    Founder | CTO | AI Engineering

    6,789 followers

    ✨ While everyone's focused on AI agent hype, the actual technology enabling agents to work autonomously is just starting to catch on.
    There's a lot of talk about AI agents and how they're going to bring new levels of automation and efficiency to businesses. What folks aren't talking about, though, are the inherent limitations of LLMs, especially when it comes to information recency and interacting with other services - and how we'll overcome them.
    The AI company Anthropic released the Model Context Protocol late last year. MCP is an open-source protocol that allows LLMs to understand available services and decide which ones to use in real time. In other words, MCP gives AI agents the ability to interact with the outside world in real time. This is a game changer.
    This all probably sounds pretty abstract, so let me give you some concrete examples. Here are a few ways that I'm currently using a simple MCP server I developed to assist with application development at Mobility Places:
    - Using MCP, my AI coding agent can automatically review my application log files for errors. It will then analyze the errors and suggest fixes. (A sketch of such a tool follows below.)
    - The agent can execute external commands that allow it to see the relationships between different class types in my application. For example, it knows how ParkingLocation is related to Customer and can make coding suggestions that align with the relationship.
    - It generates and executes database queries in my test environment, turning high-level requests into working code.
    I've found that these capabilities fundamentally change how I approach development problems. Rather than just asking for code, I'm collaborating with a system that understands my entire application environment.
    Make no mistake - these are just the earliest days for MCP. Software development is only the beginning. In the coming months, expect to see MCP spread like wildfire across industries and use cases as more developers and companies recognize its transformative potential. This technology will usher in the age of AI agents.
    Learn more about MCP here: https://lnkd.in/em8ApUyJ
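    The first bullet (log review) takes only a few lines to expose as an MCP tool. This sketch assumes the official MCP Python SDK's FastMCP interface; the log path and filtering logic are invented for illustration, not taken from the post:

    ```python
    # Sketch of an MCP server exposing a log-review tool to a coding agent.
    # Assumes the official MCP Python SDK (pip install "mcp[cli]").
    from pathlib import Path

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("app-dev-tools")


    @mcp.tool()
    def recent_errors(max_lines: int = 50) -> str:
        """Return recent ERROR lines from the application log for analysis."""
        log = Path("logs/app.log")  # hypothetical log location
        if not log.exists():
            return "No log file found."
        errors = [line for line in log.read_text().splitlines() if "ERROR" in line]
        return "\n".join(errors[-max_lines:]) or "No errors logged."


    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio to an MCP-capable agent
    ```

    Once registered with an MCP-capable client, the agent can decide on its own when pulling recent errors would help, which is exactly the real-time tool selection the post describes.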

  • Vinay Agastya

    Founder at Ctruh | Building the World’s First AI-powered Unified XR Commerce Studio | Hiring across all levels

    14,659 followers

    Anthropic revealed a secret playbook for building AI agents without wasting millions on failed experiments, and it actually works. The gap between AI agent hype and practical results is massive. But Anthropic, the $61.5 billion company that built Claude, released a playbook with insights from proven teams. It revealed 8 key principles for building effective AI agents:
    1. Pick the right setup: Use rule-based workflows for predictable tasks. Agents excel when tasks need flexible thinking and adaptability.
    2. Chain your tasks: Break complex work into sequential steps. Each step should build cleanly on the previous output.
    3. Split the work: Multiple specialized agents outperform a single all-purpose one. This allows for expertise in specific areas rather than mediocrity across all.
    4. Use an orchestrator: Have a "manager" agent that breaks down tasks. It delegates to specialists and combines their work for final output. (A minimal sketch of this pattern follows below.)
    5. Test extensively: Successful teams test thoroughly in sandbox environments. They explore edge cases before deploying to production.
    6. Optimize tools: Focus more on giving agents clear tool instructions. Teams spent more time on tools than prompts, with better results.
    7. Build feedback loops: Set up systems where one agent creates content. Another evaluates and refines for continuous improvement.
    8. Control costs: Set explicit stopping points to prevent budget overruns. Define when agents should pause and request human intervention.
    Teams succeeding with AI agents aren't the ones with the biggest budgets or most advanced models. They're the methodical builders who treat AI agents like skilled team members, giving them the right tools, clear direction, and room to specialize. Are you using AI agents? #AI #Tech
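    Principles 3, 4, 7, and 8 compose naturally into one control flow. Here is a minimal orchestrator-worker sketch with the model call stubbed out; the role prompts and task split are invented for illustration:

    ```python
    def call_llm(system: str, prompt: str) -> str:
        """Stub for whichever model powers each agent (Claude, GPT-4, Gemini, ...)."""
        raise NotImplementedError


    # One prompt per specialist (principle 3: split the work).
    SPECIALISTS = {
        "researcher": "You gather the facts needed for the task, as bullet points.",
        "writer": "You turn bullet-point facts into clear prose.",
        "reviewer": "You critique a draft; reply NO CHANGES if it is good enough.",
    }


    def orchestrate(task: str, max_rounds: int = 2) -> str:
        # The "manager" agent decomposes the task before delegating (principle 4).
        subtasks = call_llm("You are a manager. Split this task into a research "
                            "subtask and a writing subtask, one per line.", task)
        facts = call_llm(SPECIALISTS["researcher"], subtasks)
        draft = call_llm(SPECIALISTS["writer"], f"{subtasks}\n\nFacts:\n{facts}")
        for _ in range(max_rounds):  # explicit stopping point (principle 8)
            critique = call_llm(SPECIALISTS["reviewer"], draft)
            if "NO CHANGES" in critique.upper():
                break  # the creator/evaluator loop is principle 7 in miniature
            draft = call_llm(SPECIALISTS["writer"],
                             f"Revise this draft:\n{draft}\n\nFixes:\n{critique}")
        return draft
    ```

    The max_rounds cap is the cost control: without it, a picky reviewer and an eager writer can burn budget indefinitely.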

  • Rakesh Gohel

    Scaling with AI Agents | Expert in Agentic AI & Cloud Native Solutions | Builder | Author of Agentic AI: Reinventing Business & Work with AI Agents | Driving Innovation, Leadership, and Growth | Let’s Make It Happen! 🤝

    156,673 followers

    In 2025, it is not about replacing your team with AI Agents. It's about empowering them to 10x your workflow...
    Development, in general, has become fast-paced with the help of coding agents like GitHub Copilot, Cursor and Windsurf. In today's tech landscape, traditional software development faces bottlenecks due to manual processes and the need for constant human oversight. This slows the development cycle and lengthens the time taken for improvements and bug fixes. To emphasise, 25% of Google's codebase is now AI-generated, and companies like Cursor and Lovable are making groundbreaking products with very few team members.
    Today, I will share the Agentic SDLC I use most and have implemented for clients, which has 10x'ed their development speed and reduced their development costs.
    📌 Let me break down the entire cycle. Start by giving a basic prompt of what your business needs.
    1. Problem Definition:
    - AI Agent: Drafts business needs.
    - Human: Finalises scope and objectives, and allocates resources, ensuring alignment with business goals.
    2. Design:
    - Human: Gives the AI Agent the plan for the prototype in a markdown file.
    - AI Agent: Outlines the project plan and drafts detailed requirements with a basic prototype.
    - Human: Verifies requirements, adjusts the UI/UX and validates the requirements to ensure completeness and accuracy.
    - AI Agent: Creates initial design patterns.
    3. Development:
    - AI Agent: Drafts the codebase, writes unit tests, and prepares documentation.
    - Human: Performs usability tests, integrates feedback and ensures code quality.
    4. Testing:
    - AI Agent: Automates unit, integration, and system tests and writes bug reports.
    - Human: Validates test results, conducts manual testing, reviews bug reports, and ensures overall test coverage.
    5. Deployment:
    - AI Agent: Automates pipelines, performs post-deployment checks and ensures smooth deployment.
    - Human: Supervises the deployment process, manages rollbacks if necessary, and ensures compliance.
    6. Maintenance:
    - AI Agent: Applies updates and patches and monitors system performance.
    - Human: Acts upon user feedback, provides ongoing support, addresses complex issues, and plans for future enhancements.
    The details may vary per use case for an organization. Feel free to use this guide to equip your team and 10x your workflow.
    📌 And if you want a personalised development plan, feel free to drop a message. If you are a business leader, we've developed frameworks that cut through the hype, including our five-level Agentic AI Progression Framework to evaluate any agent's capabilities in my latest book.
    🔗 Book info: https://amzn.to/4irx6nI
    © Follow this guide if you want to use our content: https://lnkd.in/gTzk2k4b
    Save 💾 ➞ React 👍 ➞ Share ♻️ & follow for everything related to AI Agents

  • Sandipan Bhaumik

    Data & AI Technical Lead | Production AI for Regulated Industries | Founder, AgentBuild

    25,129 followers

    𝗜𝗻𝗳𝗿𝗮 𝗮𝘀 𝗖𝗼𝗱𝗲 𝗺𝗲𝗲𝘁𝘀 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀. Imagine you're a developer working on infrastructure deployment: the stuff that powers apps behind the scenes, like servers, databases, and networks. You want to automate the setup of cloud infrastructure whenever code is pushed to GitHub. Normally, this would require a lot of manual work:
    • checking what changed,
    • writing code for the changes,
    • validating it,
    • deploying safely.
    This workflow shows how you can use AI Agents to do that automatically.
    𝗛𝗲𝗿𝗲’𝘀 𝗵𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀, 𝘀𝘁𝗲𝗽-𝗯𝘆-𝘀𝘁𝗲𝗽 (a control-flow sketch follows below):
    1. A developer pushes code to GitHub. Maybe it includes a new database, or a new server config.
    2. That action triggers the AI system to start analyzing what’s changed.
    3. An AI Agent called the 𝗔𝗻𝗮𝗹𝘆𝘇𝗲𝗿 reads the changes, for example: “A new file is added, a new database is required.”
    4. It writes down all those changes in a structured format, like a recipe.
    5. Another AI Agent, the 𝗦𝘆𝗻𝘁𝗵𝗲𝘀𝗶𝘇𝗲𝗿, reads that recipe and writes Terraform code or AWS CDK modules.
    6. These are the scripts that can build your cloud infrastructure.
    7. A third AI Agent, the 𝗩𝗮𝗹𝗶𝗱𝗮𝘁𝗼𝗿, checks the generated code to make sure it's secure, doesn't break anything, and follows company rules.
    8. If everything looks good, it deploys the infrastructure automatically.
    9. Every step is saved, so there’s an audit trail of who changed what, and why.
    10. For critical code, control is transferred to a human to make the final decision.
    𝗪𝗵𝘆 𝗶𝘀 𝘁𝗵𝗶𝘀 𝘂𝘀𝗲𝗳𝘂𝗹?
    • 𝗦𝗮𝘃𝗲𝘀 𝘁𝗶𝗺𝗲: Developers don’t have to manually write Terraform or review every change.
    • 𝗥𝗲𝗱𝘂𝗰𝗲𝘀 𝗵𝘂𝗺𝗮𝗻 𝗲𝗿𝗿𝗼𝗿: AI checks for security or policy issues automatically.
    • 𝗙𝗮𝘀𝘁𝗲𝗿 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁: Infra can be deployed within minutes of pushing code.
    • 𝗦𝗰𝗮𝗹𝗲𝘀 𝗲𝗮𝘀𝗶𝗹𝘆: This can run across many projects without extra effort.
    𝗡𝗼𝘁𝗲: This is a conceptual design, a glimpse into what’s possible. It’s not a production-ready solution, but a prototype to explore AI’s role in DevOps automation. Still, the building blocks exist today, and we’re closer than ever to making this real.
    How would you improve this? Let's ideate in the comments 👇
    #DevOps #AIagents #InfrastructureAsCode #LLM #AgenticAI
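    As a hedged sketch of the Analyzer → Synthesizer → Validator chain (steps 3-10), here is the control flow in Python with every model call stubbed out; the function names and the PASS/FAIL protocol are invented for illustration:

    ```python
    def call_llm(role: str, content: str) -> str:
        """Stub for the model behind each agent."""
        raise NotImplementedError


    def analyzer(diff: str) -> str:
        # Steps 3-4: read the pushed changes and emit a structured "recipe".
        return call_llm("Summarize the required infra changes as structured YAML.", diff)


    def synthesizer(recipe: str) -> str:
        # Steps 5-6: turn the recipe into Terraform (or CDK) code.
        return call_llm("Write Terraform implementing this recipe.", recipe)


    def validator(tf_code: str) -> bool:
        # Step 7: security / policy checks; a real pipeline would also run
        # `terraform validate` and a policy engine rather than trust the model alone.
        verdict = call_llm("Answer PASS or FAIL: is this Terraform safe and "
                           "policy-compliant?", tf_code)
        return verdict.strip().upper().startswith("PASS")


    def on_push(diff: str, critical: bool) -> str:
        recipe = analyzer(diff)
        tf_code = synthesizer(recipe)
        if not validator(tf_code):
            return "rejected: validator failed"        # step 9: every outcome is logged
        if critical:
            return "pending: human approval required"  # step 10: human in the loop
        return "deployed"                              # step 8: auto-deploy
    ```

    Keeping each agent a plain function makes the audit trail in step 9 trivial: log every input and output at the function boundaries.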
