AI in Coding and Development

Explore top LinkedIn content from expert professionals.

  • Andrew Ng

    DeepLearning.AI, AI Fund and AI Aspire

    2,471,282 followers

    Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool Use, Planning, and Multi-agent Collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output.

    Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains. You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

    Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

    "Here’s code intended for task X: [previously generated code]. Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it."

    Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements. This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions.

    And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

    Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

    Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications’ results. If you’re interested in learning more about Reflection, I recommend:
    - Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
    - Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
    - CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

    [Original text: https://lnkd.in/g4bTuWtU ]
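The generate, critique, rewrite loop described above is compact enough to sketch. This is a minimal illustration under stated assumptions, not Ng's implementation: the `llm` argument is a hypothetical callable (prompt in, response text out) standing in for whatever model API you use, and the prompts simply echo the ones in the post.

```python
# A minimal sketch of the Reflection loop: generate, critique, rewrite.
# `llm` is a placeholder callable (prompt -> response text); wire it to
# your model provider of choice.
def reflect_and_rewrite(llm, task, rounds=2):
    """Generate code for `task`, then iteratively critique and rewrite it."""
    output = llm(f"Write code to carry out this task:\n{task}")
    for _ in range(rounds):
        critique = llm(
            f"Here is code intended for task: {task}\n\n{output}\n\n"
            "Check the code carefully for correctness, style, and efficiency, "
            "and give constructive criticism for how to improve it."
        )
        output = llm(
            f"Task: {task}\n\nPrevious code:\n{output}\n\n"
            f"Feedback:\n{critique}\n\n"
            "Rewrite the code, using the feedback to improve it."
        )
    return output
```

The tool-assisted variant drops in naturally: replace the critique prompt with the output of a test runner, so the feedback comes from failing unit tests rather than from the model's own judgment.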

  • Henry Shi

    AI@Anthropic | Co-Founder of Super.com ($200M+ revenue/year) | LeanAILeaderboard.com | Angel Investor | Forbes U30

    78,525 followers

    I tried EVERY major AI coding tool so you don’t have to. Here’s what I learned about each one, and which one’s the best for your particular use case 👇

    After an entire weekend of hands-on testing 15+ AI coding assistants, building the same real-life application (a tax comparison calculator), and documenting every step, here's the comprehensive breakdown to separate the signal from the noise:

    🏆 Best Overall: Cline
    - 100% open-source and free version of Cursor + Windsurf that’s a simple VS Code extension
    - Truly thoughtful agentic coding with extensive tool use (terminal, computer use, websites, etc.)
    - Wrote the best code with fewer mistakes and better self-healing, but no inline chat

    🎨 Best for Non-Technical Users: Vercel V0
    - Fast, easy, intuitive UX
    - Strong community and templates
    - Component-specific editing via AI is magical

    ⚡ Best for Quick Prototypes: Anthropic Claude 3.5 Sonnet
    - Fast & clean responses
    - Great reasoning & logic clarity
    - Artifacts are great for prototyping, with the ability to publish and share

    The rest of the field:
    - Replit: Good for full-stack cloud development, but sits in an awkward spot—too complex for beginners, too constrained for advanced users.
    - StackBlitz Bolt.new: A standard cloud IDE with AI codegen, but nothing special.
    - Lovable: Similar to Bolt, but unreliable AI-generated code; hard to toggle/see code.
    - Cursor: Great Copilot alternative, but lacks extensive agentic capabilities like Cline.
    - Codeium Windsurf: Strong agent mode, but the agent was sometimes lazy and incomplete.
    - GitHub Copilot: Good for simple inline edits, but lacks a full agentic workflow (though an agent mode was recently released).
    - Aider: Terminal & keyboard only. Feels like Vim/Emacs on steroids. Too hardcore.
    - OpenHands: Open-source, free alternative to Cognition's Devin with strong agentic coding, but the SaaS version is unstable.
    - OpenAI (o3-mini-high): Good logic depth, but lacks a coding canvas.
    - Anthropic (Claude 3.5 Sonnet): Fast + clean. Artifacts are great for prototypes, but you can’t edit code directly inside them.
    - Google Gemini 2: Poor experience—lazy, incomplete code. Generated separate files that I had to manually combine.
    - DeepSeek AI R1: Strong long reasoning chains, but gets a lot of logic wrong.
    - Tempo (YC S23): Promising PRD → Design → Code → Deploy workflow, but still in early stages.
    - Onlook: Strong for design-first workflows, but inconvenient for direct code editing.
    - Reweb: Generates only UI components, not code with logic.

    My final recommendations:
    - For non-technical users: Vercel V0 is the best no-code/low-code option.
    - For cloud-based development: Try Bolt.
    - For local AI-powered coding: Cline is free and outperforms Cursor/Codeium.
    - For rapid prototyping: Claude 3.5 Sonnet is fast and effective.
    - For designers: Tempo or Onlook provide a strong UI-first workflow.

    Do you want to see a full write-up of my AI coding experiences? Let me know if I should make a full post comparing AI coding tools in detail by sharing this post and commenting below.

  • Saranyan Vigraham

    Tech guy

    5,390 followers

    I’ve been running a quiet experiment: using AI coding ("vibe coding") across 10 different closed-loop production projects, from minor refactors to major migrations. In each, I varied the level of AI involvement, from 10% to 80%. Here’s what I found:

    The sweet spot? 40–55% AI involvement. Enough to accelerate repetitive or structural work, but not so much that the codebase starts to hallucinate or drift.

    Where AI shines:
    - Boilerplate and framework code
    - Large-scale refactors
    - Migration scaffolds
    - Test case generation

    Where it stumbles:
    - Complex logic paths
    - Context-heavy features
    - Anything requiring real systems thinking (new architectures, etc.)
    - Anything stateful or edge-case-heavy

    I tracked bugs and the percentage of total dev time spent fixing AI-generated code across each project. Here's the chart.

    My learning: overreliance on AI doesn’t just plateau, it backfires. AI doesn't write perfect code. The future is a collaboration, not a handoff. Would love to hear how others are navigating this balance.

    #LLM #VibeCoding #AI #DeveloperTools #Dev

  • Andreas Horn

    Head of AIOps @ IBM || Speaker | Lecturer | Advisor

    242,187 followers

    Anthropic 𝗷𝘂𝘀𝘁 𝗿𝗲𝗹𝗲𝗮𝘀𝗲𝗱 𝗮 𝗱𝗲𝗻𝘀𝗲 𝗮𝗻𝗱 𝗵𝗶𝗴𝗵𝗹𝘆 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝗿𝗲𝗽𝗼𝗿𝘁 𝗼𝗻 𝗵𝗼𝘄 𝘁𝗼 𝗯𝘂𝗶𝗹𝗱 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀 — 𝗽𝗮𝗰𝗸𝗲𝗱 𝘄𝗶𝘁𝗵 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗶𝗻𝘀𝗶𝗴𝗵𝘁𝘀 𝗳𝗿𝗼𝗺 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁𝘀: ⬇️

    Not just marketing, but a real, practical blueprint for developers and teams building AI agents that actually work. It explains how Claude Code (a tool for agentic coding) can function as a software developer: writing, reviewing, testing, and even managing Git workflows autonomously. But in my view, the principles and patterns described in this document are not Claude-specific. You can apply them to any coding agent — from OpenAI’s Codex to Goose, Aider, or even tools like Cursor and GitHub Copilot Workspace.

    𝗛𝗲𝗿𝗲 𝗮𝗿𝗲 7 𝗸𝗲𝘆 𝗶𝗻𝘀𝗶𝗴𝗵𝘁𝘀 𝗳𝗼𝗿 𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗯𝗲𝘁𝘁𝗲𝗿 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀 — 𝘁𝗵𝗮𝘁 𝘄𝗼𝗿𝗸 𝗶𝗻 𝘁𝗵𝗲 𝗿𝗲𝗮𝗹 𝘄𝗼𝗿𝗹𝗱: ⬇️

    1. 𝗔𝗴𝗲𝗻𝘁 𝗱𝗲𝘀𝗶𝗴𝗻 ≠ 𝗷𝘂𝘀𝘁 𝗽𝗿𝗼𝗺𝗽𝘁𝗶𝗻𝗴
    ➜ It’s not about clever prompts. It’s about building structured workflows — where the agent can reason, act, reflect, retry, and escalate. Think of agents like software components: stateless functions won’t cut it.

    2. 𝗠𝗲𝗺𝗼𝗿𝘆 𝗶𝘀 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲
    ➜ The way you manage and pass context determines how useful your agent becomes. Using summaries, structured files, project overviews, and scoped retrieval beats dumping full files into the prompt window.

    3. 𝗣𝗹𝗮𝗻𝗻𝗶𝗻𝗴 𝗶𝘀𝗻’𝘁 𝗼𝗽𝘁𝗶𝗼𝗻𝗮𝗹
    ➜ You can’t expect an agent to solve multi-step problems without an explicit process. Patterns like plan > execute > review, tool use when stuck, or structured reflection are necessary. And they apply to all models, not just Claude.

    4. 𝗥𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝗮𝗴𝗲𝗻𝘁𝘀 𝗻𝗲𝗲𝗱 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝘁𝗼𝗼𝗹𝘀
    ➜ Shell access. Git. APIs. Tool plugins. The agents that actually get things done use tools — not just language. Design your agents to execute, not just explain.

    5. 𝗥𝗲𝗔𝗰𝘁 𝗮𝗻𝗱 𝗖𝗼𝗧 𝗮𝗿𝗲 𝘀𝘆𝘀𝘁𝗲𝗺 𝗽𝗮𝘁𝘁𝗲𝗿𝗻𝘀, 𝗻𝗼𝘁 𝗺𝗮𝗴𝗶𝗰 𝘁𝗿𝗶𝗰𝗸𝘀
    ➜ Don’t just ask the model to “think step by step.” Build systems that enforce that structure: reasoning before action, planning before code, feedback before commits.

    6. 𝗗𝗼𝗻’𝘁 𝗰𝗼𝗻𝗳𝘂𝘀𝗲 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝘆 𝘄𝗶𝘁𝗵 𝗰𝗵𝗮𝗼𝘀
    ➜ Autonomous agents can cause damage — fast. Define scopes, boundaries, and fallback behaviors. Controlled autonomy > random retries.

    7. 𝗧𝗵𝗲 𝗿𝗲𝗮𝗹 𝘃𝗮𝗹𝘂𝗲 𝗶𝘀 𝗶𝗻 𝗼𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻
    ➜ A good agent isn’t just a wrapper around an LLM. It’s an orchestrator: of logic, memory, tools, and feedback. And if you’re scaling to multi-agent setups — orchestration is everything.

    Check the comments for the original material! Enjoy!

    Save 💾 ➞ React 👍 ➞ Share ♻️ & follow for everything related to AI Agents!
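The plan > execute > review pattern from insight 3, with the bounded autonomy of insight 6, can be enforced in code rather than left to the prompt. A rough sketch under assumptions: `llm` is a generic callable (prompt in, text out), and the prompts and the "PASS" reviewer convention are illustrative, not taken from the Anthropic report.

```python
# Sketch of a plan > execute > review loop with bounded retries.
# The reviewer replies "PASS" to accept, or describes what to fix.
def run_agent(llm, goal, max_retries=2):
    plan = llm(f"Break this goal into numbered steps:\n{goal}")
    result = llm(f"Goal: {goal}\nExecute this plan:\n{plan}")
    for _ in range(max_retries):
        verdict = llm(
            f"Review this result against the goal '{goal}':\n{result}\n"
            "Reply PASS if acceptable, otherwise describe what to fix."
        )
        if verdict.strip().startswith("PASS"):
            break  # reviewer accepted; stop retrying
        # Feed the critique back in and try again
        result = llm(f"Revise the result.\nFeedback: {verdict}\nResult: {result}")
    return result
```

The explicit accept condition plus a retry cap is what keeps autonomy controlled: the loop cannot run away, and every revision is driven by named feedback rather than a blind retry.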

  • Aishwarya Srinivasan
    627,896 followers

    I recently sat down with Erran Berger, VP of Product Engineering at LinkedIn, to discuss a question that’s on every developer’s mind: how is AI actually changing the way we build software?

    We’re moving past the "AI will write all the code" hype and into a much more interesting reality. The role of the software engineer isn't disappearing; it’s being elevated. 🤌

    TL;DR from the conversation:
    1/ Systems thinking > syntax: As AI handles more of the boilerplate, the value of an engineer shifts toward orchestration and high-level architecture.
    2/ The "human editor": AI can generate solutions, but human judgment remains the final (and most critical) line of defense for security, ethics, and performance.
    3/ Solving technical debt: One of the most exciting use cases Erran shared was using AI to refactor legacy systems—turning a months-long headache into a manageable project.
    4/ New must-have skills: If you aren't already looking into RAG, LLMOps, and vector databases, now is the time to start.

    The goal isn't just to write code faster; it's to make engineering "joyful" again by removing the friction and focusing on pure problem-solving.

    Watch the full episode here: https://lnkd.in/gEJb4jdz

    Thank you, LinkedIn team, for inviting me over for this incredibly insightful conversation 🫶

  • Stanislas Niox-Chateau

    CEO & Cofounder at Doctolib

    65,955 followers

    I was convinced AI would transform how we build software. I did not expect it to happen so fast.

    Over the past year, through conversations with leaders like Thomas Dohmke, startups in the AI software development space, working with the Anthropic team, and observing our own builders at Doctolib, one thing has become clear to me: AI is changing how we think about building software like nothing before.

    Specs turn into working prototypes instantly. Design systems and architecture principles are continuously reinforced by the tooling itself. Writing production-ready code from scratch is no longer our bottleneck. Tests are generated automatically to validate intent. Complex refactoring is handled by autonomous agents. And this is accelerating. As Ethan Mollick once said: "The AI we use today is the worst AI we will ever use.” Better models enable more capable agent fleets and higher autonomy, which in turn drive even better models.

    As tech builders, our day-to-day job is changing. We don’t focus as much on manual implementation, writing boilerplate, or debugging line by line. Instead, we design the systems and scaffolding that allow AI to do reliable work. We orchestrate agents with the right intents, we validate AI-generated architectures, and we define strict quality guardrails. But the outcome doesn’t change: creating better technologies for our users.

    This is a strong opportunity for all tech companies to innovate faster, but for us even more so, in view of the specificities of healthcare and the quality of our technologies and teams.
    🔹 AI will help us create more value for our health professionals and anyone managing their health.
    🔹 AI will help us tackle all user feedback, bugs and incidents in minutes.
    🔹 AI will help us launch more specialties and more countries faster.

    At Doctolib, we're going all-in on this transformation. Dozens of specialized agents deployed. Our engineering leaders are driving this change, committing code 5x more frequently than a year ago. Teams already deliver significantly more value to patients and health professionals.

    If you want to join that revolution and contribute to reinventing the daily life of health professionals and improving health for everyone, we welcome all builders. It's only the beginning.

  • Bhavishya Pandit

    Turning AI into enterprise value | $XX M in Business Impact | Speaker - MHA/IITs/NITs | Google AI Expert (Top 300 globally) | 50 Million+ views | MS in ML - UoA

    85,272 followers

    97% of orgs that faced AI breaches in 2025 had zero access controls in place. Not weak; not outdated. Zero. [Source: IBM]

    Meanwhile, 35% of real-world AI security incidents came from simple prompts, some causing $100K+ in losses, without a single line of code. [Source: Adversa]

    The gap between AI deployment speed and security implementation is only widening. Hence, I am sharing 10 security checkpoints every AI agent needs before touching production systems:

    ✅ Output Validation → Middleware that verifies decisions against rules before execution. Traffic lights for AI actions.
    ✅ Access Control → Least-privilege enforcement. Role-based permissions that limit what agents can touch.
    ✅ Credential Safety → Secrets management that keeps API keys away from prompts and logs. Store them like vault keys, not sticky notes.

    The other 7 checks are in the carousel, including rate limiting that prevents runaway loops and human approval for high-stakes decisions 👇

    Most teams rush deployment. Security becomes an afterthought until something breaks.

    Tell me your story: what security measure has prevented a disaster in your AI system?

    Follow me, Bhavishya Pandit, for practical AI production insights from the trenches 🔥

    #ai #security #agents
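The first checkpoint, output validation, is simple enough to sketch. This is a toy version under assumptions: the allow-list, the `amount` cap, and the action-dict shape are all made up for illustration; a real rule set would come from your own policy.

```python
# Toy output-validation middleware: every proposed agent action passes
# through explicit rules before it is allowed to execute.
ALLOWED_ACTIONS = {"read_file", "search", "summarize"}  # illustrative allow-list
MAX_AMOUNT = 100  # illustrative cap on any spend-like parameter

def validate(action):
    """Return (ok, reason) for a proposed action dict."""
    if action.get("name") not in ALLOWED_ACTIONS:
        return False, f"action {action.get('name')!r} is not on the allow-list"
    if action.get("amount", 0) > MAX_AMOUNT:
        return False, f"amount {action['amount']} exceeds cap {MAX_AMOUNT}"
    return True, "ok"

def execute(action):
    """Run the action only if validation passes; otherwise block it."""
    ok, reason = validate(action)
    if not ok:
        return f"BLOCKED: {reason}"
    return f"EXECUTED: {action['name']}"  # dispatch to real handlers here
```

The point is the shape, not the rules: the agent proposes, deterministic middleware disposes, and anything the rules do not explicitly permit is blocked with a logged reason.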

  • Deepika Khanna

    From zero to certified Salesforce professional | Helping beginners & career-switchers break into the Salesforce ecosystem | Self-paced Courses • Cert Prep • 1:1 Mentorship

    21,792 followers

    Everyone keeps asking me the same question: “Is my Salesforce Developer job at risk because of AI?”

    Let me make this simple: your job isn’t at risk. But your skill set might be.

    AI isn’t coming for Salesforce developers. AI is coming for developers who still write code the same way they did in 2018. Here’s the truth nobody wants to say: the developers who learn AI will replace the developers who don’t.

    And Salesforce is moving faster than ever: Agentforce. Prompt Builder. Data 360. AI-powered development environments. Automations that write half your boilerplate code for you.

    This isn’t the end of Salesforce development. This is the biggest opportunity we’ve had in a decade. New roles. New skills. New money.
    • AI-enhanced automation designers
    • Prompt + agent builders
    • Data Cloud + AI orchestration specialists
    • Integration developers who use AI to deliver 5x faster
    • Devs who can blend Apex, metadata, and intelligence into real business outcomes

    So… will developer demand drop? Absolutely not. Companies don’t want fewer developers. They want developers who can ship faster, smarter, and more intelligently — and AI is the amplifier.

    If you evolve, you’ll be more in demand. If you ignore AI, you’ll be… well, replaceable.

    The future is hybrid: you + AI. Learn it. Leverage it. Lead with it.

  • Brij kishore Pandey

    AI Architect & Engineer | AI Strategist

    720,630 followers

    Let's cut to the chase: GenAI project complexity can quickly spiral out of control. Here's a project structure that keeps things clean, maintainable, and scalable.

    Key components and their benefits:

    1. Modular 'src/' directory:
    - Separates concerns: prompts, LLM integration, data handling, inference, utilities
    - Enhances code reusability and testing
    - Simplifies onboarding for new team members

    2. 'configs/' for environment management:
    - Centralizes configuration, reducing hard-coded values
    - Facilitates easy switching between development, staging, and production environments
    - Improves security by isolating sensitive data (e.g., API keys)

    3. Comprehensive 'tests/' structure:
    - Distinguishes between unit and integration tests
    - Encourages thorough testing practices
    - Speeds up debugging and ensures reliability, crucial for AI systems

    4. 'notebooks/' for experimentation:
    - Keeps exploratory work separate from production code
    - Ideal for prompt engineering iterations and performance comparisons

    5. 'docs/' for clear documentation:
    - Centralizes key information like API usage and prompt strategies
    - Crucial for maintaining knowledge in rapidly evolving AI projects

    This structure aligns with the principle "Explicit is better than implicit." It makes the project's architecture immediately clear to any developer jumping in.

    Question for the community: how do you handle versioning of models and datasets in your AI projects?
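The layout described above can be bootstrapped in a few lines. The top-level folder names follow the post; the subfolder split under `src/` mirrors the concerns it lists (prompts, LLM integration, data handling, inference, utilities) and is otherwise an assumption.

```python
# Create the GenAI project skeleton described above.
from pathlib import Path

LAYOUT = [
    "src/prompts", "src/llm", "src/data", "src/inference", "src/utils",
    "configs", "tests/unit", "tests/integration", "notebooks", "docs",
]

def scaffold(root):
    """Create every directory in LAYOUT under `root`; return the paths made."""
    created = []
    for rel in LAYOUT:
        path = Path(root) / rel
        path.mkdir(parents=True, exist_ok=True)  # idempotent: safe to re-run
        created.append(path)
    return created
```

Running `scaffold("my-genai-project")` once gives every new repo the same shape, which is most of what makes the structure valuable for onboarding.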

  • Anurag(Anu) Karuparti

    Agentic AI Strategist @Microsoft (30k+) | Author - Generative AI for Cloud Solutions | LinkedIn Learning Instructor | Responsible AI Advisor | Ex-PwC, EY | Marathon Runner

    31,501 followers

    𝐀𝐈 𝐀𝐝𝐨𝐩𝐭𝐢𝐨𝐧 𝐢𝐧 𝐄𝐧𝐭𝐞𝐫𝐩𝐫𝐢𝐬𝐞𝐬: 𝐅𝐨𝐮𝐫 𝐋𝐞𝐯𝐞𝐥𝐬 𝐟𝐫𝐨𝐦 𝐂𝐮𝐫𝐢𝐨𝐬𝐢𝐭𝐲 𝐭𝐨 𝐀𝐈-𝐍𝐚𝐭𝐢𝐯𝐞 𝐎𝐩𝐞𝐫𝐚𝐭𝐢𝐨𝐧𝐬

    Most companies think they are further along in AI adoption than they actually are. This framework maps four distinct levels, and being honest about where you sit is the first step to moving up.

    LEVEL 1: INDIVIDUAL USAGE (Curiosity-Driven)
    Goal: Individuals experiment with AI to save time.
    • Quick Tasks: Used for emails, brainstorming, and summaries
    • No AI Strategy: No formal company policy or direction
    • Personal Tools: Employees use different AI tools individually
    • Manual Workflows: Outputs are copied manually between tools
    • Early Exploration: High curiosity but inconsistent results
    • No Data Governance: Sensitive data may be shared without safeguards

    LEVEL 2: TEAM-LEVEL EXPERIMENTATION (Process Exploration)
    Goal: Teams begin applying AI to real work processes.
    • AI Content Creation: Used for emails, posts, reports, and documents
    • Meeting Automation: AI summarizes meetings and extracts action items
    • Workflow Automation: Simple AI chains automate repetitive tasks
    • AI Research Support: Helps analyze competitors and summarize reports
    • Tool Consolidation: Teams narrow down to a few preferred AI tools
    • Manager-Driven Adoption: Leaders encourage AI adoption

    LEVEL 3: DEPARTMENTAL AI INTEGRATION (Structured + Scalable)
    Goal: AI use becomes standardized across teams.
    • AI Playbooks: Defined workflows for each department
    • Data Pipelines: Clean, structured data feeds AI systems
    • Prompt Libraries: Shared prompts ensure consistent results
    • AI Team Champions: Each team has someone responsible for AI adoption
    • Security Controls: Data protection policies and tool vetting in place
    • ROI Tracking: Teams measure productivity gains and cost savings

    LEVEL 4: AI-NATIVE OPERATIONS (Autonomous + Self-Improving)
    Goal: AI is embedded in every workflow and continuously improves.
    • AI-Driven Decisions: AI guides strategy, hiring, pricing, forecasting
    • Connected AI: AI systems across teams work together automatically
    • Self-Learning: Models improve continuously using new data
    • AI Governance: Policies ensure ethical and secure AI use
    • Custom Models: Internal data trains specialized AI models
    • Revenue from AI: AI creates new products and services

    MY RECOMMENDATION
    At Level 1: Establish an AI strategy and basic data governance immediately.
    At Level 2: Consolidate tools and appoint AI champions per team.
    At Level 3: Build data pipelines and prompt libraries before scaling further.
    At Level 4: Focus on connected AI systems and self-learning loops.

    Which level best describes your organization right now?

    ♻️ Repost this to help your network get started
    ➕ Follow Anurag(Anu) Karuparti for more

    PS: If you found this valuable, join my weekly newsletter where I document the real-world journey of AI transformation.
    ✉️ Free subscription: https://lnkd.in/exc4upeq

    #EnterpriseAI #AgenticAI #AIGovernance
