How to Use Agentic AI for Better Reasoning


Summary

Agentic AI refers to artificial intelligence that goes beyond simple text generation by reasoning, planning, using tools, and autonomously interacting with environments to achieve set goals. Using agentic AI for better reasoning means designing AI agents and systems that can break down complex tasks, analyze outcomes, and consistently deliver reliable solutions.

  • Prioritize structured context: Carefully select and organize information and prompts for your AI agents so they focus only on what supports their current reasoning steps.
  • Build with autonomy in mind: Allow agents to plan, take action, and analyze results while balancing automated workflows with human oversight to maintain reliability.
  • Develop modular tools: Equip agents with specialized, simple tools for each task and clear frameworks for reasoning, memory, and orchestration to improve their problem-solving abilities.
Summarized by AI based on LinkedIn member posts
  • Brij kishore Pandey

    AI Architect & Engineer | AI Strategist

    720,684 followers

I created this Agentic AI Learning Roadmap to help developers, architects, and innovators understand how to go from basic LLM usage → fully autonomous multi-agent systems. This roadmap breaks down everything you need to master:
1. What Agentic AI Actually Is: Beyond text generation — agents reason, plan, self-evaluate, use tools, and interact with environments.
2. Core Concepts: Reasoning Loops, Memory, Planning, Autonomy Controls — the shift from “responding to prompts” → “achieving goals.”
3. Frameworks Powering the Agentic Era: LangGraph, CrewAI, Google A2A, Anthropic’s MCP, OpenAI Agents, AutoGen, FalkorDB, Vertex AI Agents, and more.
4. Full Agentic AI Development Stack: LLMs → Tooling Layer → Knowledge Layer → Execution Layer. A true systems-engineering approach, not just prompt engineering.
5. Agent Design Patterns: ReAct Agents, Planner–Executor, Self-Reflective Agents, Tool-Use Agents, Social Agents, Environment-Aware Agents.
6–8. How to Build & Scale Agentic Systems: From defining goals → enabling reasoning → using APIs → adding autonomy → orchestrating multi-agent workflows.
9. Evaluating Agent Performance: Success rates, hallucination control, memory effectiveness, safety layers, cost/latency metrics.
10. Learning Resources: I curated the best starting points from OpenAI, Google, MCP docs, LangGraph, NVIDIA, Kaggle, Stanford/MIT, and more.
Why I built this: Most people know what agents are. Very few know how to design, test, scale, and productionize real agentic systems. This roadmap gives you a complete mental model — from fundamentals → frameworks → deployment → multi-agent orchestration.
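The ReAct pattern named in the design-pattern list can be sketched in a few lines. This is a minimal toy loop, not any framework's API: `fake_llm` and `lookup` are stubs standing in for a real model call and a real search tool; only the observe → think → act control flow is the point.

```python
# Minimal ReAct-style loop with a stubbed LLM and one tool.

def lookup(query: str) -> str:
    """Toy knowledge tool (stands in for a search API)."""
    facts = {"capital of France": "Paris"}
    return facts.get(query, "unknown")

def fake_llm(state: list[str]) -> str:
    """Stub policy: decide the next step from the trajectory so far."""
    if not any(s.startswith("Observation:") for s in state):
        return "Action: lookup[capital of France]"
    return "Answer: Paris"

def react(question: str, max_steps: int = 5) -> str:
    state = [f"Question: {question}"]
    for _ in range(max_steps):
        step = fake_llm(state)          # Think: model proposes the next step
        if step.startswith("Action:"):  # Act: parse and run the tool call
            query = step.split("[", 1)[1].rstrip("]")
            state.append(f"Observation: {lookup(query)}")  # Observe result
        else:
            return step.removeprefix("Answer: ")
    return "no answer"

print(react("What is the capital of France?"))  # → Paris
```

In a real agent, `fake_llm` would be a model call that sees the growing trajectory and chooses between further tool use and a final answer.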

  • Greg Coquillo

    AI Infrastructure Product Leader | Scaling GPU Clusters for Frontier Models | Microsoft Azure AI & HPC | Former AWS, Amazon | Startup Investor | Linkedin Top Voice | I build the infrastructure that allows AI to scale

    228,970 followers

    “Building AI agents” This is the new trend But very few know what it actually takes to run them in production. Being an Agentic AI Engineer isn’t just about calling an LLM and adding tools. It’s about designing systems that can reason, act, recover from failure, and improve over time. This cheat sheet breaks the role into the real building blocks: You start with Python - async workflows, APIs, data pipelines, and clean project structure. This is the foundation for everything agents do. Then come APIs and integrations, where agents connect to real systems using authentication, retries, rate limits, and agent-friendly endpoints. RAG and vector databases give agents memory beyond context windows - handling ingestion, embeddings, semantic search, re-ranking, metadata filtering, and knowledge refresh. Security matters early: sandboxing, permissions, secrets management, prompt-injection defense, and audit logs are non-negotiable once agents touch real data. Observability tells you what your agents are actually doing in production - traces, logs, latency, token usage, errors, and behavioral drift. LLMOps keeps everything running at scale: prompt versioning, model routing, fallbacks, cost optimization, and continuous improvement. System design turns prototypes into platforms: queues, background workers, stateless vs stateful agents, failure handling, and horizontal scaling. Cloud makes it real: containers, environments, secrets, monitoring, and cost-aware deployments. Agent frameworks structure reasoning itself — planning loops, task decomposition, tool calling, multi-agent coordination, memory, and reflection. Evaluation closes the loop: task success metrics, hallucination detection, tool accuracy, and human feedback. And finally, product thinking ties it all together - solving real user problems, defining agent responsibilities, keeping humans in the loop, and iterating toward outcomes. The takeaway: Agentic AI is not a single tool or framework. 
It’s a full-stack discipline spanning engineering, infrastructure, operations, safety, and product. If you want to build agents that actually work in the real world - this is the roadmap.
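The "retries and rate limits" building block above is concrete enough to sketch. This is a generic exponential-backoff pattern, not any particular SDK's API; `flaky_endpoint` is a hypothetical stand-in for a real HTTP call.

```python
import time

def call_with_retries(fn, max_attempts=4, base_delay=0.01):
    """Retry a flaky call with exponential backoff between attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise                                # out of attempts
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s...

# Hypothetical flaky endpoint: fails twice (e.g., rate limited), then succeeds.
calls = {"n": 0}
def flaky_endpoint():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("rate limited")
    return {"status": "ok"}

print(call_with_retries(flaky_endpoint))  # succeeds on the third attempt
```

Production versions add jitter, honor `Retry-After` headers, and cap total elapsed time, but the loop structure is the same.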

  • Cameron R. Wolfe, Ph.D.

    Research @ Netflix

    23,758 followers

    AI agents are widely misunderstood due to their broad scope. To clarify, let's derive their capabilities step-by-step from LLM first principles... [Level 0] Standard LLM: An LLM takes text as input (prompt) and generates text as output, relying solely on its internal knowledge base (without external information or tools) to solve problems. We may also use reasoning-style LLMs (or CoT prompting) to elicit a reasoning trajectory, allowing more complex reasoning problems to be solved. [Level 1] Tool use: Relying upon an LLM’s internal knowledge base is risky—LLMs have a fixed knowledge cutoff date and a tendency to hallucinate. Instead, we can teach an LLM how to use tools (by generating structured API calls), allowing the model to retrieve useful info and even solve sub-tasks with more specialized / reliable tools. Tool calls are just structured sequences of text that the model learns to insert directly into its token stream! [Level 2] Orchestration: Complex problems are hard for an LLM to solve in a single step. Instead, we can use an agentic framework like ReAct that allows an LLM to plan how a problem should be solved and sequentially solve it. In ReAct, the LLM solves a problem as follows: 1. Observe the current state. 2. Think (with a chain of thought) about what to do next. 3. Take some action (e.g., output an answer, call an API, lookup info, etc.). 4. Repeat. Decomposing and solving problems is intricately related to tool usage and reasoning; e.g., the LLM may rely upon tools or use reasoning models to create a plan for solving a problem. [Level 3] Autonomy: The above framework outlines key functionalities of AI agents. We can make such a system more capable by providing a greater level of autonomy. For example, we can allow the agent to take concrete actions on our behalf (e.g., buying something, sending an email, etc.) or run in the background (i.e., instead of being directly triggered by a user’s prompt). 
AI agent spectrum: Combining these concepts, we can create an agent system that:
- Runs asynchronously without any human input.
- Uses reasoning LLMs to formulate plans.
- Uses a standard LLM to synthesize info or think.
- Takes actions in the external world on our behalf.
- Retrieves info via the Google search API (or any other tool).
Different tools and styles of LLMs provide agent systems with many capabilities; the crux of agent systems is seamlessly orchestrating these components. But an agent system may or may not use all of these functionalities; e.g., both a basic tool-use LLM and the above system can be considered “agentic”.
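The Level 1 point, that tool calls are just structured text the model inserts into its token stream, can be illustrated with a toy runtime. The `<tool>` tag format and the `get_weather` tool are invented for illustration; real systems use provider-specific formats.

```python
import json, re

# A registry of callable tools; the "model output" below embeds a
# structured call that the runtime parses and executes.
TOOLS = {"get_weather": lambda city: f"18C and cloudy in {city}"}

def run_tool_calls(model_output: str) -> str:
    """Find <tool>...</tool> spans, execute them, splice results back in."""
    def dispatch(match):
        call = json.loads(match.group(1))       # parse the structured call
        return TOOLS[call["name"]](**call["args"])
    return re.sub(r"<tool>(.*?)</tool>", dispatch, model_output)

output = 'Weather check: <tool>{"name": "get_weather", "args": {"city": "Paris"}}</tool>'
print(run_tool_calls(output))  # → Weather check: 18C and cloudy in Paris
```

The key idea survives the simplification: the model only ever emits text, and the surrounding system is what turns that text into real API calls.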

  • Ashpreet B.

    Founder @ Agno • Building Agents that Learn.

    19,798 followers

🌶️ Hot take: The only way Autonomous Multi-Agent Systems work is by adding Agentic Reasoning & Context. I've tried it all, and here are my learnings👇 At Agno we've been building multi-agent systems for almost 2 years using the handoff/transfer pattern that is becoming popular now. (Spoiler alert: it doesn't work) There are two approaches to multi-agent systems: - Autonomous: A leader Agent orchestrates member Agents to achieve the task. The developer builds the Team & Agents and lets the leader Agent solve the task. - Controlled: The developer defines the Teams, Agents, and workflow steps needed to accomplish the task. This requires substantial effort. Because our clients demand reliability, we have traditionally guided them toward controlled workflows. It has been the only way to achieve consistent outputs from multi-agent systems. Many AI influencers have built their reputations selling the Autonomous pattern. After all, we all want this utopia — write some agents, assign them roles, assemble them into a team, and voilà — they'll cure cancer. But this doesn't work. We know it, and deep down, they know it too. If this "Autonomous" pattern doesn't work reliably with humans, how can it possibly work with next-token predictors? Autonomous Multi-Agent systems create impressive demos, but when you run the same task 10,000 times, the output variance is far too high for production use. Ask yourself: If you had an add(x, y) function and ran add(1, 1) five times with results like 1.7, 2.2, 2.1, 1.8, and 2.0, would you deploy it? No, you'd make five demos and share only the one where add(1, 1) returns exactly 2, ignoring the rest. However, recent research is changing this. Anthropic's "ThinkTool" was a breakthrough (imo). We've extended this research, teaching Agents not only to "Think" but also to "Analyze." Adding these "ReasoningTools" to agent teams is significantly improving outcomes.
By adding `Reasoning` to Multi-Agent Systems: The Team leader first "plans" the task using the "Think" tool, orchestrates member Agents, and then evaluates the results using the "Analyze" tool. This approach is changing the game. Autonomous Agent Teams can now consistently solve complex problems with low variance for the first time. Check out the `Think` -> `Orchestrate` -> `Analyze` pattern in action; this is a fairly hard task, so you know we're not playing here. (Note: I trimmed the video and playback is at 1.8x - please run this yourself to test) The problem here isn't response quality; that we can improve. The problem is reliability and variance. Until now, running these systems produced wildly inconsistent results. But with the `Analyze` step, the Team Leader is much better at orchestration and analyzes before returning the final result -- which we're seeing greatly improves reliability, or in other terms, reduces variance. Thank you for reading; if you liked this, give Agno a try: https://agno.link/gh
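The `Think` -> `Orchestrate` -> `Analyze` pattern can be sketched as a control loop. This is my own simplification, not Agno's implementation: `think`, `member_agent`, and `analyze` are stubs where a real team would make reasoning-tool and agent calls.

```python
def think(goal: str) -> list[str]:
    """Think: the leader decomposes the goal into sub-tasks (stubbed)."""
    return [f"research: {goal}", f"summarize: {goal}"]

def member_agent(task: str) -> str:
    """A member agent executes one sub-task (stubbed)."""
    return f"done({task})"

def analyze(results: list[str]) -> bool:
    """Analyze: the leader checks results before answering (stubbed)."""
    return all(r.startswith("done(") for r in results)

def team_run(goal: str) -> str:
    plan = think(goal)                          # plan before acting
    results = [member_agent(t) for t in plan]   # orchestrate members
    if analyze(results):                        # evaluate before returning
        return "; ".join(results)
    return team_run(goal)  # naive retry: re-plan if the analysis fails

print(team_run("market report"))
```

The reliability claim in the post maps to the `analyze` gate: the leader never returns raw member output without an explicit evaluation step.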

  • Sohrab Rahimi

    Director, AI/ML Lead @ Google

    23,607 followers

    For years now, prompt engineering shaped how people worked with large language models. It was about finding the right phrasing to get predictable outputs. That approach worked for small tasks, but as models turned into agents that plan, use tools, and retain memory, the limits became obvious. One of Anthropic’s latest articles “𝘌𝘧𝘧𝘦𝘤𝘵𝘪𝘷𝘦 𝘤𝘰𝘯𝘵𝘦𝘹𝘵 𝘦𝘯𝘨𝘪𝘯𝘦𝘦𝘳𝘪𝘯𝘨 𝘧𝘰𝘳 𝘈𝘐 𝘢𝘨𝘦𝘯𝘵𝘴”, introduces the next phase in this evolution, called context engineering. It explains that success now depends on how well we manage what goes inside the model’s attention window rather than how we word instructions. Anthropic describes context as everything the model sees while reasoning, including prompts, data, retrieved results, tool outputs, and message history. Every token consumes a portion of the model’s attention, and as the window expands, its focus gradually weakens. The new challenge is to curate that space carefully. Below are the main lessons from Anthropic’s work that stand out for anyone building practical AI systems. 1. Treat context as a limited resource. Adding more information does not improve accuracy. Use only what directly supports the current reasoning step. 2. Write system prompts like structured briefs. Divide them into clear parts for background, instructions, tools, and expected output. 3. Build small, distinct tools. Each tool should solve one problem and return compact, unambiguous results. 4. Use a few canonical examples instead of long lists of edge cases. Examples should teach reasoning, not overwhelm the model with detail. 5. Retrieve data just in time rather than all at once. Lightweight references such as file paths or queries keep the model’s focus clear. 6. Compact long interactions. Summarize the conversation and restart with the essentials so that the model stays coherent over long sessions. 7. Store information outside the context window. Structured notes or state files help maintain continuity across projects. 8. Use sub-agents for large tasks. 
Specialized agents can work on details while a coordinator manages direction and synthesis. 9. Balance autonomy with reliability. Some data should stay fixed for consistency, while other parts can be fetched dynamically when needed. 10. Focus attention on signal, not volume. Every token should contribute to the next action or decision. Prompt writing will still matter, but the real skill now lies in shaping context and deciding what enters the model, what stays out, and how information evolves as the agent works. The next generation of LLM Agents will depend less on clever wording and more on precise design of memory, retrieval, and context. Context engineering is becoming the foundation for reliable agents that think and act across long horizons with consistency and purpose.
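Lesson 6 (compact long interactions) is the most mechanical of the ten and easy to sketch. This toy version works on turn counts rather than tokens, and `summarize` is a stub where a real system would make an LLM summarization call.

```python
def summarize(messages: list[str]) -> str:
    """Stub for an LLM call that condenses old turns into a brief note."""
    return f"[summary of {len(messages)} earlier turns]"

def compact(history: list[str], budget: int = 4, keep_recent: int = 2) -> list[str]:
    """If history exceeds the budget, replace old turns with a summary
    and keep only the most recent turns verbatim."""
    if len(history) <= budget:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent

history = [f"turn {i}" for i in range(1, 7)]  # 6 turns against a budget of 4
print(compact(history))
# → ['[summary of 4 earlier turns]', 'turn 5', 'turn 6']
```

A production version would measure actual token counts and preserve pinned items (system prompt, tool schemas) outside the compaction window, per lessons 2 and 7.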

  • Sivasankar Natarajan

    Technical Director | GenAI Practitioner | Azure Cloud Architect | Data & Analytics | Solutioning What’s Next

    16,686 followers

𝐑𝐀𝐆 𝐢𝐬 𝐞𝐯𝐨𝐥𝐯𝐢𝐧𝐠… 𝐅𝐚𝐬𝐭. We’re moving beyond static document retrieval into systems that reason, plan, and adapt. This shift marks the rise of Agentic RAG, and it changes how AI delivers value.
𝐓𝐫𝐚𝐝𝐢𝐭𝐢𝐨𝐧𝐚𝐥 𝐑𝐀𝐆 (𝐑𝐞𝐭𝐫𝐢𝐞𝐯𝐚𝐥-𝐀𝐮𝐠𝐦𝐞𝐧𝐭𝐞𝐝 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐨𝐧):
1. A user asks a question
2. The system fetches relevant documents
3. Relevant documents set the context
4. The LLM responds with a summary
It’s efficient for factual Q&A but not built for reasoning or dynamic problem solving. Think of it as handing a model a textbook and asking for a summary. Functional but shallow.
𝐀𝐠𝐞𝐧𝐭𝐢𝐜 𝐑𝐀𝐆 𝐢𝐬 𝐝𝐢𝐟𝐟𝐞𝐫𝐞𝐧𝐭. You don’t ask a question. You give it a Goal. The system understands, retrieves, reasons, plans, uses tools, and loops back until it converges on a meaningful outcome. It introduces new capabilities like:
- Goal decomposition
- Planning and Memory
- Iterative Retrieval with Tool use
- Feedback driven improvement
This is no longer just "retrieve and summarize." It is cognitive orchestration for enterprise scale intelligence.
𝐄𝐱𝐚𝐦𝐩𝐥𝐞: “Evaluate the best go-to-market strategy for launching our SaaS product in North America.”
- 𝐓𝐫𝐚𝐝𝐢𝐭𝐢𝐨𝐧𝐚𝐥 𝐑𝐀𝐆: Retrieves blog posts and reports, then summarizes them.
- 𝐀𝐠𝐞𝐧𝐭𝐢𝐜 𝐑𝐀𝐆: Breaks down the objective into sub goals, analyzes regional market trends, customer behavior, competitor moves, pricing models, and regulatory factors, then returns a strategy recommendation tailored to your product and region.
Agentic RAG opens up Intent Driven Reasoning. LLMs that act with purpose, not just respond to prompts.
𝐇𝐚𝐯𝐞 𝐲𝐨𝐮 𝐬𝐭𝐚𝐫𝐭𝐞𝐝 𝐛𝐮𝐢𝐥𝐝𝐢𝐧𝐠 𝐰𝐢𝐭𝐡 𝐀𝐠𝐞𝐧𝐭𝐢𝐜 𝐑𝐀𝐆? #AgenticRAG #LLM #RAG
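The goal → decompose → retrieve → synthesize loop described above can be sketched as plain functions. All three steps are stubs standing in for LLM and vector-store calls; the loop structure is what matters.

```python
def decompose(goal: str) -> list[str]:
    """Goal decomposition: split one goal into sub-goals (stubbed)."""
    return [f"{goal}: market trends", f"{goal}: competitors"]

def retrieve(sub_goal: str) -> str:
    """Iterative retrieval: fetch evidence per sub-goal (stubbed)."""
    return f"docs({sub_goal})"

def synthesize(goal: str, evidence: list[str]) -> str:
    """Reason over collected evidence to produce an outcome (stubbed)."""
    return f"recommendation for '{goal}' based on {len(evidence)} evidence sets"

def agentic_rag(goal: str) -> str:
    sub_goals = decompose(goal)                  # break the goal down
    evidence = [retrieve(s) for s in sub_goals]  # retrieve per sub-goal
    return synthesize(goal, evidence)            # converge on an outcome

print(agentic_rag("GTM strategy"))
# → recommendation for 'GTM strategy' based on 2 evidence sets
```

Real agentic RAG adds the feedback loop the post mentions: if `synthesize` judges the evidence insufficient, it reformulates sub-goals and retrieves again.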

  • Dr. Rishi Kumar

    SVP, Transformation & Value Creation | Enterprise AI Adoption | Strategy, Product, Platform & Portfolio Leadership | Governance & Growth | Retail · Healthcare · Tech | $1B+ Value Delivered | Bestselling Author

    16,188 followers

𝗧𝗵𝗲 𝟳 𝗦𝘁𝗮𝗴𝗲𝘀 𝗼𝗳 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁 𝗠𝗮𝘀𝘁𝗲𝗿𝘆 — 𝗙𝗿𝗼𝗺 𝗖𝘂𝗿𝗶𝗼𝘀𝗶𝘁𝘆 𝘁𝗼 𝗦𝗰𝗮𝗹𝗮𝗯𝗹𝗲 𝗘𝗰𝗼𝘀𝘆𝘀𝘁𝗲𝗺𝘀 AI Agents are becoming the backbone of intelligent automation in enterprises, startups, and personal workflows. But developing agentic systems isn’t a one-step task. It’s a structured evolution, and here's a clear roadmap to guide that journey:
𝗟𝗲𝘃𝗲𝗹 𝟭: 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱 𝗪𝗵𝗮𝘁 𝗮𝗻 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁 𝗜𝘀 Start with the basics: What makes an AI agent different from a chatbot or API? Stateless vs. stateful agents, perception-action loops, single-agent vs. multi-agent logic
 • Use cases: Guided chatbots, query bots, and task automation
 • Tools: ChatGPT, Claude, Perplexity, ReAct, Hugging Face Spaces
𝗟𝗲𝘃𝗲𝗹 𝟮: 𝗣𝗿𝗼𝗺𝗽𝘁 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 & 𝗥𝗼𝗹𝗲 𝗗𝗲𝘀𝗶𝗴𝗻 Shape how your agent responds, reasons, and behaves: master zero-shot and few-shot prompts, design role-based agents, apply prompt chaining and task-specific templates
 • Use cases: Research agents, content generators, email writers
 • Tools: AIPRM, OpenAI Playground + PromptLayer, FlowGPT
𝗟𝗲𝘃𝗲𝗹 𝟯: 𝗔𝗱𝗱 𝗠𝗲𝗺𝗼𝗿𝘆 & 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗛𝗮𝗻𝗱𝗹𝗶𝗻𝗴 Make agents smarter with memory: integrate short-term and long-term memory, RAG (Retrieval-Augmented Generation), semantic chunking for better recall and relevance
 • Use cases: Personal coaches, CRM bots, onboarding assistants
 • Tools: LangChain Memory Modules, Weaviate, ChromaDB, Zep
𝗟𝗲𝘃𝗲𝗹 𝟰: 𝗧𝗼𝗼𝗹 𝗨𝘀𝗲 & 𝗔𝗰𝘁𝗶𝗼𝗻 𝗘𝘅𝗲𝗰𝘂𝘁𝗶𝗼𝗻 Agents that can do things, not just say things: tool/function registration, web browsing, API calls, file execution, response augmentation and validation
 • Use cases: Data scraping bots, email-sending agents, web-browsing AI
 • Tools: OpenAI Functions, SerpAPI, ToolJunction, Plugin-enabled GPTs
𝗟𝗲𝘃𝗲𝗹 𝟱: 𝗠𝘂𝗹𝘁𝗶-𝗦𝘁𝗲𝗽 𝗥𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 & 𝗣𝗹𝗮𝗻𝗻𝗶𝗻𝗴 Now your agent plans, reflects, and self-corrects: use TAP (task automation planning), implement ReAct for reasoning + acting loops, handle complex task breakdown and self-evaluation
 • Use cases: Business planners, customer support bots, QA systems
 • Tools: AutoGen, LangGraph, MetaGPT, CrewAI, OpenAgents
𝗟𝗲𝘃𝗲𝗹 𝟲: 𝗠𝘂𝗹𝘁𝗶-𝗔𝗴𝗲𝗻𝘁 𝗦𝘆𝘀𝘁𝗲𝗺𝘀 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 Scale with teams of agents working in sync: shared vs. local memory, role assignment and task division, feedback loops across agents
 • Use cases: Sales AI squads, design + dev teams, collaborative review bots
 • Tools: CrewAI, AutoGen (multi-threaded), AgentVerse, LangChain Executors
𝗟𝗲𝘃𝗲𝗹 𝟳: 𝗕𝘂𝗶𝗹𝗱 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗘𝗰𝗼𝘀𝘆𝘀𝘁𝗲𝗺𝘀 𝘄𝗶𝘁𝗵 𝗥𝗲𝗮𝗹 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻 Now you're building true autonomous AI systems: event-based triggers, lifecycle monitoring + fallback planning, real-world system integration
 • Use cases: Back-office automation, end-to-end workflows, virtual AI workers
 • Tools: BnB, Superagent, LangSmith, XAgents, TaskWeaver
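The short-term vs. long-term memory split from Level 3 can be sketched as a toy class. Everything here is illustrative: recall uses naive keyword overlap where a real system would use embedding-based semantic search.

```python
class AgentMemory:
    """Toy memory: a small recent buffer that spills into long-term storage."""

    def __init__(self, short_term_size: int = 3):
        self.short_term: list[str] = []
        self.long_term: list[str] = []
        self.size = short_term_size

    def remember(self, fact: str) -> None:
        self.short_term.append(fact)
        if len(self.short_term) > self.size:
            # Oldest recent fact spills into long-term memory.
            self.long_term.append(self.short_term.pop(0))

    def recall(self, query: str) -> list[str]:
        """Relevant long-term facts (by word overlap) plus all recent facts."""
        words = set(query.lower().split())
        hits = [f for f in self.long_term if words & set(f.lower().split())]
        return hits + self.short_term

mem = AgentMemory()
for fact in ["user likes Python", "meeting at 3pm", "project is CRM bot", "budget approved"]:
    mem.remember(fact)
print(mem.recall("python tips"))  # oldest fact resurfaces because it matches
```

Swapping the overlap check for a vector-store query (Weaviate, ChromaDB, etc., as the post lists) turns this toy into the usual RAG-backed memory layout.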

  • Andreas Horn

    Head of AIOps @ IBM || Speaker | Lecturer | Advisor

    242,205 followers

    Deloitte 𝗷𝘂𝘀𝘁 𝗿𝗲𝗹𝗲𝗮𝘀𝗲𝗱 𝗮 𝗻𝗲𝘄 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸 𝗳𝗼𝗿 𝘂𝗻𝗹𝗼𝗰𝗸𝗶𝗻𝗴 𝘁𝗵𝗲 𝗿𝗶𝗴𝗵𝘁 𝗮𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲𝘀. Not every workflow needs an agent. Some are perfect. Some are a waste of time. 𝗛𝗲𝗿𝗲 𝗮𝗿𝗲 𝘀𝗼𝗺𝗲 𝘀𝗼𝗹𝗶𝗱 𝗲𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻 𝗰𝗿𝗶𝘁𝗲𝗿𝗶𝗮 𝗶𝗳 𝗮 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲 𝘀𝗵𝗼𝘂𝗹𝗱 𝗯𝗲 𝗮𝗴𝗲𝗻𝘁𝗶𝗰: 1. Reasoning & context → Best when tasks require logic and adaptation (customer service, supply chain). → Not ideal for static analytics (segmentation, forecast models). 2. Autonomy & escalation → Works if an agent can act first, escalate if needed (incident mgmt, compliance). → Not useful for one-off tasks like code generation. 3. Clear process end → Agents should own a workflow with a defined outcome (expense verification). → Not ongoing states (digital twins). 4. Goal-oriented workflows → Focus on achieving outcomes, not just steps (resolve customer issue, procurement cycle). → Not basic automation (pre-drafted emails, CRM entry). 5. Multistep & interconnected → Strong fit if the process spans tools/systems (onboarding, claims). → Weak fit if it’s a point solution (doc comparison). 6. Cyclic & repetitive → Best when tasks repeat with learning (CV screening). → Not irregular, ad hoc analysis (attrition causes). 7. Non-explanatory → Great if no “why” explanation is needed (change request mgmt). → Poor fit where leaders demand causality (dashboards). 8. Learning potential → Ideal when feedback improves results (marketing campaigns, fraud detection). → Not static rules (email segmentation). Agentic AI isn’t about sprinkling agents everywhere. It’s about identifying workflows where: → Reasoning creates value → Autonomy reduces human bottlenecks → Multistep orchestration drives outcomes → Feedback loops improve performance over time That’s where the ROI is. 𝗣.𝗦. 𝗜 𝗿𝗲𝗰𝗲𝗻𝘁𝗹𝘆 𝗹𝗮𝘂𝗻𝗰𝗵𝗲𝗱 𝗮 𝗻𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿 𝘄𝗵𝗲𝗿𝗲 𝗜 𝘄𝗿𝗶𝘁𝗲 𝗮𝗯𝗼𝘂𝘁 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀, 𝗲𝗺𝗲𝗿𝗴𝗶𝗻𝗴 𝘄𝗼𝗿𝗸𝗳𝗹𝗼𝘄𝘀, 𝗮𝗻𝗱 𝗵𝗼𝘄 𝘁𝗼 𝘀𝘁𝗮𝘆 𝗮𝗵𝗲𝗮𝗱 𝘄𝗵𝗶𝗹𝗲 𝗼𝘁𝗵𝗲𝗿𝘀 𝘄𝗮𝘁𝗰𝗵 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝘀𝗶𝗱𝗲𝗹𝗶𝗻𝗲𝘀. 𝗜𝘁’𝘀 𝗳𝗿𝗲𝗲, 𝗮𝗻𝗱 𝗮𝗹𝗿𝗲𝗮𝗱𝘆 𝗿𝗲𝗮𝗱 𝗯𝘆 𝟮𝟬𝗸+ 𝗽𝗲𝗼𝗽𝗹𝗲: https://lnkd.in/dbf74Y9E
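The eight criteria above can be turned into a rough screening checklist. To be clear, the scoring function, criterion phrasings, and threshold below are my own simplification for illustration, not Deloitte's framework.

```python
# Illustrative yes/no screen over the eight criteria from the post.
CRITERIA = [
    "needs reasoning and adaptation",
    "can act first and escalate",
    "has a clear process end",
    "is goal-oriented, not step-based",
    "spans multiple tools/systems",
    "repeats with room to learn",
    "needs no causal explanation",
    "improves with feedback",
]

def agentic_fit(answers: dict[str, bool], threshold: int = 5) -> bool:
    """Count 'yes' answers; flag the use case as agentic past a threshold."""
    score = sum(answers.get(c, False) for c in CRITERIA)
    return score >= threshold

claims_processing = {c: True for c in CRITERIA}   # multistep, goal-oriented
static_dashboard = {c: False for c in CRITERIA}   # static, explanatory
print(agentic_fit(claims_processing), agentic_fit(static_dashboard))  # → True False
```

A real evaluation would weight the criteria per organization rather than counting them equally; the point is making the go/no-go decision explicit instead of "sprinkling agents everywhere."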

  • Christopher Penn

    Co-Founder & Chief Data Scientist at TrustInsights.ai, AI Expert, AI Keynote Speaker

    47,236 followers

Plan big, act small. If you want to use AI to its maximum power and capabilities without a maximum bill from the big tech provider of your choice, then adopt this simple tenet: plan big, act small. There are fundamentally two classes of AI model: reasoner and actor. Reasoning models are models like o3, Gemini 2.5, Claude Opus 4 Extended Thinking, DeepSeek R1, etc. These are big, complex, super smart, very expensive models. They see the big picture and can think and reason things through. A developer on Reddit noted he had Claude Opus 4 tackle one small task and was billed $8 for just that single-turn task. Actor models are models like Qwen 3 30B-A3B, Gemini 2.5 Flash, GPT-4o-mini, etc. These are light, fast, cheaper models. They're not as smart. They're not as capable. But they also won't send you a bill for thousands of dollars for moderate to heavy usage, either. So how do we use these in concert? With agentic systems. Use the big models in their native web interfaces to help you think and plan. Develop requirements, build work plans, debug the entire code base at once, critique all your marketing data - let the big models do their thinking in the all-you-can-eat-for-$20/month interface. Then take their outputs, their work plans, their instructions within agentic systems like Cline, Cursor, Windsurf, n8n, Zapier, GPTs, Gems, etc. and let the smaller actor models do the typing. That's essentially what they're doing - the implementation. They're doing the copy/paste, the retyping the code, the formatting - all stuff you would have done manually. This process also saves time by keeping the small actor model from chasing its tail. Yesterday at #MarketingAnalyticsSummit I built a FastMCP server for SEO in front of Wil Reynolds in under 30 minutes. How? With this process - plan big, act small. The initial requirements development took 15 minutes. The workplan initial build took 5. Debugging took 10, and by the end, I had created a toy SEO MCP server.
If you adopt this process and mindset, you'll get to high-value output with AI much faster. Plan big, act small. #AI #GenerativeAI #GenAI #ChatGPT #ArtificialIntelligence #LargeLanguageModels #MachineLearning #IntelligenceRevolution
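The economics of the reasoner/actor split are easy to make concrete. The per-call prices below are invented for illustration (real pricing varies by provider and token count); the point is that the expensive model runs once while the cheap model runs per step.

```python
# Hypothetical per-call prices for the two model tiers.
MODELS = {
    "reasoner": {"cost_per_call": 1.00},  # big model: planning only
    "actor":    {"cost_per_call": 0.02},  # small model: each execution step
}

def routed_cost(steps: int) -> float:
    """Plan big (one reasoner call), act small (many actor calls)."""
    return MODELS["reasoner"]["cost_per_call"] + steps * MODELS["actor"]["cost_per_call"]

def all_reasoner_cost(steps: int) -> float:
    """Naive baseline: the reasoner does the plan and every step."""
    return (1 + steps) * MODELS["reasoner"]["cost_per_call"]

print(f"routed: ${routed_cost(20):.2f} vs all-reasoner: ${all_reasoner_cost(20):.2f}")
# → routed: $1.40 vs all-reasoner: $21.00
```

Under these assumed prices, a 20-step task costs roughly 15x less when routed, which is the whole argument of the post in one arithmetic line.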

  • Femke Plantinga

    Making AI simple and fun ✨ Growth at Slite (Super.work)

    26,771 followers

    Is your RAG system a paperweight? It is if it can't handle a simple follow-up question. Building basic RAG is easy. The real challenge is engineering systems that go beyond simple retrieval and actually do more with your data. This is how you build a RAG pipeline that can think, reason, and adapt. Here's how advanced 𝗽𝗿𝗼𝗺𝗽𝘁𝗶𝗻𝗴 𝘁𝗲𝗰𝗵𝗻𝗶𝗾𝘂𝗲𝘀 transform your RAG pipeline: 𝗖𝗵𝗮𝗶𝗻 𝗼𝗳 𝗧𝗵𝗼𝘂𝗴𝗵𝘁 (𝗖𝗼𝗧): Instead of jumping to an answer, the model breaks down complex queries into intermediate steps, making the LLM "show its work." 𝗧𝗿𝗲𝗲-𝗼𝗳-𝗧𝗵𝗼𝘂𝗴𝗵𝘁𝘀 (𝗧𝗼𝗧): Takes reasoning further by exploring multiple paths simultaneously. The system generates several potential solutions and evaluates which is most promising. This is how you systematically weigh different pieces of evidence from multiple retrieved documents. 𝗥𝗲𝗔𝗰𝘁 (𝗥𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 + 𝗔𝗰𝘁𝗶𝗻𝗴): This framework lets the system *think* and *act* dynamically. It can reason about what information it needs, act to retrieve it, and then reason again based on what it found. But beyond prompting, you need to fix your retrieval, too. 𝗤𝘂𝗲𝗿𝘆 𝗥𝗲𝘄𝗿𝗶𝘁𝗶𝗻𝗴 & 𝗘𝘅𝗽𝗮𝗻𝘀𝗶𝗼𝗻: Before hitting your vector database, change that vague user question into something your retrieval system can actually work with. This isn't just about synonyms, it's about understanding intent. 𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗲𝘀: Move beyond basic search. Think hybrid search (combining similarity and keyword), metadata filtering, and multi-step retrieval for complex queries. This is where it really gets interesting: 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗥𝗔𝗚. AI agents can reformulate queries on the fly, re-retrieve information if initial results miss the mark, and handle queries requiring multi-step reasoning across multiple documents. Moving from a one-shot solution with no reasoning into an intelligent system that thinks, reasons, and adapts. Ready to go from basic RAG to a reasoning engine? We cover these techniques and more in our ebook on Advanced RAG Pipelines. 𝗗𝗼𝘄𝗻𝗹𝗼𝗮𝗱 𝘆𝗼𝘂𝗿 𝗳𝗿𝗲𝗲 𝗰𝗼𝗽𝘆 𝗵𝗲𝗿𝗲: https://lnkd.in/dita7QCD
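The query rewriting and expansion step described above can be sketched with a toy synonym table. The table is a stand-in for what would normally be an LLM rewrite or a learned expansion model; the names here are illustrative.

```python
# Hypothetical synonym table standing in for an LLM-driven rewrite step.
SYNONYMS = {"cost": ["price", "pricing"], "reduce": ["lower", "cut"]}

def rewrite_query(question: str) -> list[str]:
    """Normalize the user question, then emit expanded variants
    so retrieval can match documents that use different wording."""
    words = question.lower().rstrip("?").split()
    queries = [" ".join(words)]  # the cleaned original query first
    for i, w in enumerate(words):
        for alt in SYNONYMS.get(w, []):
            queries.append(" ".join(words[:i] + [alt] + words[i + 1:]))
    return queries

print(rewrite_query("How to reduce cost?"))
# → ['how to reduce cost', 'how to lower cost', 'how to cut cost',
#    'how to reduce price', 'how to reduce pricing']
```

Each variant would then be embedded and searched (hybrid or vector), with results merged and re-ranked, which is where the "understanding intent" part of the post comes in.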
