Have you heard of **Dynamic Context Assembly (DCA)** in **Context Engineering**? It is the practice of constructing the model's input ON DEMAND from multiple context sources, based on the user goal, current task state, tool outputs, risk level, and token budget.

Static prompts assume the world is stable. Real systems aren't, especially in agentic AI:

1. Users change their mind mid-flight
2. Tools return surprises
3. Policies differ by tenant and workflow
4. Long-horizon tasks need stepwise context, not one giant dump

DCA turns context into a living artifact that evolves across turns and phases. Think of your context as a bundle with explicit compartments:

1. **Task Frame**: goal, scope, constraints, definition of done, what's in/out, timeline, rules
2. **Grounding Evidence**: retrieved docs, structured records, citations, tool outputs; ranked, deduped, freshness-aware
3. **Operational State**: plan, progress, decisions made, open questions, scratch summaries, checkpoints, rollback points
4. **Policy + Guardrails**: safety rules, compliance requirements, PII handling, tenant policies, workflow-specific rules
5. **Instruction Layer**: system/developer instructions, style, format contracts, schemas, validators

Dynamic context assembly is the orchestrator that decides what to include, at which step, how much, and in what order.

**DCA Best Practices**

1. Treat context as a contract where each compartment has a purpose and a budget
2. Prefer structured snippets over raw text
3. Always include provenance (source, timestamp, confidence) for grounding content
4. Separate evidence from instructions
5. Checkpoint summaries every N steps so state does not rot over long horizons
6. Make trimming deterministic
7. Treat tool outputs as first-class context, but sanitize and normalize them

**Common anti-patterns**

1. Shoving more docs in without reason
2. Mixing different compartments into one single blob
3. Not managing agent state
4. No freshness or authority scoring for grounding evidence

#ContextEngineering #AIAgents #RAG #LLMOps #GenAIOps #AgenticAI #EnterpriseAI
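The compartment-and-budget idea above can be sketched in a few lines of Python. This is a minimal illustration, not a production orchestrator: the compartment names, the per-compartment budgets, and the crude four-characters-per-token estimate are all assumptions made for the example.

```python
# Minimal sketch of dynamic context assembly: each compartment gets a
# purpose and a token budget, and assembly trims deterministically.
from dataclasses import dataclass

CHARS_PER_TOKEN = 4  # crude estimate, an assumption for this sketch

@dataclass
class Compartment:
    name: str           # e.g. "task_frame", "grounding_evidence"
    budget_tokens: int  # hard cap for this compartment
    snippets: list      # pre-ranked: most important first

def assemble(compartments: list[Compartment]) -> str:
    """Fill each compartment in order, trimming deterministically:
    lower-ranked snippets are dropped first, never truncated mid-snippet."""
    sections = []
    for c in compartments:
        used, kept = 0, []
        for snippet in c.snippets:
            cost = len(snippet) // CHARS_PER_TOKEN + 1
            if used + cost > c.budget_tokens:
                break  # deterministic: drop the rest, in rank order
            kept.append(snippet)
            used += cost
        sections.append(f"## {c.name}\n" + "\n".join(kept))
    return "\n\n".join(sections)

bundle = [
    Compartment("task_frame", 50, ["Goal: summarize Q3 incidents."]),
    Compartment("grounding_evidence", 15,
                ["[doc:runbook@2024-09-01] Restart procedure...",
                 "[doc:postmortem@2023-01-15] Old outage notes..."]),
]
print(assemble(bundle))
```

Because snippets carry inline provenance tags and trimming is rank-ordered rather than random, the same inputs always produce the same context, which makes failures reproducible.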
Making Context-Driven Coding Decisions
Summary
Making context-driven coding decisions means tailoring how software and AI systems process information based on the specific goals, constraints, and environment of each task, rather than using a one-size-fits-all approach. This concept focuses on structuring, managing, and selecting the right context—such as evidence, instructions, policies, and operational state—so that coding and AI agents can respond accurately and adaptively.
- Structure your context: Break down complex documents and requirements into clear, organized components that are easy for both humans and AI tools to understand and use.
- Prioritize relevant information: Carefully select and compress the data you provide to coding agents or AI, making sure only necessary and fresh details are included for each stage of the task.
- Monitor and refine decisions: Use metrics and checkpoints to track outcomes and identify where your context needs improvement, allowing you to adjust how information is presented or retrieved for future coding tasks.
**Context is the new compute**

For the past month, I keep circling the same word in my notebook: context. Models are racing toward million-token windows and agentic behavior, but the winners won't be whoever ships the biggest model. They'll be the teams that shape, select, compress, and govern context best.

Why does this matter?

- Long context is here. GPT-4.1, Gemini 1.5, Claude 3.5 Sonnet: huge windows are the new normal. But long ≠ useful. LLMs still miss info ("lost in the middle").
- RAG is still the control plane. Retrieval (and re-ranking) stays essential, at least for now.
- Engineers are shifting from "writing code" to "composing context." Orchestration, not syntax, is where leverage lives.

**The Context Stack**

1. Context Orchestrators: policy engines that decide what to retrieve, how much, how to compress, and where to place it. Expect dynamic retrieval and critique loops.
2. Contextual Retrieval > Vanilla RAG: vector search drops nuance. Adding structure, authorship, and graph edges improves evidence and auditability.
3. Memory Infrastructure: agents need persistent memory with TTLs, scopes, and consent. Think "Snowflake for memory," not a bigger vector DB.
4. Context Observability & QA: metrics for lost needles, reranker drift, and token economics. Databricks-style RAG evals should become standard QA suites.
5. Compression & Layout: systems that say more with fewer tokens. Expect advances in quoting, snippets, and layout to beat "dump the PDF."
6. Context Governance: provenance, revocation, per-segment licensing, and user-level privacy that travels with data.
7. Hardware-aware Context: prompt caching, streaming retrieval, and pre-embedding will be table stakes as context scales.

Evaluation systems need a whole different post, but they change with context as well.

**And MCP? What MCP is missing**

The Model Context Protocol (MCP) standardizes how models talk to tools and data, but it's about interfaces, not judgment.

- It doesn't solve how to prioritize, compress, or govern context.
- It doesn't decide which docs to include or how to avoid "lost in the middle."
- It ignores observability, provenance, and cost optimization.
- It's weak at carrying context across servers and interfaces.

MCP defines the pipes, but not the water.

**The uncomfortable truth**

Most AI products underperform not because the model is "bad," but because their context pipeline is. If your system can't:

(a) find the right evidence
(b) compress it without losing meaning
(c) place it where the model will notice it
(d) prove provenance

…you're wasting tokens and shipping untrustworthy answers. The next defensible moats may just be context moats. Winners will treat context like a product surface: instrumented, optimized, and governed end-to-end.
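As a toy illustration of item 2 in the stack (contextual retrieval beyond plain vector similarity), here is a sketch of a re-ranker that blends similarity with freshness decay and source authority. The weights and the 30-day half-life are illustrative assumptions, not recommendations.

```python
# Sketch of a "contextual retrieval" re-ranker: combine vector similarity
# with freshness and source authority instead of similarity alone.
HALF_LIFE_DAYS = 30.0  # assumption: relevance of docs halves monthly

def freshness(age_days: float) -> float:
    # Exponential decay: a 30-day-old doc scores 0.5, fresh docs near 1.0.
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def rerank(candidates):
    """candidates: dicts with similarity in [0,1], age_days >= 0,
    and authority in [0,1] (e.g. official docs = 1.0, forum post = 0.2)."""
    def score(c):
        # Weights are arbitrary for the example; tune against your evals.
        return (0.60 * c["similarity"]
                + 0.25 * freshness(c["age_days"])
                + 0.15 * c["authority"])
    return sorted(candidates, key=score, reverse=True)

docs = [
    {"id": "old_wiki",  "similarity": 0.90, "age_days": 400, "authority": 0.3},
    {"id": "fresh_doc", "similarity": 0.80, "age_days": 2,   "authority": 0.9},
]
print([d["id"] for d in rerank(docs)])
```

Note how the stale wiki page loses despite higher raw similarity: that is exactly the "no freshness / authority scoring" anti-pattern being corrected.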
-
Do you know the #1 killer of AI agent projects?

**Production AI agents typically process 100 tokens of input for every token they generate.**

The industry is slowly waking up to this fundamental truth: managing what goes into your AI agent's context window matters more than which model you're using. Yet most teams still treat context like a junk drawer: throwing everything in and hoping for the best. The result? Agents that hallucinate, repeat themselves endlessly, or confidently select the wrong tools. Several research studies show performance can crater based solely on how information is presented in the context.

**The solution changes based on your context size:**

➤ If context < 10k tokens: use a simple append-only approach with basic caching
➤ If 10k-50k tokens: add compression at boundaries + KV-cache optimization
➤ If 50k-100k tokens: implement offloading to external memory + smart retrieval
➤ If > 100k tokens: consider a multi-agent isolation architecture

Next, effectively leverage metrics to improve context engineering:

➤ **Session-level tracking:** use Action Completion and Action Advancement to measure overall goal achievement. These metrics tell you whether your context successfully guides agents to accomplish user objectives.
➤ **Step-level analysis:** apply Tool Selection Quality and Context Adherence to evaluate individual decisions. This reveals where your prompts fail to guide proper tool usage or context following.

**Which anti-patterns should we avoid?**

❌ Loading all tools upfront degrades performance. Use dynamic tool selection or masking instead.
❌ Aggressive pruning without recovery loses critical information permanently. Use reversible compression so the original data can still be retrieved.
❌ Ignoring error messages misses valuable learning opportunities. Keeping error messages in context prevents agents from repeating the same mistakes.
❌ Over-engineering too early violates the "Bitter Lesson": simpler, more general solutions tend to win over time as models improve. Start simple and add complexity only when proven necessary.

Want to dive deeper into context engineering strategies and production-ready patterns? Read the full guide with detailed examples and benchmarks.
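The size-based table above can be written down as a trivial dispatcher. The thresholds come from the post; the strategy labels are shorthand for the example.

```python
# Dispatcher for the context-size strategy table: pick a context-management
# approach from the current token count. Labels are illustrative shorthand.
def context_strategy(tokens: int) -> str:
    if tokens < 10_000:
        return "append-only + basic caching"
    if tokens < 50_000:
        return "boundary compression + KV-cache optimization"
    if tokens < 100_000:
        return "external memory offloading + smart retrieval"
    return "multi-agent isolation"

print(context_strategy(8_000))    # small contexts stay simple
print(context_strategy(120_000))  # large contexts get isolated agents
```

The point of encoding it as code is that the escalation becomes auditable: you can log which regime each session ran in and correlate it with the Action Completion metrics above.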
-
🎬 My GitHub Universe talk is live! "Your codebase, your rules: Customizing Copilot with context engineering" – 15 minutes packed with practical demos and strategies. Watch here: https://lnkd.in/gvWc8_Mk

The first takeaway: your IDE setup matters. Proper linting, language servers, and test runners aren't just for humans; they're how agents understand what's broken. Red squiggles = context.

👉 The Three-Layer Context Strategy

Layer 1: Instructions – your always-on rules of engagement
- Custom Instructions: team coding standards + repository structure (your codebase mini-map)
- Domain-Specific Instructions: architecture-specific patterns (e.g., the VS Code team's observable implementation)

Layer 2: Reusable Prompts – one-shot commands for repetitive tasks
- Demo: data query prompt with an MCP connection (query telemetry without learning Kusto/KQL!)
- Demo: planning prompt that customizes Plan Mode for specific workflows
- See it in action: https://lnkd.in/geKRjps9 (.github folder)

Layer 3: Custom Agents – rich workflows with constrained tools
- TDD agent demo: automated red-green-refactor with human checkpoints
- Multi-agent design exploration: different coding perspectives in one workflow
- Live examples: TDD flow: https://lnkd.in/gCGiZFtd | Design exploration: https://lnkd.in/gtzGQRwJ

🎯 NEW: Sub-agents with isolated context. Let specialized agents do deep research and return only the essentials, preventing context bloat in long conversations.

📚 Resources we keep updating and expanding
- Context Engineering Guide: https://lnkd.in/g4bhtphk
- Planning agent docs: https://lnkd.in/gs_5vHgz

The future isn't just writing code with AI; it's engineering the context that makes AI truly understand YOUR codebase, YOUR workflows, and YOUR standards. #GitHubUniverse #GitHubCopilot #VSCode #AIEngineering #DeveloperProductivity
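For Layer 1, a repository-level custom instructions file might look something like the hypothetical sketch below. The folder names and rules are invented for illustration; adapt them to your own repository map and standards.

```markdown
# Copilot custom instructions (Layer 1: always-on rules)

## Repository map
- `src/core/` – domain logic; no framework imports allowed here
- `src/api/` – HTTP handlers; validate input with our shared schemas
- `tests/` – one test file per module, named `<module>.test.ts`

## Coding standards
- TypeScript strict mode; never use `any`
- Prefer dependency injection over singletons
- Every new public function needs a unit test and a doc comment
```

Keeping this file short matters: it is prepended to every request, so it spends your token budget on every single interaction.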
-
One of Karpathy's latest tweets reveals something most teams miss about AI context management: we're feeding our AI tools like we're feeding humans.

**The problem:** we dump PDF docs, meeting notes, and Slack threads into AI context and wonder why the output is mediocre. But there's a massive difference between human-readable and AI-readable information architecture.

Think about your current AI development workflow:
1) You upload a PRD to Claude
2) You paste requirements into Cursor
3) You reference documentation in prompts

You're asking AI to do the heavy lifting of parsing, structuring, and connecting information that was designed for human consumption, all before it can actually help you!

**Instead:**
→ Structure first, then feed: break complex docs into discrete, actionable components
→ Make relationships explicit: don't rely on AI to infer connections between requirements, designs, and technical constraints
→ Create decision trees: turn narrative requirements into structured choice points

Real example: instead of a 10-page PRD, create modular components:
a) User stories with explicit success criteria
b) Technical constraints as structured lists
c) Decision rationale separated from implementation details

**So, how do you plan for long-term impact?** As we integrate AI deeper into product development, the teams that win will be those who redesign their information architecture for AI consumption, not just AI interaction. Engineering the context of your product and engineering teams is what will make the difference in the impact AI has on your organization. If you put no effort into structuring this, your internal AI rollouts WILL FAIL.

Most teams are still in the "PDF-to-text" era of AI usage. How are you structuring information for your AI tools?
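The "modular components" idea can be made concrete with a small sketch: one user story as a structured record, rendered into a compact labeled block for the model. All field names and content here are invented for illustration.

```python
# Sketch: one PRD paragraph restructured into a discrete, AI-readable
# component with explicit relationships. Field names are illustrative.
user_story = {
    "id": "US-12",
    "story": "As a shopper, I can save items to a wishlist.",
    "success_criteria": [
        "Wishlist persists across sessions",
        "Add/remove completes in under 200 ms",
    ],
    "constraints": ["Must reuse the existing cart storage service"],
    "depends_on": ["US-07"],  # relationship made explicit, not inferred
    "decision_rationale": "Reuses cart storage to avoid a new table.",
}

def to_prompt_block(item: dict) -> str:
    """Render one component as a compact, labeled block for the model."""
    lines = [f"STORY {item['id']}: {item['story']}"]
    lines += [f"- done when: {c}" for c in item["success_criteria"]]
    lines += [f"- constraint: {c}" for c in item["constraints"]]
    lines += [f"- depends on: {d}" for d in item["depends_on"]]
    return "\n".join(lines)

print(to_prompt_block(user_story))
```

The win is that each component is independently selectable: a retrieval step can pull US-12 plus its declared dependency US-07, instead of the whole 10-page PRD.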
-
(3/3): Over 18 years of leading engineering teams, this framework has helped me navigate speed vs. quality and know when to choose one over the other 👀

Teams usually fall into two traps: shipping garbage fast, or perfecting code nobody uses. Very few optimize for both. Context really helps you decide what to prioritize. Sometimes it's the right call to ship that hacky fix TODAY. Your biggest customer is blocked? Ship it. Demo tomorrow? Ship it.

But here's where people fail: they never come back for the ideal fix. That hack becomes technical debt. That debt becomes the thing that kills your velocity six months later.

I've been using this framework to decide when to move fast and when to slow down. Four inputs drive speed of execution:

CONTEXT → Speed of Insight
What actually matters right now? Customer screaming? Testing a wild idea? Building core infrastructure? Your context shapes everything.

FOCUS → Speed of Decision
Pick your battles. Not every feature needs to scale to 1M users. Your data layer does.

METRICS → Speed of Delivery
Measure what moves: cycle time, bug escape rate, time-in-dev.

FREQUENCY → Speed of Impact
Ship small, ship often. 10 small PRs > 1 giant PR. Deploy often. Feature flag everything.

It's brutal how most startups die from moving too slow, not from bad code. But the ones that WIN know exactly when to cut corners and when to obsess over quality. It's not always about balance. It's about knowing which extreme to pick when 😃
-
**Context engineering** is not just stuffing data into prompts. It's about designing *actually* smart systems that feed the right data at the right time in the right format. And we just wrote an ebook telling you exactly how to do this 👀

The difficulty with **context engineering** is that it's not just about writing prompts in certain ways, or building RAG systems, or using this type of SLM as an agent over that LLM. It's about using all of these components together so that your system overcomes the innate limitations of models. The goal isn't to shove more data into the prompt, but to design systems that make the most of the active context window. (And no, just increasing the context window size is *not* going to solve this.)

So the core challenge really becomes **orchestration**: how do we make this system work together seamlessly, while also being robust to both human and LLM error? This is why context engineering is going to become the number one complexity of building AI apps. You need systems that intelligently decide:

• What information remains in the active context window
• When to summarize or compress to save space
• What to store externally and retrieve when needed
• How to route queries to the right tools
• How agents coordinate across specialized tasks

We just released our complete ebook on Context Engineering, covering all the core components you need to turn a brilliant but isolated model into a production-ready application:

**Agents** – the orchestrators that manage information flow and make dynamic decisions
**Memory Systems** – short-term and long-term storage architectures
**Query Augmentation** – rewriting, expansion, and decomposition techniques
**Retrieval** – chunking strategies and multi-source synthesis
**Tools & Prompting** – the Thought-Action-Observation cycle and effective tool use

The ebook includes practical examples, architectural diagrams, and real implementation strategies. No fluff, just the blueprint for building reliable AI systems that we've used ourselves in building our AI apps and frameworks. Download it here: https://lnkd.in/ecKUxarH
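As a toy example of the "route queries to the right tools" decision above, here is a keyword-based router standing in for an LLM-based one. The tool names and routing rules are illustrative assumptions, not a recommended rule set.

```python
# Minimal sketch of query routing: decide which tool should handle a
# query before it ever touches the context window. Keyword rules here
# are a stand-in for an LLM or classifier-based router.
def route(query: str) -> str:
    q = query.lower()
    if any(w in q for w in ("latest", "today", "news")):
        return "web_search"          # fresh info: retrieve, don't recall
    if any(w in q for w in ("average", "calculate", "total of")):
        return "calculator"          # arithmetic: don't trust the LLM
    if "we discussed" in q or "earlier" in q:
        return "conversation_memory" # refers back to session state
    return "direct_answer"           # model answers from its own weights

print(route("What is the latest release of Python?"))  # -> web_search
print(route("Calculate the average order value"))      # -> calculator
```

Even this crude version shows the payoff: routing happens before retrieval, so the context window only ever holds material from the one source that matters for this query.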
-
**Vibe-coding or spec-driven development?**

The problem with "vibe coding" is that you ask AI (like Amazon Q) to simultaneously figure out requirements, make architectural decisions, AND generate code for you. This creates several challenges:

🔸 Your context window gets used inefficiently
🔸 You lose track of decisions made in previous sessions
🔸 You get inconsistent implementations due to different assumptions
🔸 Teams become misaligned on project direction

The solution: **separate design from build.** Ask Q to first create requirements, then design, then tasks. You can provide it with all the context it needs to make these decisions based on your organization's needs. This is called the **spec-driven approach** to software development, and it delivers far better results than just typing a prompt and asking Q to write code in a "vibe coding" style.

Example prompt for the vibe-coding approach:

"I would like to build an e-commerce platform with user accounts, product catalog, shopping cart, and payment processing using Python 3 and Flask."

Example prompt for the spec-driven approach:

"I would like to build a set of requirements and designs for an e-commerce platform with user accounts, product catalog, shopping cart, and payment processing with Python 3 and Flask. Store the results in requirements.md, design.md, and tasks.md files."

With a very similar prompt, Q Developer does the design work upfront and presents you with a detailed set of context to iterate on, track progress with, and use to build the final application. The difference? Structure, clarity, and sustainable development practices that scale with your team.
-
Quick tips for coding effectively with LLMs: Git history + claude.md

TL;DR: Great coding results come from great context. Two simple habits make coding with LLMs dramatically more useful. Pro tip: write clear commit messages that explain the fix and why; LLMs (and teammates) will thank you.

**Details**

1) Generate a claude.md per folder (Claude Code CLI)
Create a lightweight summary for each major folder: purpose, key files, data flow, interfaces, gotchas. Drop these files in your repo and reference them in prompts. Result: the model "knows" your architecture before you ask for help.

2) Point the model at your Git history
Cursor / Claude Code can read commit history. When something "mysteriously" breaks, have the model review prior fixes instead of reinventing them.

**Bonus: don't always ask it to "think hard."**
If the answer already exists in your repo (a prior fix or proven pattern), tell the LLM to reuse it: save tokens, save time.

Enjoy coding with LLMs; it's super fun when the context is tight. What other coding context tricks do you use? Please share below 👇
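Habit 1 can be bootstrapped with a small script that drops a claude.md stub into each top-level folder, so every summary follows the same template. The template fields mirror the post (purpose, key files, data flow, gotchas); everything else is an assumption about how you might automate it.

```python
# Sketch: scaffold a claude.md stub in each top-level source folder so
# every folder summary follows the same template. You then fill in the
# angle-bracket placeholders by hand (or ask the model to draft them).
from pathlib import Path

TEMPLATE = """# {name}/
Purpose: <one sentence>
Key files: <the 3-5 files that matter>
Data flow: <inputs -> transforms -> outputs>
Gotchas: <non-obvious invariants, footguns>
"""

def scaffold(repo_root: str) -> list[str]:
    created = []
    for folder in sorted(Path(repo_root).iterdir()):
        # Skip files and hidden dirs like .git / .github
        if folder.is_dir() and not folder.name.startswith("."):
            stub = folder / "claude.md"
            if not stub.exists():  # never clobber a hand-written summary
                stub.write_text(TEMPLATE.format(name=folder.name))
                created.append(str(stub))
    return created
```

Running it is idempotent: re-running after you have written real summaries creates nothing new, so it is safe to wire into a repo setup script.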
-
🧠 "Your model isn't failing. Your context is."

We've hit the limit of "chat with an agent, hope for the best." In enterprise codebases, the win isn't a smarter LLM, it's context engineering: what the agent sees, when it sees it, and how that knowledge evolves. Here's the playbook I'm seeing work across real teams:

**Treat specs as contracts, not artifacts**
• Research → Plan → Implement
• Research = map the system (files, lines, data flows)
• Plan = exact changes + tests to verify
• Implement = execute with context < 40% utilized

**Practice intentional compaction**
• Persist a living "progress file" (what's done, what's pending, open questions)
• Feed that to the next step/agent; archive the rest
• Kill noisy blobs (logs, giant JSON) that don't advance the next decision

**Make agents provable, not persuasive**
• Guardrails = unit/integration tests, CI, license checks
• PRs originate from a reviewed spec; humans approve intent, agents execute
• Version the context (not just the code) so you can audit decisions

**Start where agents shine**
• Well-bounded subsystems, refactors, test gen, dependency upgrades
• Expand scope only when review time drops and the defect rate stays flat

Hot take: "AI coding agents" will commoditize. Workflow and context will not. The orgs that win will version context like code, measure token → impact like infra, and align humans around specs, not chat transcripts.

If you're piloting agents: what broke first, tests, context, or trust?

#AI #SoftwareEngineering #LLM #Agents #MLOps #DevEx #PlatformEngineering #EnterpriseAI #ContextEngineering
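The "living progress file" step above can be sketched as a pair of helpers: one that compacts session state to disk, one that renders only that state for the next agent. The JSON structure is an illustrative assumption; the point is that raw logs and giant blobs never make the trip.

```python
# Sketch of intentional compaction: persist a small progress file and
# feed only that (never raw logs) to the next step or agent.
import json

def compact(progress_path: str, done: list, pending: list,
            open_questions: list) -> dict:
    """Overwrite the living progress file. Logs and giant JSON blobs
    are deliberately excluded; they don't advance the next decision."""
    state = {
        "done": done,
        "pending": pending,
        "open_questions": open_questions,
    }
    with open(progress_path, "w") as f:
        json.dump(state, f, indent=2)
    return state

def next_step_context(progress_path: str) -> str:
    """Render only the compacted state as the next agent's context."""
    with open(progress_path) as f:
        s = json.load(f)
    return (
        "DONE: " + "; ".join(s["done"]) + "\n"
        "PENDING: " + "; ".join(s["pending"]) + "\n"
        "OPEN: " + "; ".join(s["open_questions"])
    )
```

Because the file is versioned alongside the code, it also gives you the audit trail the playbook asks for: each commit records what the agent believed was done and still open at that point.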