Two strikingly similar headlines surfaced this past week that should make every leader pause:

• “Companies Are Pouring Billions Into A.I. It Has Yet to Pay Off.” — New York Times
• “Companies Are Pouring Billions Into AI. Here’s Why They’re Not Seeing Returns” — Forbes

The NYT points to the human side: employees resist tools they don’t trust. Forbes focuses on the technical side: most AI still can’t understand the context of work. Both are true, and they’re related. When AI lacks context, employees lose trust. It can’t tell the latest doc from last year’s draft. It summarizes a customer conversation but drops the follow-ups buried in the thread. It pulls a response from Slack while ignoring the context in Google Drive. Employees realize it creates more work than it saves, and stop using it. Pilots stall, deployments fade, and projects slide into the “trough of disillusionment,” as the NYT describes. Unfortunately, that’s the reality for many organizations.

At Glean, we work hard to make sure AI understands enterprise context the way a human does. If a subject matter expert says something, I trust it more. If something’s old, I double-check it. That’s how people think, and it’s how AI should work too. Yet every enterprise has its own documentation culture and quirks, so sometimes we struggle at first. But we persist and co-develop with customers until the system reaches the quality they need. Then we take those learnings to make it work automatically for the next customer.

We’ve seen this approach deliver measurable impact for customers:

• Booking.com: Glean Agents give teams faster access to customer insights, cutting video production time by 75% and doubling monthly output.
• Confluent: Glean’s AI-powered search saves 15,000+ hours/month, boosts support satisfaction by 13%, and cuts ticket investigation time by 10 minutes.
• Fortune 100 telecom company: Glean surfaces instant knowledge during support calls, reducing call resolution time by 17 seconds across 800+ agents.
• Leading global consultancy: Glean Agents automate RFP workflows, cutting consulting project proposals from 4 weeks to a few hours (97% faster).
• Wealthsimple: Glean gives employees instant access to policies and knowledge, driving $1M+ in annual productivity gains.

When AI understands the real context of work—across people, tools, and workflows—employees trust it and use it. Instead of falling into the trough of disillusionment, companies climb the slope toward productivity gains and real ROI.
Improving Trust in Software Through Better Context
Summary
Improving trust in software through better context means designing systems that understand the full situation around their actions and decisions, so users feel confident relying on them. By making software more transparent, accurate, and secure—especially in AI-driven environments—organizations reduce confusion and build stronger user trust.
- Explain your process: Give users clear, step-by-step insights into how decisions or answers are generated so they know why the software responds the way it does.
- Protect sensitive data: Secure software interactions by tracking identities, using encryption, and verifying every action to ensure that data is handled safely and access is tightly controlled.
- Log and audit actions: Keep records of every decision or step the software takes, making it easy to review how outcomes were reached and resolve questions or disputes quickly.
Knowledge Graphs as a source of trust for LLM-powered enterprise question answering. That has been our position since we started researching how knowledge graphs increase the accuracy of LLM-powered question answering systems over two years ago! The intersection of knowledge graphs and large language models (LLMs) isn’t theoretical anymore. It’s been a game-changer for enterprise question answering, and now everyone is talking about it and many are doing it. 🚀

This new paper summarizes the lessons we learned implementing this technology at data.world and working with customers, and outlines opportunities for future research contributions and where the industry needs to go (guess where the data.world AI Lab is focusing). Sneak peek and link in the comments.

Lessons Learned
✅ Knowledge engineering is essential but underutilized: across organizations it’s often sporadic and inconsistent, leading to assumptions and misalignment. It’s time to systematize this critical work.
✅ Explainability builds trust: showing users exactly how an answer is derived, including auto-corrections, increases transparency and confidence.
✅ Governance matters: aligning answers with an organization’s business glossary ensures consistency and clarity.
✅ Avoid “boiling the ocean”: don’t tackle too many questions at once. A pay-as-you-go approach ensures meaningful progress without overwhelm.
✅ Testing matters: non-deterministic systems like LLMs require new frameworks to test ambiguity and validate responses effectively.

Where the Industry Needs to Go
🌟 Simplified knowledge engineering: tools and methodologies must make this foundational work easier for everyone.
🌟 User-centric explainability: different users have different needs, so we need to focus on “explainable to whom?”
🌟 Testing non-deterministic systems: the deterministic models of yesterday won’t cut it. We need innovative frameworks to ensure quality in LLM-powered software applications.
🌟 Small semantics vs. large semantics: the concept of semantics is increasingly referenced in industry in the context of “semantic layers” for BI and analytics. Let’s close the gap between small semantics (fact/dimension modeling) and large semantics (ontologies, taxonomies).
🌟 Multi-agent systems: break the problem into smaller, more manageable components. Should one agent handle both the core task of answering questions and managing ambiguity, or should these be split into separate agents?

This research reflects our commitment to co-innovate with customers to solve real-world challenges in enterprise AI. 💬 What do you think? How are knowledge graphs shaping your AI strategies?
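The core idea of KG-grounded, explainable answering can be sketched in a few lines. This is a minimal illustration, not data.world's implementation: the triple schema and entity names are invented, and a real system would query an actual graph store and an actual LLM.

```python
# Minimal sketch: grounding an answer in a tiny knowledge graph and keeping
# the source of each fact, so the answer is explainable and governable.
# Triples: (subject, predicate, object, source_document) -- all hypothetical.
KNOWLEDGE_GRAPH = [
    ("revenue_q1", "defined_in", "business_glossary", "glossary.md"),
    ("revenue_q1", "computed_from", "orders_table", "finance_model.yml"),
]

def retrieve_facts(entity: str):
    """Pull every triple about an entity, keeping provenance."""
    return [t for t in KNOWLEDGE_GRAPH if t[0] == entity]

def grounded_prompt(question: str, entity: str) -> str:
    """Build a prompt that restricts the LLM to retrieved, cited facts."""
    facts = retrieve_facts(entity)
    context = "\n".join(f"{s} {p} {o} (source: {src})" for s, p, o, src in facts)
    return (
        "Answer using only these facts, citing sources:\n"
        f"{context}\n\nQ: {question}"
    )

prompt = grounded_prompt("How is Q1 revenue defined?", "revenue_q1")
```

Because every fact carries a source, the system can show users exactly how an answer was derived, which is the explainability lesson above.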
-
Conversational AI is transforming customer support, but making it reliable and scalable is a complex challenge. In a recent tech blog, Airbnb’s engineering team shares how they upgraded their Automation Platform to enhance the effectiveness of virtual agents while making the system easier to maintain.

The new Automation Platform V2 leverages the power of large language models (LLMs). Recognizing the unpredictability of LLM outputs, however, the team designed the platform to harness LLMs in a controlled manner. They focused on three key areas: LLM workflows, context management, and guardrails.

The first area, LLM workflows, ensures that AI-powered agents follow structured reasoning processes. Airbnb incorporates Chain of Thought, a prompting approach that enables LLMs to reason through problems step by step. By embedding this structured approach into workflows, the system determines which tools to use and in what order, allowing the LLM to function as a reasoning engine within a managed execution environment.

The second area, context management, ensures that the LLM has access to all the relevant information it needs to make informed decisions. To generate accurate and helpful responses, the system supplies the LLM with critical contextual details, such as past interactions, the customer’s inquiry intent, and current trip information.

Finally, the guardrails framework acts as a safeguard, monitoring LLM interactions to ensure responses are helpful, relevant, and ethical. It is designed to prevent hallucinations, mitigate security risks like jailbreaks, and maintain response quality, ultimately improving trust and reliability in AI-driven support.

By rethinking how automation is built and managed, Airbnb has created a more scalable and predictable Conversational AI system.
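The three layers fit together roughly as below. This is a hedged sketch of the pattern, not Airbnb's code: `assemble_context`, `guardrail`, and the banned-phrase check are illustrative stand-ins, and the model is stubbed with a lambda.

```python
# Minimal sketch of the pattern: per-turn context assembly, an LLM reasoning
# step, and a guardrail check before anything reaches the customer.

def assemble_context(user_msg, history, trip_info):
    """Context management: give the model everything relevant, each turn."""
    return {
        "history": history[-5:],   # recent turns only, to avoid clutter
        "intent_hint": user_msg,
        "trip": trip_info,
    }

def guardrail(response: str) -> bool:
    """Guardrails: cheap output checks; real systems use far richer ones."""
    banned = ["as an ai", "i cannot verify"]
    return bool(response) and not any(b in response.lower() for b in banned)

def run_workflow(user_msg, history, trip_info, call_llm):
    ctx = assemble_context(user_msg, history, trip_info)
    draft = call_llm(ctx)          # structured reasoning step (e.g. CoT)
    if not guardrail(draft):
        return "Let me connect you with a support agent."  # safe fallback
    return draft

# Usage with a stubbed model:
reply = run_workflow("Where do I check in?", [], {"listing": "A12"},
                     lambda ctx: "Check-in details are in your trip tab.")
```

The key design point is that the LLM only ever sees curated context and its output only reaches users after passing the guardrail, which is what makes the system predictable despite a non-deterministic model.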
Their approach highlights an important takeaway for companies integrating AI into customer support: AI performs best in a hybrid model, where structured frameworks guide and complement its capabilities.

#MachineLearning #DataScience #LLM #Chatbots #AI #Automation #SnacksWeeklyonDataScience

Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
-- Spotify: https://lnkd.in/gKgaMvbh
-- Apple Podcast: https://lnkd.in/gj6aPBBY
-- YouTube: https://lnkd.in/gcwPeBmR
https://lnkd.in/gFjXBrPe
-
Why would your users distrust flawless systems?

Recent data shows 40% of leaders identify explainability as a major GenAI adoption risk, yet only 17% are actually addressing it. This gap determines whether humans accept or override AI-driven insights. As founders building AI-powered solutions, we face a counterintuitive truth: technically superior models often deliver worse business outcomes because skeptical users simply ignore them. The most successful implementations reveal that interpretability isn't about exposing mathematical gradients; it's about delivering stakeholder-specific narratives that build confidence.

Three practical strategies separate winning AI products from those gathering dust:

1️⃣ Progressive disclosure layers. Different stakeholders need different explanations. Your dashboard should let users drill from plain-language assessments to increasingly technical evidence.

2️⃣ Simulatability tests. Can your users predict what your system will do next in familiar scenarios? When users can anticipate AI behavior with >80% accuracy, trust metrics improve dramatically. Run regular "prediction exercises" with early users to identify where your system's logic feels alien.

3️⃣ Auditable memory systems. Every autonomous step should log its chain-of-thought in domain language. These records serve multiple purposes: incident investigation, training data, and regulatory compliance. They become invaluable when problems occur, providing immediate visibility into decision paths.

For early-stage companies, these trust-building mechanisms are more than luxuries; they accelerate adoption. When selling to enterprises or regulated industries, they're table stakes. The fastest-growing AI companies don't just build better algorithms; they build better trust interfaces. While resources may be constrained, embedding these principles early costs far less than retrofitting them after hitting an adoption ceiling.
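An auditable memory system can start as simply as a structured action log. The sketch below is a "minimum viable trust" illustration with invented names (`AuditLog`, `pricing-agent`), not a specific product's API; a production version would add signing and durable storage.

```python
# Minimal sketch: every autonomous step records its actor, action, inputs,
# and a plain-language rationale, so decision paths can be reviewed later.
import json
import time

class AuditLog:
    def __init__(self):
        self.records = []

    def log(self, actor: str, action: str, rationale: str, inputs: dict):
        self.records.append({
            "ts": time.time(),
            "actor": actor,
            "action": action,
            "rationale": rationale,   # domain language, for non-engineers
            "inputs": inputs,
        })

    def export(self) -> str:
        """Dump the full trail for incident review or compliance."""
        return json.dumps(self.records, indent=2)

log = AuditLog()
log.log("pricing-agent", "discount_applied",
        "loyal customer, 3+ renewals",
        {"customer_id": "c-42", "discount": 0.1})
```

Because each record pairs the action with its rationale in domain terms, the same log serves incident investigation, training data collection, and compliance review.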
Small teams can implement "minimum viable trust" versions of these strategies with focused effort. Building AI products is fundamentally about creating trust interfaces, not just algorithmic performance. #startups #founders #growth #ai
-
One of the most interesting aspects of my last few roles, including my current work at Humain, is operating at the intersection of AI and advanced security and encryption techniques, from zero-knowledge proof systems to the extension of Zero Trust principles into the agentic world.

In traditional Zero Trust, we authenticate users and devices. In the agentic world, the “user” could be an autonomous agent: a system that reasons, acts, and interacts with data and other agents, often at machine speed. That changes everything. To secure this new ecosystem, Zero Trust must evolve from static identity verification to dynamic trust orchestration, where every action, decision, and data exchange is continuously verified, contextual, and cryptographically enforced.

1. Agent Identity and Attestation. Every agent must have a verifiable, cryptographically signed identity and prove its integrity at runtime; not just who you are, but what you’re running: the model, weights, policy context, and data provenance.

2. Intent-Aware Policy Enforcement. Access control must become intent-aware, so agents act only within bounded policy domains defined by explicit goals, permissions, and ethical constraints, continuously verified by embedded governance logic.

3. Least Privilege and Time-Bound Access. Agents must operate under least privilege, with access granted only for the minimum scope and duration required. In fast-moving agentic environments, time-limited trust becomes an essential safeguard.

4. Assumed Breach and Blast Radius Containment. We must assume some agents or environments will be compromised. Security design should minimise impact through microsegmentation, strict trust boundaries, and dynamic reassessment of communication between agents.

5. Encrypted Cognition. As models process sensitive data, confidential AI becomes essential: combining homomorphic encryption, secure enclaves, and multi-party computation can ensure that the model cannot “see” the data it processes. Zero Trust now extends into the reasoning process itself.

6. Adaptive Trust Graphs. Agents, services, and humans form dynamic trust graphs that evolve based on behaviour and context. Continuous telemetry and anomaly detection allow these graphs to adjust privileges in real time based on risk.

7. Cryptographic Provenance. Every output, decision, summary, or recommendation must be traceable back to the data, model, and policy that produced it. Provenance becomes the new perimeter.

8. Autonomous Audit and Forensics. Every action should be self-auditing, cryptographically signed, and non-repudiable, forming the foundation for verifiable operations and compliance.

9. Machine-to-Machine Governance. As agents begin to negotiate, transact, and collaborate, Zero Trust must extend into inter-agent diplomacy, embedding ethics, accountability, and policy directly into machine communication.

If you’re working on AI security, agent governance, or confidential computation, I’d love to connect.
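Point 1, attesting *what is running* rather than just who, can be illustrated with a signed runtime manifest. This is a toy sketch under stated assumptions: HMAC with a shared key stands in for a real PKI or TPM-backed signature, and the manifest fields and agent names are invented.

```python
# Minimal sketch: sign an agent's runtime manifest (model, weights hash,
# policy version) so a verifier can detect any change to what is deployed.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-replace-with-real-pki"  # placeholder, not production

def attest(manifest: dict) -> str:
    """Sign a canonical (sorted-keys) encoding of the manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(manifest: dict, signature: str) -> bool:
    return hmac.compare_digest(attest(manifest), signature)

manifest = {
    "agent_id": "planner-01",
    "model": "acme-llm-v3",
    "weights_sha256": "deadbeef",   # placeholder for the real weights hash
    "policy_version": "2025-06",
}
sig = attest(manifest)
tampered = {**manifest, "policy_version": "rogue"}
```

Any change to the model, weights, or policy invalidates the signature, which is the property that lets a counterparty trust what an agent is running, not merely its name.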
-
“Visibility without context is just data overload.” (Explanation with a case study from the Gulf)

Because knowing everything means nothing if you don’t know what to do with it. And in OT environments, information without relevance isn’t insight; it’s interruption. Most OT tools show you everything except what actually matters to the plant manager, the engineer, or the vendor trying to finish the job without breaking the system.

📖 STORY: THE REFINERY MISALIGNMENT IN THE GULF
We were working with a large industrial operation in the Gulf, a critical part of the region’s energy supply chain. The company ran multiple sites, from refining units to chemical plants, spread across remote areas with legacy systems and rotating field teams. Their IT leadership had just rolled out a sophisticated OT visibility and threat detection platform. They called it “total visibility.” The OT teams called it something else.

Almost overnight, the SOC was flooded with thousands of alerts triggered by routine maintenance, remote vendor logins, and unmanaged legacy equipment that had been running safely for years. The alerts weren’t just overwhelming; they were unactionable. Field engineers didn’t know what to respond to. The SOC couldn’t tell which alerts truly mattered. Vendor tasks were delayed. Access requests were denied. Production timelines slipped. No breach. No attack. Just friction from tools that lacked context.

💡 INSIGHT
Culture determines how people interpret urgency, ownership, and risk. And cybersecurity, especially in OT, isn’t just about controls. It’s about clarity across:
🧠 IT and OT
🧱 Engineering and security
🤝 Internal teams and external vendors
When that alignment breaks, even the best tools break trust. Because it’s not how much you see; it’s how clearly you understand what to do with it.

🔄 SHIFT IN THINKING
❌ Don’t start with dashboards. ✅ Start with context.
❌ Don’t lead with policy. ✅ Lead with partnership.

What secures OT environments isn’t just more data; it’s purposeful visibility that respects uptime, safety, and operational flow.

✅ TAKEAWAYS
🔸 Tune your alerts to match operational reality, not just technical severity
🔸 Make risk language understandable across departments
🔸 Give OT teams the clarity they need to act, not just react
🔸 Build trust between SOC, engineering, and vendors before a crisis strikes

📩 CTA
If you’re leading cybersecurity in critical infrastructure or industrial operations and struggling with alert fatigue, misalignment, or tool rejection, DM me. We’ll share the Context-First Visibility Framework we use to turn noise into action and finger-pointing into functional trust.

👇 Where have you seen too much visibility become the real vulnerability?
#CyberLeadership #OTSecurity #VisibilityWithContext #OperationalClarity #ITOT #SecurityCulture
-
LLMs are stateless. They wake up dumb and forgetful every single turn. All the intelligence you think you’re seeing? It’s assembled on the fly from whatever context you feed them. That’s what Google’s new whitepaper calls Context Engineering: dynamically assembling system instructions, history, tools, and long-term memory so an agent can reason like it’s alive instead of starting from zero. Here’s what that shift actually means:

1. Sessions are the new runtime. Every conversation becomes a container: a log of events, tool calls, and working memory. Treat it like a scratchpad, not a database. Compact aggressively. Summarize relentlessly.

2. Memory is the new database. It’s not the chat history; it’s the extracted signal. A structured layer that remembers meaning, not tokens. RAG makes your agent an expert on facts. Memory makes it an expert on you.

3. The architecture flips. Context isn’t just a prompt anymore. It’s an orchestrated payload: user profile, history, retrieved facts, and session state, all stitched together per turn. Every request becomes a small act of real-time data engineering.

4. Asynchronous pipelines are mandatory. Memory extraction and consolidation must run in the background. Blocking memory writes kills responsiveness.

5. Trust is an engineering problem. Every memory needs provenance: who said it, when, and how trustworthy it is. Without that, your personalized AI becomes a confident liar with a long-term memory.

This is the invisible layer that separates chatbots from true digital colleagues. Models are commodities. Context is strategy. Enterprises that master context engineering will own the interface between human and machine cognition. Everyone else will just be renting predictions.
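Point 3, the orchestrated per-turn payload, can be sketched concretely. This is an illustrative sketch, not Google's design: the summarizer is a naive last-N truncation standing in for real compaction, and the memory store and document shapes are invented.

```python
# Minimal sketch: each turn's context is stitched together from long-term
# memory, retrieved facts, and a compacted session history -- not the raw
# chat transcript.

def summarize(history: list[str], limit: int = 3) -> list[str]:
    """Stand-in for aggressive compaction: keep only the last few turns."""
    return history[-limit:]

def build_turn_context(user_msg, session_history, memory_store, retrieved_docs):
    return {
        "system": "You are a helpful colleague. Cite sources.",
        "profile": memory_store.get("profile", {}),   # long-term memory
        "facts": retrieved_docs,                      # RAG results, with provenance
        "history": summarize(session_history),        # compacted session state
        "message": user_msg,
    }

ctx = build_turn_context(
    "What did we decide about the launch date?",
    ["turn1", "turn2", "turn3", "turn4"],
    {"profile": {"team": "growth"}},
    [{"text": "Launch moved to June 12",
      "source": "planning-doc", "ts": "2025-05-01"}],
)
```

Note that each retrieved fact carries a source and timestamp, which is exactly the provenance point 5 calls for.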
-
Gartner just published a framework mapping which software categories will structurally win or lose in the AI execution era. The x-axis is AI decision authority. The y-axis is control of enterprise context, defined as authoritative identity, policy, system-of-record access, and audit. The danger zone is low context, high authority: systems that execute on behalf of enterprises without actually knowing the enterprise.

This is the same argument we’ve been making, just from the other direction. Most AI vendors compete on the authority axis: more autonomy, more actions, more agents doing more things. The assumption is that capability is the constraint. What Gartner is pointing out is that context is the real constraint. You can give an AI all the authority in the world; if it doesn’t know who’s asking, what they’re allowed to see, and how they relate to the rest of the organization, it’s operating on incomplete information. Yet that’s the default state of most enterprise AI deployments today.

When we built Atolio, we started with the context layer. This includes:
1. Unified identity across systems
2. Permissions enforced at query time
3. A collaboration graph that truly understands relevance

Together, those three things determine whether an AI system knows enough about your organization to be trusted with it. The search experience is what people see. The context layer is why it works, or why it doesn’t.
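"Permissions enforced at query time" means results are filtered against the caller's identity at retrieval, so the model never sees documents the user could not open themselves. The sketch below is a deliberate simplification with an invented ACL model, not Atolio's implementation.

```python
# Minimal sketch: filter the candidate set by the caller's group membership
# before ranking, so access control happens inside the query path itself.

DOCS = [
    {"id": "d1", "text": "Public roadmap", "allowed": {"*"}},
    {"id": "d2", "text": "M&A memo",       "allowed": {"finance"}},
]

def search(query: str, user_groups: set[str]) -> list[str]:
    visible = [d for d in DOCS
               if "*" in d["allowed"] or d["allowed"] & user_groups]
    # (relevance ranking against `query` would happen here, on visible docs only)
    return [d["id"] for d in visible]
```

An engineer's query never surfaces the finance-only memo, while a finance user sees both; the filter runs per query, so a permission change takes effect immediately rather than waiting for a reindex.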
-
You're in a Principal GenAI Engineer interview at Microsoft, and the interviewer asks: "Our production RAG system has a 200K token context window. Why is it still failing on complex queries?"

Here's how you can answer:

A. Most candidates say "bigger context = better performance." Dead wrong.
B. There are 4 critical context hygiene failures that kill even GPT-5.

𝟭. 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗗𝗶𝘀𝘁𝗿𝗮𝗰𝘁𝗶𝗼𝗻 - 𝗧𝗵𝗲 𝗽𝗮𝘀𝘁 𝗯𝗲𝗰𝗼𝗺𝗲𝘀 𝗮 𝗽𝗿𝗶𝘀𝗼𝗻
The agent becomes burdened by too much history. What happens:
- Tool outputs from 50 interactions ago are still clogging context
- Past summaries pile up like digital hoarding
- The agent over-relies on repeating past behavior instead of reasoning fresh

𝟮. 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗖𝗼𝗻𝗳𝘂𝘀𝗶𝗼𝗻 - 𝗧𝗵𝗲 𝘁𝗼𝗼𝗹 𝗱𝘂𝗺𝗽 𝗱𝗶𝘀𝗮𝘀𝘁𝗲𝗿
Irrelevant tools or documents crowd the context. What happens:
- The system prompt includes 40 tool descriptions
- The agent gets distracted by weather_api when the user asks about payment processing
- Wrong tool selection rates spike to 30%+

Production nightmare: a trading bot has access to news_search, sentiment_analysis, stock_price, portfolio_manager, risk_calculator, and order_executor. The user asks: "What's the current price of AAPL?" The agent calls order_executor instead of stock_price. Why? Tool descriptions competing for attention in a crowded context.

Solutions:
- Dynamic tool selection: filter and load only the tools relevant to each query
- Tool routing agent: a dedicated agent pre-selects applicable tools before the main reasoning step
- Quality validation: check whether retrieved information is actually useful

𝟯. 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗖𝗹𝗮𝘀𝗵 - 𝗖𝗼𝗻𝘁𝗿𝗮𝗱𝗶𝗰𝘁𝗼𝗿𝘆 𝗶𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻 𝗽𝗮𝗿𝗮𝗹𝘆𝘀𝗶𝘀
Conflicting information within the context misleads the agent. What happens:
- Document A says "Feature X launches Q1 2024"
- Document B says "Feature X delayed to Q3 2024"
- The agent gets stuck between conflicting assumptions

Solutions:
- Source attribution: track which chunk came from where
- Temporal awareness: weight recent information higher

𝟰. 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗣𝗼𝗶𝘀𝗼𝗻𝗶𝗻𝗴 - 𝗧𝗵𝗲 𝗰𝗼𝗺𝗽𝗼𝘂𝗻𝗱𝗶𝗻𝗴 𝗲𝗿𝗿𝗼𝗿 𝗰𝗮𝘁𝗮𝘀𝘁𝗿𝗼𝗽𝗵𝗲
Incorrect or hallucinated information enters the context. What happens:
- The agent hallucinates "User's API key is abc123" (wrong)
- Stores this in memory
- Reuses the wrong key in 47 subsequent interactions
- Each failure reinforces the bad data

Solutions:
- Human-in-the-loop: critical decisions require confirmation
- Self-correction: the agent periodically validates its own stored memories
- Fact-checking layer: cross-reference with authoritative sources

𝗪𝗵𝗲𝗻 𝗲𝗮𝗰𝗵 𝘀𝗼𝗹𝘂𝘁𝗶𝗼𝗻 𝘄𝗶𝗻𝘀:
✅ Context summarization: conversational agents with long history
✅ Context pruning: agentic systems that accumulate tool outputs
✅ Dynamic tool selection: multi-tool environments (10+ tools)
✅ Quality validation: mission-critical applications (finance, healthcare)
✅ Conflict resolution: multi-source RAG systems
✅ Fact-checking: agents that store and reuse information
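The dynamic tool selection fix can be sketched in a few lines. This is a toy illustration: keyword overlap between the query and each tool description stands in for a real embedding-similarity ranker, and the tool names and descriptions are taken from the trading-bot example above.

```python
# Minimal sketch: score each tool description against the query and expose
# only the top match to the agent, instead of loading every description
# into the context.

TOOLS = {
    "stock_price":    "get the current market price of a stock ticker",
    "order_executor": "place buy or sell orders in the trading account",
    "weather_api":    "current weather forecast for a city",
}

def select_tools(query: str, k: int = 1) -> list[str]:
    """Rank tools by word overlap with the query; keep the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        TOOLS,
        key=lambda name: -len(q_words & set(TOOLS[name].split())),
    )
    return scored[:k]

# For the price query, only stock_price reaches the agent's context,
# keeping order_executor out of the running entirely.
tools = select_tools("what is the current price of AAPL")
```

A production router would use embeddings and a threshold rather than raw word overlap, but the structural point is the same: the main agent only ever sees a small, relevant tool set.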
-
Breaking the “Curse of Knowledge” in Software Teams — with the Help of AI

One of the silent killers of collaboration in software development is the “Curse of Knowledge.” It happens when someone forgets what it’s like not to know something. The more experienced we become, the harder it gets to see the world from a beginner’s perspective.

💻 How it shows up in teams:
- Senior managers and engineers give high-level feedback that juniors can’t act on.
- Documentation assumes everyone knows the same internal jargon.
- Code reviews skip over “obvious” steps that aren’t obvious to new members.
- Conversations move too fast for some, while others quietly stop asking questions.

⚙️ Impact on the team:
- Knowledge gaps widen.
- Onboarding slows down.
- Psychological safety drops — people hesitate to admit confusion.
- Collaboration turns into parallel work rather than shared problem-solving.

🧠 Impact on individuals:
- Juniors feel lost or underconfident.
- Senior managers and engineers get frustrated explaining “basics.”
- Communication friction rises, productivity falls.

✅ What can we do about it (as humans):
- Adopt a beginner’s mindset. When explaining, imagine you’re talking to your past self from 3 years ago.
- Encourage questions. Celebrate curiosity, not just correctness.
- Write empathetic documentation. Assume the reader is smart but new.
- Bridge context gaps deliberately. In code reviews or discussions, explain why, not just what.
- Pair often. Nothing breaks the curse like seeing how others think.

🤖 And here’s where AI tools can now help:
- The new wave of AI agents and copilots are quietly becoming context bridges inside engineering teams.
- AI pair programmers (like GitHub Copilot or Cody) can explain code logic in plain English, giving juniors confidence to explore independently.
- AI review agents can flag unclear code or missing rationale, prompting seniors to communicate intent instead of just syntax.
- AI documentation assistants can auto-generate and update docs that adapt to different skill levels or roles — making knowledge accessible, not tribal.
- AI context agents can surface related decisions, architecture notes, and previous discussions — preventing repeated “why did we do this?” moments.

Together, these tools reduce friction, democratize context, and help teams think out loud — at scale.

💬 The takeaway: Breaking the Curse of Knowledge isn’t just about empathy anymore — it’s about using AI to amplify empathy. When we pair human intent with intelligent tools, we don’t just build better teams — we build better understanding.