15 people sent me the same article in the last 24 hours: OpenAI's announcement of how they built their own in-house data agent. Why does everyone think I need to see this? Beyond just being interesting, it validates something I've been saying for years: the model isn't the hard part. Context is.

When we started talking about the idea of context being king for AI at Atlan, people would sometimes respond with blank stares: "Why are you building a context platform? Just plug in GPT." Finally, I can send them this article from OpenAI as a response. As they put it:

"CONTEXT IS EVERYTHING. High-quality answers depend on rich, accurate context. Without context, even strong models can produce wrong results, such as vastly misestimating user counts or misinterpreting internal terminology. To avoid these failure modes, the agent is built around multiple layers of context that ground it in OpenAI’s data and institutional knowledge."

To make their data agent successful, OpenAI needed to unify many different types of context from different sources, both within and beyond their data platform. They call it "multilayered contextual grounding." Here's what that means:

→ Table usage: Going beyond table names to understand how data flows and gets used (e.g. table schemas, relationships, lineage, usage patterns, and historical queries)
→ Human annotations: Pulling from domain-expert knowledge for each table that goes beyond metadata (e.g. semantics, business meaning, and known caveats)
→ Codex enrichment: Examining the code behind each data table to understand insights like scope and granularity, which can highlight important differences between tables that look similar on the surface
→ Institutional knowledge: Pulling context from Slack, Google Docs, and Notion to understand company specifics (e.g. launches, reliability incidents, internal codenames, key metrics)
→ Memory: Saving and learning from prior user corrections and agent discoveries over time via saved, editable memories
→ Runtime context: Live queries to the data warehouse or other data platform systems when context is missing or stale

Can't wait for the next time someone tells me that context is easy. I'll just send them this article! Great work by Bonnie Xu, Aravind Suresh and Emma Tang.
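The layers above can be pictured as a handful of data structures that all feed one grounded prompt. Here is a minimal Python sketch of that idea; every class and field name is invented for illustration, and this is not OpenAI's actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "multilayered contextual grounding". The layer
# names mirror the post (table usage, annotations, code insights,
# institutional knowledge, memory); nothing here comes from OpenAI's code.

@dataclass
class TableContext:
    name: str
    schema: dict          # table usage: columns and types
    lineage: list         # upstream tables this one is built from
    annotations: list     # human notes: semantics, business meaning, caveats
    code_insights: list   # "Codex enrichment": scope and granularity notes

@dataclass
class AgentContext:
    tables: list = field(default_factory=list)
    institutional: list = field(default_factory=list)  # Slack/Docs/Notion snippets
    memories: list = field(default_factory=list)       # saved user corrections

    def grounding_prompt(self, question: str) -> str:
        """Flatten every context layer into one grounded prompt."""
        parts = [f"Question: {question}"]
        for t in self.tables:
            parts.append(f"Table {t.name}: schema={t.schema}, lineage={t.lineage}")
            parts.extend(f"Note on {t.name}: {a}" for a in t.annotations)
            parts.extend(f"Code insight on {t.name}: {c}" for c in t.code_insights)
        parts.extend(f"Company knowledge: {k}" for k in self.institutional)
        parts.extend(f"Memory: {m}" for m in self.memories)
        return "\n".join(parts)

ctx = AgentContext(
    tables=[TableContext(
        "daily_users", {"user_id": "int", "day": "date"},
        ["raw_events"], ["excludes internal test accounts"],
        ["grain: one row per user per day"],
    )],
    institutional=["'DAU' here means 28-day active, not daily"],
    memories=["user corrected: fiscal year starts in February"],
)
print(ctx.grounding_prompt("How many users did we have last week?"))
```

The point of the sketch is only that each layer answers a different failure mode: schemas and lineage catch the wrong-table mistakes, annotations and institutional snippets catch terminology mistakes, and memories stop the agent from repeating corrected errors.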
The Importance of Context in Data Analysis
Summary
Understanding the importance of context in data analysis means recognizing that data alone doesn't tell the whole story—context provides the background and circumstances that explain why numbers look the way they do. Without context, data can be misinterpreted, leading to flawed conclusions and decisions.
- Ask deeper questions: Always investigate what factors might have influenced the data, such as timing, external events, or underlying causes.
- Include human insights: Pair data with knowledge from domain experts and real-world experience to add meaning and relevance.
- Challenge assumptions: Regularly review what may be missing from your data and question if implicit biases or gaps exist in the information you collect.
-
Context: The Human Element Data and AI Might Be Missing

We often define context as "the surrounding discourse that explains meaning." In the realm of Data and AI, this definition matters significantly. Raw data is abundant, varied, and stored, but is the "why" present? Does it inherently provide the narrative needed for true understanding? The answer is often no.

Similarly, while AI is easily consumed and prompted, its responses, without adequate context, risk being technically correct but profoundly wrong. It gets credited with intelligence because it is linguistically right, but that can mask a fundamental lack of background knowledge.

AI is a mirror of humanity and its data. It is a reflection of us. It computes powerfully for us, yet it lacks the lived experience and critical thinking we possess. It doesn't necessarily own the "why" behind the response it gives. A danger lies in anthropomorphizing the technology and handing over our essential human traits. We are the ones who supply the ideas, creativity, and critical thinking to the data and the models.

The call to action is clear: do not forgo your human traits. Own and bring the context. Bring the storytelling, the narrative, and the rigorous questions to every data set and every AI reply. Data and AI are our partners, not our replacements. Their value is maximized when partnered with a human, augmenting the human and allowing the human to be, well, more human. Bring yourself and the context. Stay nerdy, my friends. #ArtificialIntelligence #DataScience #HumanCentricAI #Innovation
-
Most MIS dashboards proudly display KPIs, graphs, and trends. They're sleek, data-rich, and look impressive in boardrooms. But here's the uncomfortable truth 👉 data without context is just noise.

Imagine this: 📈 you notice a 20% jump in weekly sales. Applause all around? But what if that spike was due to a one-off promo campaign? Or a competitor store shutting down temporarily? Or maybe... it's just Diwali week?

MIS reporting shouldn't stop at "what happened." The real power lies in connecting the dots: Why did it happen? What external or internal triggers played a role? What does it mean for future planning?

Too many decisions get made on raw trends without narratives. And that's risky business.

💡 Good reports inform.
💡 Great reports explain.
💡 Powerful reports influence decisions by combining data with context, timing, and relevance.

📣 How do you ensure your reports speak the full story? Do you layer in business context, or let the numbers speak alone? #DataWithContext #DataDrivenDecisionMaking #ExcelDashboards #DataAnalytics
-
I've been seeing a lot of discussion lately about product managers questioning the absolute authority of data. This resonates with me, because while data is undoubtedly crucial for making informed decisions, it's vital to understand the context behind those data points. Blindly following data without context can be misleading and drive us to make flawed decisions. A sudden drop in user engagement, for example, could have various explanations beyond just the product itself. External factors, UI changes, or even bugs could be at play. For me, context is the key when making decisions that stem from data. This means digging deeper and asking questions like:
- What specific question are we trying to answer with this data?
- How was the data collected, and are there any potential biases?
- What are the limitations of this data set?
- What are the broader trends and external factors at play?
The idea is to make decisions that are not just informed by numbers, but also grounded in the real world.
-
Last week at one of my AI workshops, someone asked me, "Are there ways to make AI bias-free with the kind of data we have?" And, of course, I gave a long, nerdy, comfortable-in-the-rabbit-hole response... But here is one key idea from that piece I want to share: we need to move away from the "data is data" mindset.

Raise your hand if you have said this at least once and heard it more than once: "Data is data." That three-word statement asserts that our data is a neutral entity, based on objective truth. As if those numbers on a chart, screen, or Excel sheet are absolutes. And that is the problem: data is never just data. In treating that statement as a fact, we risk missing that
● a number without context is just a statistic that can be used (or misused) however we choose.
● a survey result without a 'why' can reinforce the wrong assumptions.
● a trend in AI-generated insights is only as unbiased as the dataset it was trained on.

If we take data at face value, we risk replicating the same biases, blind spots, and systemic inequities that got us here in the first place. I am not saying we should not trust what our data says. (I am a data scientist at heart, after all.) I am asking you to do the homework of building that "trust" into what our data says. (I am also a life coach, you see.)

So, how do we go beyond these "absolutes"? Here are some examples:
● Add context to numbers. Example: If your data tells you 60% of your donors are over 55, ask why. What about your outreach, messaging, or history has led to this? Is anything missing from the picture?
● Ask 'why?' more than once. Example: Data says program participation is dropping. Instead of stopping at "engagement is low," ask again: Why? Is it accessibility? Timing? Lack of awareness?
● Identify what is missing. Example: Data only shows what was collected. Who wasn't surveyed? What wasn't asked? What assumptions were baked into the process of this historical data collection?
● Act on the data you collect. Example: Evaluate whether your data collection and your actions are aligned. Are you asking for something and not acting on what you hear? Or are you avoiding questions that might bring back negative reactions or feedback?

If your data constantly confirms the status quo, challenge it. And when (not "if") it reveals something uncomfortable, lean in. To go beyond the "data is data" mindset, we need to go beyond collecting data. Our work, then, is in listening to it, questioning it, making it available to everyone involved in it, and yes, acting on it. #nonprofits #nonprofitleadership #community
-
Without context, it's just numbers. Raw numbers only tell part of the story. To truly understand performance, context is key.

Take an athlete's total distance: 10 km. Is that impressive? Average? An outlier? Without context, it's impossible to know. That's where tools like averages, percentiles, and standard deviations come in. For example:
1) Averages show the baseline. If the team average is 8 km, a 10 km distance is above average.
2) Percentiles reveal ranking. If an athlete is in the 90th percentile, they outperform 90% of their peers.
3) Standard deviations measure consistency. A player with wild fluctuations in training load might struggle with performance stability.

By adding context, raw data turns into actionable insights that drive smarter decisions.
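The three tools above fit in a few lines of Python's standard library. The team distances below are made-up numbers purely for illustration:

```python
import statistics

# Hypothetical team distances in km; the 10 km athlete is the one we evaluate.
team_km = [6.5, 7.0, 7.5, 8.0, 8.0, 8.5, 9.0, 10.0]
athlete_km = 10.0

# 1) Average shows the baseline: is 10 km above or below typical?
mean = statistics.mean(team_km)

# 2) Percentile reveals ranking: share of teammates at or below this distance.
percentile = 100 * sum(d <= athlete_km for d in team_km) / len(team_km)

# 3) Standard deviation measures spread; the z-score says how many
#    standard deviations above the team mean this athlete sits.
stdev = statistics.stdev(team_km)
z = (athlete_km - mean) / stdev

print(f"mean={mean:.2f} km, percentile={percentile:.0f}, z={z:+.2f}")
```

With these numbers the athlete is above the team mean, at the top of the group, and well over one standard deviation above baseline, which is exactly the kind of framing the raw "10 km" lacks on its own.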
-
There’s a long-running joke in Star Trek that if you wore a red shirt, you were doomed. But is it really true, or just sci-fi folklore? When you start looking at the data, the story becomes far more interesting.

Yes, red-shirted crew members did die more often: 73% of fatalities, compared to just 13.7% of the crew overall. On the surface, it looks like the uniform was a death sentence. But as with most analytics, the first number rarely tells the whole story. When you segment the data, you notice patterns:

Context matters. Most of those deaths didn’t happen on the ship. They happened on away missions. Crew members in red weren’t cursed; they were more likely to be security and engineering staff sent into danger in the first place.

Events skew results. Nearly half of all red-shirt deaths happened in clusters, missions where multiple crew members died at once. Outliers and extraordinary events can distort the overall averages.

Human behavior changes the outcome. Here’s the surprising twist: when Captain Kirk had a romantic involvement with a local alien, the odds of survival for red shirts jumped by 84%. His choices influenced the storyline, which in turn shifted the data.

That’s the essence of analytics. Numbers in isolation can be misleading. Without segmentation, context, and behavioral insights, you end up chasing myths instead of truth.

The lesson translates perfectly into marketing: Looking only at traffic totals can hide the fact that most of your conversions come from a specific channel or campaign. A spike in leads may look great, until you realize it came from a single event or an unqualified audience. Customer behavior, motivations, and context can dramatically change how data should be interpreted.

Analytics isn’t just about counting what happened. It’s about uncovering why it happened. In Star Trek, it meant understanding the red-shirt myth. In business, it means avoiding costly mistakes and building strategies that actually reflect reality.
That’s the difference between raw data and real insight. https://lnkd.in/gJJhdAv4
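The segmentation step is easy to demonstrate in code. The counts below are invented for illustration (they are not the show's real tallies); the point is that a headline share of deaths looks damning until you normalize by exposure, i.e. how often each role goes on away missions:

```python
# Hypothetical counts per role: (crew size, away-mission slots, deaths).
# Invented numbers chosen so the per-mission risk is identical across roles.
crew = {
    "red_shirt":  (240, 900, 24),
    "gold_shirt": (120, 300, 8),
    "blue_shirt": (70,  150, 4),
}

total_deaths = sum(v[2] for v in crew.values())

rates = {}
for role, (size, missions, deaths) in crew.items():
    share_of_deaths = deaths / total_deaths   # the naive headline number
    per_mission = deaths / missions           # risk normalized by exposure
    rates[role] = (share_of_deaths, per_mission)
    print(f"{role}: {share_of_deaths:.0%} of all deaths, "
          f"{per_mission:.1%} risk per away mission")
```

In this toy data, red shirts account for two-thirds of all deaths, yet every role has the same per-mission risk: the uniform was never the cause, the mission assignments were. That is the segmentation fallacy the post describes.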
-
Context Isn’t Optional! AI is amazing, yes, but AI without context is just guessing with confidence. I’ve seen companies spend millions on systems that were technically correct yet completely wrong in reality. One healthcare organization trusted an AI diagnosis blindly. The result? A near-miss that could have caused irreversible harm. The data wasn’t wrong. The interpretation was.

The difference? Experience. The kind you don’t get from a dashboard. The kind you can’t shortcut with a prompt. The kind that comes from years of pattern recognition, mistakes, and lessons learned in the real world. That’s why our job as leaders isn’t just to adopt technology; it’s to prepare our workforce to work with it. Because without context, we risk making stupid mistakes at scale, the kind of blunders that could have been avoided with one person in the room saying, “That doesn’t look right.”

Two Recommendations for Every Leader:
1. Institutionalize critical thinking. Make it a KPI to challenge and validate AI outputs before they’re acted upon.
2. Pair AI with domain expertise. Don’t let tech run unsupervised in areas where one wrong decision can create legal, financial, or reputational damage.

Because AI without context is automation without direction. And automation without direction doesn’t make you faster; it just gets you lost sooner. Let’s not outsource our thinking. Let’s value experience, and let’s prepare our workforce so that when AI gets it wrong, we still get it right. (Read the title carefully.)
-
Foundation Capital's piece on Context Graphs as AI's next trillion-dollar opportunity is making the rounds. For good reason. For those who’ve been following my posts, the thesis will feel familiar. But the article offers a fresh and timely articulation of why this moment matters now. What really resonated - both for me and in the follow-up commentary - is how clearly it identifies the missing layer in the enterprise data stack. Today, we tend to think about enterprise data in two places: 🔹 inside systems of record 🔹 aggregated downstream into data warehouses There is almost nothing in between. But real decisions are never made in a single system. They are made by stitching together signals from CRM, finance, operations, support systems, policy documents, Slack threads - often with human judgement applied at the seams. A simple example: An agent proposes a 20% discount. Policy caps it at 10% unless an exception is approved. The agent pulls service incidents, an open escalation, and a prior approval thread. Finance signs off. The CRM records one fact: “20% discount”. The reasoning trace - the why - disappears. Once the decision is made, the connective tissue is gone. And that’s the key insight: ⚡the most valuable data for enterprise AI is not just what happened, but how and why a decision was reached.⚡ This mirrors exactly what foundation model companies discovered when they started building reasoning models. Performance didn’t improve just by scaling data - it improved when they began collecting reasoning traces. In the enterprise, that trace is the human decision process itself. The article’s framing is spot on. Systems of record and data warehouses excel at storing facts, events, and transactions. What they don’t capture is the context that explains them - the exceptions, precedents, constraints, and trade-offs that shaped a particular outcome. That context is where meaning lives. And yes - that is semantics in action. 
What’s been most interesting is how widely this idea resonates: 🔹 builders see it as the architectural gap holding back agentic AI 🔹 strategists see it as the next durable data moat 🔹 practitioners recognise it as tacit knowledge that currently lives only in people’s heads If we can persist decision context as a first-class asset, we don’t just get better AI. We get explainability, auditability, organisational memory, and safer automation. The authors are right. This is a trillion-dollar market. Here's what they didn't say: the technology to build this already exists. Open standards. Battle-tested. Mostly free. Ready now. The gap isn't technical. It's awareness. ⭕ Foundation Capital Article: https://lnkd.in/e2mUrznM ⭕ KG: https://lnkd.in/ercbKedw 🔗 Build Your Own Semantics: https://lnkd.in/ezHU2amU
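Persisting decision context as a first-class asset can be as simple as storing a structured record next to the fact itself. A minimal sketch of the discount example, with every field name invented for illustration (no particular product or standard is implied):

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

# Hypothetical "decision trace" record. The CRM keeps the fact
# ("20% discount"); this record keeps the why that normally disappears.

@dataclass
class DecisionTrace:
    outcome: str                 # what the system of record stores today
    policy: str                  # the constraint that applied
    exception: Optional[str]     # why the constraint was overridden
    evidence: list               # signals stitched from other systems
    approver: str                # who applied human judgement at the seams

trace = DecisionTrace(
    outcome="20% discount",
    policy="discounts capped at 10% without approval",
    exception="retention risk after repeated service incidents",
    evidence=["3 open service incidents", "open escalation", "prior approval thread"],
    approver="finance",
)

# Serialize so the reasoning trace can live alongside the transactional fact.
record = json.dumps(asdict(trace))
```

However it is stored (a graph, a document store, an event log), the design choice is the same one the article argues for: the exception, the evidence, and the approver are captured as data, not lost in a Slack thread.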
-
Four datasets. Same stats. Wildly different stories. Visual inspection is not optional.

Anscombe's Quartet is a classic reminder of why plots matter. Each of the four datasets has:
👉 The same mean for X and Y
👉 The same variance for X and Y
👉 The same correlation between X and Y
👉 The same linear regression line

But when you plot them? 🚨 Completely different shapes:
✅ A linear relationship
✅ A clear curve
✅ An outlier dominating the trend
✅ A vertical line with a single influential point

Same stats. Different stories.

Why this matters in the real world:
👉 KPIs may hide anomalies
👉 Descriptive stats can obscure patterns
👉 Decision-makers might rely on misleading summaries

What looks like a tidy trend could actually be noise. Or worse: a trap. In data science, context is everything. And visualizing your data is often the fastest way to:
✅ Spot errors
✅ Identify outliers
✅ Understand relationships

Before trusting any model, always ask: Have we seen the data? 🎯 Plot first. Analyze second. Let's make this a norm: no summary statistics without visual context... especially in low-dimensional data.

Curious to hear from others: Have you ever been fooled by stats that looked perfect on paper but broke down when you visualized them? Drop your favorite example below. #statistics #datascience #dataviz #analytics
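You can verify the "same stats" claim directly. Anscombe's four datasets are published values, reproduced below; the code checks that the means and correlations match even though a plot shows four different shapes:

```python
import math

# Anscombe's quartet: datasets I-III share the same x values; IV is distinct.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = [
    (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8] * 7 + [19] + [8] * 3,
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]

def mean(v):
    return sum(v) / len(v)

def corr(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

for x, y in quartet:
    print(f"mean_x={mean(x):.2f}  mean_y={mean(y):.2f}  r={corr(x, y):.3f}")
```

All four rows agree to two or three decimal places (mean_x of 9.00, mean_y of about 7.50, r of about 0.816), which is exactly why summary statistics alone cannot distinguish a line, a curve, an outlier, and a single influential point.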