How Graph-Based Approaches Improve Data Management


Summary

Graph-based approaches use networks of connected data points—called graphs—to organize, retrieve, and reason with information, making it easier for AI systems and humans to understand complex relationships. By structuring data as interconnected entities and links, these methods help manage vast data sets and support smarter, more context-aware searches.

  • Build connected structures: Organize your data into graphs that map out relationships between key entities, so you can quickly trace facts and answer questions that require multiple steps.
  • Use hybrid search tools: Combine graph querying with traditional searches to uncover both direct matches and semantically relevant connections, ensuring you find diverse and meaningful information.
  • Streamline context: Summarize and retrieve only the most relevant parts of a graph for your queries, reducing information overload and improving the accuracy and clarity of answers.
Summarized by AI based on LinkedIn member posts
  • View profile for Brij kishore Pandey

    AI Architect & Engineer | AI Strategist

    721,353 followers

    Vector search gave LLMs memory. Graph databases gave LLMs relationships. But neither could give LLMs real-time reasoning. That’s the next frontier. Because agents don't just need content — they need connected knowledge that they can reason over, instantly.

    And here’s where the traditional stack fails: Most graph databases still “walk” through data — one node, one edge, one hop at a time. Exactly like humans flipping pages in a directory. That works for analytics. It collapses for AI agents.

    The core idea: What if graphs stopped behaving like “maps”… and started behaving like “math”? That’s the FalkorDB breakthrough. Instead of hopping from node to node — FalkorDB converts the entire graph into a sparse matrix. Your data becomes a mathematical object. And once your graph is math — queries become math too. Not traversal. Not step-by-step. Just matrix computation using linear algebra. And math doesn’t walk. It computes. Which means: real-time graph reasoning for agents. At scale.

    Why this changes the game for LLMs: Vector search tells you what is similar. Graphs tell you what is connected. But sparse matrix graphs tell you what is structurally meaningful — instantly. It’s the difference between finding a document… …and finding the truth inside a network of relationships. That's how agents will think.

    FalkorDB brings this into the real world:
    🔹 Graphs as sparse matrices — zero traversal overhead
    🔹 Linear algebra-powered queries — orders-of-magnitude faster
    🔹 Redis-native, open-source, lightweight deployment
    🔹 OpenCypher compatible — no need to learn a new language
    🔹 Built specifically for LLM context, agent memory, and reasoning

    I tested it — queries that took seconds now feel like function calls. Agents that relied on retrieval now reason in real-time. The future isn't LLMs with bigger context windows. It’s LLMs with smarter knowledge structures. And frameworks like FalkorDB will power that shift.
I’ve shared their GitHub link in the comments — explore it, run it, stress it. It feels like where Agent Memory is heading.
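The post's "graph as math" idea can be sketched in a few lines: represent the edges as an adjacency matrix, and answer a multi-hop question with one matrix product instead of node-by-node hops. This is a toy dense-matrix illustration in plain Python, not FalkorDB's actual sparse-kernel implementation; the example graph and names are mine.

```python
# Toy illustration: instead of walking the graph node-by-node, encode it
# as an adjacency matrix and answer multi-hop questions with matrix
# multiplication. (A dense list-of-lists sketch; a real engine would use
# optimized sparse kernels.)

nodes = ["alice", "acme", "bob", "widget"]
idx = {name: i for i, name in enumerate(nodes)}

# Directed edges, e.g. "alice works_at acme", "acme employs bob", ...
edges = [("alice", "acme"), ("acme", "bob"), ("bob", "widget")]

n = len(nodes)
A = [[0] * n for _ in range(n)]          # adjacency matrix
for src, dst in edges:
    A[idx[src]][idx[dst]] = 1

def matmul_bool(X, Y):
    """Boolean matrix product: Z[i][j] = 1 iff some k links i -> k -> j."""
    return [[int(any(X[i][k] and Y[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

A2 = matmul_bool(A, A)                   # all 2-hop connections, computed at once

# Who is exactly two hops from alice? (alice -> acme -> bob)
two_hops = [nodes[j] for j in range(n) if A2[idx["alice"]][j]]
print(two_hops)                          # -> ['bob']
```

One product answers the two-hop question for every node pair simultaneously, which is the point of the "math, not traversal" framing.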

  • View profile for Kuldeep Singh Sidhu

    Senior Data Scientist @ Walmart | BITS Pilani

    16,041 followers

    Rethinking Vector Search: Beyond Nearest Neighbors with Semantic Compression and Graph-Augmented Retrieval

    Traditional vector databases rely on approximate nearest neighbor (ANN) search to retrieve the top-k closest vectors to a query. While effective for local relevance, this approach often yields semantically redundant results, missing the diversity and contextual richness required by modern AI applications like RAG systems and multi-hop QA.

    The Problem with Proximity-Based Retrieval: Current ANN methods prioritize geometric distance but don't explicitly account for semantic diversity or coverage. This leads to retrieval results clustered in a single dense region, often missing semantically related but spatially distant content.

    Enter Semantic Compression: Researchers from Carnegie Mellon University, Stanford University, Boston University, and LinkedIn have introduced a new retrieval paradigm that selects compact, representative vector sets capturing broader semantic structure. The approach formalizes retrieval as a submodular optimization problem, balancing coverage (how well selected vectors represent the semantic space) with diversity (promoting selection of semantically distinct items).

    Graph-Augmented Vector Retrieval: The paper proposes overlaying semantic graphs atop vector spaces using kNN connections, clustering relationships, or knowledge-based links. This enables multi-hop, context-aware search through techniques like Personalized PageRank, allowing discovery of semantically diverse but non-local results.

    How It Works Under the Hood: The system operates in two stages: first, standard ANN retrieval generates candidates; then a greedy optimization algorithm selects the final subset. For graph-augmented retrieval, relevance scores propagate through both vector similarity and graph connectivity using hybrid scoring that combines geometric proximity with graph-based influence.
Real Impact: Experiments show graph-based methods with dense symbolic connections significantly outperform pure ANN retrieval in semantic diversity while maintaining high relevance. This addresses critical limitations in applications requiring broad semantic coverage rather than just local similarity. This work represents a fundamental shift toward meaning-centric vector search systems, emphasizing hybrid indexing and structured semantic retrieval for next-generation AI applications.
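A rough sketch of the two-stage retrieval the post describes: ANN gives candidates, then a greedy pass picks a subset that trades relevance against redundancy. A simple MMR-style score stands in here for the paper's submodular objective; the vectors, weight, and function names are illustrative.

```python
# Greedy diverse selection over ANN candidates: each step picks the item
# with the best (relevance - redundancy) score. MMR-style stand-in for a
# submodular coverage/diversity objective; all data below is toy data.
import math

def cos(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def greedy_diverse(query, candidates, k, lam=0.3):
    """Pick k items; score = lam*relevance - (1-lam)*max similarity to picks."""
    chosen, pool = [], list(range(len(candidates)))
    while pool and len(chosen) < k:
        def score(i):
            rel = cos(query, candidates[i])
            red = max((cos(candidates[i], candidates[j]) for j in chosen),
                      default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(pool, key=score)
        chosen.append(best)
        pool.remove(best)
    return chosen

q = [1.0, 0.0]
cands = [[1.0, 0.1],    # near-duplicate pair...
         [1.0, 0.1],
         [0.75, 0.66]]  # ...plus one semantically distinct item
print(greedy_diverse(q, cands, k=2))  # -> [0, 2]
```

Pure top-k would return the duplicate pair; the redundancy penalty makes the second pick the distinct item instead, which is the diversity effect the post describes.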

  • View profile for Vin Vashishta

    AI Strategist | Monetizing Data & AI For The Global 2K Since 2012 | 3X Founder | Best-Selling Author

    209,733 followers

    What’s the point of a massive context window if using over 5% of it causes the model to melt down? Bigger windows are great for demos. They crumble in production.

    When we stuff prompts with pages of maybe-relevant text and hope for the best, we pay in three ways:
    1️⃣ Quality: attention gets diluted, and the model hedges, contradicts, or hallucinates.
    2️⃣ Latency & cost: every extra token slows you down, and costs rise rapidly.
    3️⃣ Governance: no provenance, no trust, no way to debug and resolve issues.

    A better approach is a knowledge graph + GraphRAG pipeline that feeds the model the most relevant data with context instead of all the things it might need with no top-level organization.

    ✅ How it works at a high level:
    Model your world: extract entities (people, products, accounts, APIs) and typed relationships (owns, depends on, complies with) from docs, code, tickets, CRM, and wikis.
    GraphRAG retrieval: traverse the graph to pull a minimal subgraph with facts, paths, and citations, directly tied to the question.
    Compact context, rich signal: summarize those nodes and edges with provenance, then prompt. The model reasons over structure instead of slogging through sludge.
    Closed loop: capture new facts from interactions and update the graph so the system gets sharper over time.

    ✅ A 30-day path to validate it for your use cases:
    Week 1: define a lightweight ontology for 10–15 core entities/relations built around a high-value workflow.
    Week 2: build extractors (rules + LLMs) and load into a graph store.
    Week 3: wire GraphRAG (graph traversal → summarization → prompt).
    Week 4: run head-to-head tasks against your current RAG; compare accuracy, tokens, latency, and provenance coverage.

    Large context windows drive cool headlines and demos. Knowledge graphs + GraphRAG work in production, even for customer-facing use cases.
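The "minimal subgraph with facts, paths, and citations" step might look like this in miniature. The service names, relation types, and source paths below are invented for illustration; a production system would run this as a graph-database query.

```python
# Pull a small, provenance-carrying subgraph around the question's seed
# entities instead of stuffing whole documents into the prompt.
# Typed relationships with provenance: (subject, predicate, object, source).
triples = [
    ("checkout-svc", "depends_on", "payments-api", "wiki/arch.md"),
    ("payments-api", "owned_by", "team-billing", "crm/teams.csv"),
    ("payments-api", "complies_with", "PCI-DSS", "docs/compliance.md"),
    ("search-svc", "owned_by", "team-discovery", "crm/teams.csv"),
]

def subgraph(seeds, hops=2):
    """Collect triples reachable from the seed entities within `hops` steps."""
    frontier, facts = set(seeds), []
    for _ in range(hops):
        nxt = set()
        for s, p, o, src in triples:
            if s in frontier and (s, p, o, src) not in facts:
                facts.append((s, p, o, src))
                nxt.add(o)
        frontier = nxt
    return facts

# "Who owns what checkout depends on, and what rules apply?"
for s, p, o, src in subgraph({"checkout-svc"}):
    print(f"{s} -{p}-> {o}   [{src}]")
```

The returned facts are small enough to summarize into a compact prompt, and each one carries a citation, which is the provenance/governance point above.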

  • View profile for Tony Seale

    The Knowledge Graph Guy

    41,087 followers

    The Microsoft GraphRAG library has recently garnered significant attention, so I took a closer look at the paper and code to share a high-level summary with the community. At its core, GraphRAG combines data indexing and querying within a Knowledge Graph + LLM framework. Data is organized into hierarchical communities using a clustering algorithm, and queries leverage this structure to deliver more relevant responses to the LLM.

    🔵 Data Indexing: The system uses DataShaper Workflows to transform documents into structured tables (I was slightly disappointed that something more graph-native like JSON-LD wasn't used here). The LLM processes the entire dataset, extracting entities, relationships, and covariates (facts), which are then structured into a Knowledge Graph. These entities are grouped into communities using the Hierarchical Leiden Algorithm, summarized with an LLM, and embedded for efficient querying.

    🔵 Data Querying: GraphRAG supports two query types:
    🔹 Local Search: This bottom-up approach maps user queries to entities using the text embeddings generated from node descriptions. Relationships are identified, and communities are expanded based on the density of their connections. The most connected entities are ranked higher.
    🔹 Global Search: This top-down method uses map-reduce to concurrently process the most important points from each community and aggregate them into a final response.

    🔵 Final Thoughts: Microsoft has done an impressive job with community detection and summarization. I was surprised not to see a semantically enriched HNSW that could leverage the hierarchy of community levels, but perhaps this was deemed too resource-intensive. The lack of ontological depth is also notable—LLMs are doing most of the heavy lifting without much human intervention in mapping domain knowledge. Nonetheless, MS GraphRAG is a powerful tool that could pave the way for more structured, AI-driven data querying.
It will be interesting to see how this evolves as more organizations embrace Knowledge Graphs and ontologies in their LLM implementations. I hope you found this summary useful, and I’m keen to hear your thoughts and comments. ⭕ Code: https://lnkd.in/eiM3_S4t ⭕ Paper: https://lnkd.in/eMW_Ai-K ⭕ HNSW: https://lnkd.in/eH7JqEyZ ⭕ Distributed HNSW: https://lnkd.in/et3DTN2w
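The global-search map-reduce described above, reduced to its skeleton: each community summary yields a partial answer (map), then the partials are combined into one response (reduce). `ask_llm` is a stub standing in for real model calls, and the community summaries are toy data.

```python
# Skeleton of GraphRAG-style "global search": map over community
# summaries, then reduce the partial answers. The LLM is stubbed out.

community_summaries = [
    "Community A: papers on vector indexes (HNSW, IVF) and their trade-offs.",
    "Community B: work on entity extraction pipelines using LLMs.",
    "Community C: evaluation benchmarks for retrieval quality.",
]

def ask_llm(prompt: str) -> str:
    # Placeholder: a real system would call a language model here.
    return f"[answer grounded in: {prompt[:40]}...]"

def global_search(question: str) -> str:
    # Map: generate one partial response per community summary.
    partials = [ask_llm(f"{question}\nContext: {s}")
                for s in community_summaries]
    # Reduce: aggregate the partials into a final response.
    return ask_llm("Combine these partial answers:\n" + "\n".join(partials))

print(global_search("What are the main research themes?"))
```

Because each map step sees only one community summary, no single prompt has to hold the whole corpus, which is how global questions stay tractable.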

  • View profile for Vignesh Kumar

    AI Product & Engineering | Start-up Mentor & Advisor | TEDx & Keynote Speaker | LinkedIn Top Voice ’24 | Building AI Community Pair.AI | Director - Orange Business, Cisco, VMware | Cloud - SaaS & IaaS | kumarvignesh.com

    21,051 followers

    🚀 Why RAG alone won’t get us there—and how Agentic RAG helps

    I've used RAG systems in multiple products—especially in knowledge-heavy contexts. They help LLMs stay grounded by retrieving supporting documents. But there’s a point where they stop being useful.

    Let me give you a simple example. Let’s say you ask:
    👉 “Which medical researchers have published on long COVID, what clinical trials they were part of, and what other conditions those trials studied?”

    A classical RAG system would:
    1️⃣ Look for text chunks that match “long COVID”
    2️⃣ Return some papers or abstracts
    3️⃣ And leave the LLM to guess or hallucinate the rest

    And here’s the problem: you're not just looking for one passage. You're asking for a chain of connected facts:
    🔹 Authors → 🔹 Publications → 🔹 Clinical trials → 🔹 Other conditions

    RAG systems were never built to follow that trail. They do top-k lookup and feed static chunks to the LLM. No planning. No reasoning. No ability to explore relationships between entities.

    That’s where Agentic RAG with Knowledge Graphs comes in. Instead of dumping search results, the system:
    ✅ Breaks the question into steps
    ✅ Uses structured data to navigate relationships (e.g., author–trial–condition)
    ✅ Assembles the answer using small, verifiable hops
    ✅ Uses tools for hybrid search, graph queries, and concept mapping

    You can think of it like this: a classical RAG is like searching through a pile of papers with a highlighter, while Agentic RAG is like giving the job to a smart analyst who understands the question, walks through your research database, and explains how each part connects.

    I am attaching a paper I read recently that demonstrated this well—they used a mix of Neo4j for knowledge graphs, vector stores for retrieval, and a lightweight LLM to orchestrate the steps. The key wasn’t the model size—it was the structure and reasoning behind it.
I believe that this approach is far more suitable for domains where: 💠 Information lives across connected sources 💠 You need traceability 💠 And you can’t afford vague or partial answers I see this as a practical next step for research, healthcare, compliance, and enterprise decision-support. #AI #LLM #AgenticRAG #KnowledgeGraph #productthinking #structureddata I write about #artificialintelligence | #technology | #startups | #mentoring | #leadership | #financialindependence   PS: All views are personal Vignesh Kumar
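The long-COVID example above, answered as small verifiable hops over structured records rather than one top-k text lookup. All data and field names here are fabricated for illustration; a real system would run these hops as graph queries (e.g. against Neo4j).

```python
# Author -> publication -> trial -> other-condition, as explicit hops.
# The trail itself is kept, which gives the traceability the post asks for.

papers = {"p1": {"authors": ["Dr. Rivera"], "topic": "long COVID", "trial": "t1"}}
trials = {"t1": {"conditions": ["long COVID", "chronic fatigue"]}}

def answer(topic):
    hops = []  # record every hop so the answer can be audited
    for pid, paper in papers.items():
        if paper["topic"] == topic:
            hops.append(("paper", pid))
            for author in paper["authors"]:
                hops.append(("author", author))
            tid = paper["trial"]
            hops.append(("trial", tid))
            for cond in trials[tid]["conditions"]:
                if cond != topic:  # the "other conditions" part of the question
                    hops.append(("other_condition", cond))
    return hops

for kind, value in answer("long COVID"):
    print(kind, "->", value)
```

Each tuple in the trail is independently checkable against the source records, which is what "small, verifiable hops" buys over a single opaque retrieval.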

  • View profile for Anthony Alcaraz

    GTM Agentic Engineering @AWS | Author of Agentic Graph RAG (O’Reilly) | Business Angel

    46,808 followers

    Agentic systems don't just benefit from Small Language Models. They architecturally require them, paired with knowledge graphs. Here's the technical reality most teams miss.

    🎯 The Workload Mismatch
    Agents execute 60-80% repetitive tasks: intent classification, parameter extraction, tool coordination. These need <100ms latency at millions of daily requests. Physics doesn't negotiate. Model size determines speed. But agents still need complex reasoning capability.

    🧠 The Graph Solution
    The breakthrough: separate knowledge storage from reasoning capability. LLMs store facts in parameters. Inefficient. Graph-augmented SLMs externalize knowledge to structured triples (entity-relationship-entity) and use 3-7B parameters purely for reasoning. Knowledge Graph of Thoughts: the same SLM solves 2x more tasks when querying graphs vs. processing raw text. Cost drops from $187 to $5 per task. Multi-hop reasoning becomes graph traversal, not token generation. Token consumption drops 18-30%. Hallucination reduces through fact grounding.

    💰 The Economics
    At 1B requests/year:
    GPT-5 approach: $190K+
    7B SLM + graph infrastructure: $1.5-19K
    One production system: $13M annual savings, 80%→94% coverage by caching knowledge as graph operations.

    ⚡ The Threshold
    Below 3B parameters: models can't formulate effective graph queries.
    Above 3B: models excel at coordinating retrieval and synthesis over structured knowledge.
    Modern 7B models (Qwen2.5, DeepSeek-R1-Distill, Phi-3) now outperform 30-70B models from 2023 on graph-based reasoning benchmarks.

    🏗️ The Correct Architecture
    Production agents converge on this pattern:
    Query → Classifier SLM → Graph construction/update → Specialist SLMs query graph → Multi-hop traversal → Response synthesis → (5% escalate to LLM)
    The graph provides: external memory across reasoning steps, fact grounding to prevent hallucination, and a reasoning scaffold for complex inference.

    🔐 Why This Matters
    Edge deployment: 5GB graph + 7B model runs locally on laptops.
    Privacy: medical/financial data never leaves premises.
    Latency: graph queries are deterministic <50ms operations.
    Updates: modify graph triples without model retraining.
    Real case: a clinical diagnostic agent on a physician's laptop. Patient symptoms → graph traversal → diagnosis in 80ms. Zero external transmission.

    🎓 The Separation of Concerns
    Graphs handle: relationship queries, continuous updates, auditability.
    SLMs handle: query formulation, reasoning coordination, synthesis.
    LLMs conflate both functions in one monolith. This drives their size and cost.

    Agent tasks follow this pattern: understand intent → retrieve structured knowledge → reason over relationships → execute action → update knowledge state. Graphs make each step explicit. SLMs provide coordination intelligence. Together, they outperform larger models on unstructured data at 10-36x lower cost.

    Are you still processing agent tasks with 70B+ models on raw text, or have you separated knowledge (graphs) from reasoning (SLMs)?
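A minimal sketch of the knowledge/reasoning split described above: facts live as triples that can be queried deterministically and edited without retraining. The small model that would formulate these queries is omitted, and the entities and relations below are invented examples.

```python
# Knowledge externalized as (entity, relation, entity) triples.
# Queries are deterministic lookups, not token generation, and the
# knowledge base updates by editing the set, not retraining a model.

triples = {
    ("metformin", "treats", "type-2-diabetes"),
    ("metformin", "interacts_with", "contrast-dye"),
    ("lisinopril", "treats", "hypertension"),
}

def query(entity=None, relation=None):
    """Return matching objects; both filters are optional."""
    return sorted(o for s, r, o in triples
                  if (entity is None or s == entity)
                  and (relation is None or r == relation))

# Update knowledge by adding a triple, with no model retraining:
triples.add(("metformin", "contraindicated_with", "renal-failure"))

print(query(entity="metformin"))
```

The separation of concerns is visible in the code: the data structure carries all the facts, so whatever model sits on top only has to decide which `query(...)` call to make.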

  • View profile for Raphaël MANSUY

    Data Engineering | DataScience | AI & Innovation | Author | Follow me for deep dives on AI & data-engineering

    34,003 followers

    Introducing Microsoft Graph RAG: Enhancing AI's Ability to Summarize Large Text Corpora

    👉 A New Approach to Query-Focused Summarization Based on Knowledge Graphs
    Microsoft researchers have introduced a novel approach called Graph RAG (Retrieval-Augmented Generation) that enhances the capabilities of large language models (LLMs) to answer complex questions over extensive text corpora. This innovative method combines the strengths of RAG systems and query-focused summarization (QFS) to handle "global questions" that require understanding entire datasets, such as identifying main themes within a large collection of documents.

    👉 Overcoming Limitations of Traditional Methods
    Graph RAG addresses the limitations of traditional RAG systems, which struggle with questions that are not about retrieving specific information but rather summarizing broad concepts. It does so by creating a graph-based text index using an LLM, which then allows for the generation of comprehensive and diverse answers by summarizing information from closely related entities within the graph.

    Here's a step-by-step example of how Graph RAG works:
    1. The LLM builds a graph-based text index in two stages:
      - Derive an entity knowledge graph from the source documents
      - Pre-generate community summaries for groups of closely related entities
    2. Given a question, each community summary is used to generate a partial response
    3. All partial responses are then summarized into a final response to the user

    👉 Practical Applications Across Industries
    The implications of this research are significant for industries that rely on the analysis of large volumes of text data, such as:
    - Legal: Summarizing case law and identifying relevant precedents
    - Academic research: Synthesizing scientific papers and identifying research trends
    - Intelligence analysis: Extracting insights from diverse sources of intelligence data
    The ability to quickly summarize and extract themes from large datasets can greatly enhance decision-making processes and strategic planning in these fields and beyond.

    👉 Benefits of Graph RAG
    Graph RAG offers several key benefits compared to traditional methods:
    - Scalability: Can handle much larger datasets than current LLMs
    - Efficiency: Generates comprehensive answers faster by leveraging graph structure
    - Diversity: Produces more diverse answers by considering multiple perspectives from related entities

    👉 Shaping the Future of AI in Text Analysis
    As AI continues to advance, approaches like Graph RAG will play an increasingly important role in transforming data analysis across various industries. By enabling more efficient and effective summarization of large text corpora, Graph RAG and similar systems have the potential to unlock new insights and drive innovation in fields ranging from business to healthcare to scientific research.
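Stage 1 of the index described above, sketched with connected components standing in for the hierarchical Leiden clustering and a string template standing in for the LLM summarizer. Purely illustrative.

```python
# Toy indexing pass: group extracted entities into communities, then
# pre-generate one summary per community. Connected components via
# union-find are a simple stand-in for real hierarchical clustering.

edges = [("alice", "acme"), ("acme", "bob"), ("carol", "dana")]

def communities(edges):
    """Group entities into connected components (toy clustering)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

def summarize(group):
    return "Community: " + ", ".join(sorted(group))  # stand-in for an LLM summary

summaries = [summarize(g) for g in communities(edges)]
print(summaries)
```

Each pre-generated summary later serves as the unit of the query-time map-reduce (step 2 above), so query cost scales with the number of communities, not with corpus size.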

  • View profile for Sohrab Rahimi

    Director, AI/ML Lead @ Google

    23,614 followers

    Many companies have started experimenting with simple RAG systems, probably as their first use case, to test the effectiveness of generative AI in extracting knowledge from unstructured data like PDFs, text files, and PowerPoint files. If you've used basic RAG architectures with tools like LlamaIndex or LangChain, you might have already encountered three key problems:

    𝟭. 𝗜𝗻𝗮𝗱𝗲𝗾𝘂𝗮𝘁𝗲 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻 𝗠𝗲𝘁𝗿𝗶𝗰𝘀: Existing metrics fail to catch subtle errors like unsupported claims or hallucinations, making it hard to accurately assess and enhance system performance.

    𝟮. 𝗗𝗶𝗳𝗳𝗶𝗰𝘂𝗹𝘁𝘆 𝗛𝗮𝗻𝗱𝗹𝗶𝗻𝗴 𝗖𝗼𝗺𝗽𝗹𝗲𝘅 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀: Standard RAG methods often struggle to find and combine information from multiple sources effectively, leading to slower responses and less relevant results.

    𝟯. 𝗦𝘁𝗿𝘂𝗴𝗴𝗹𝗶𝗻𝗴 𝘁𝗼 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗮𝗻𝗱 𝗖𝗼𝗻𝗻𝗲𝗰𝘁𝗶𝗼𝗻𝘀: Basic RAG approaches often miss the deeper relationships between information pieces, resulting in incomplete or inaccurate answers that don't fully meet user needs.

    In this post I will introduce three useful papers to address these gaps:

    𝟭. 𝗥𝗔𝗚𝗖𝗵𝗲𝗰𝗸𝗲𝗿: introduces a new framework for evaluating RAG systems with a focus on fine-grained, claim-level metrics. It proposes a comprehensive set of metrics: claim-level precision, recall, and F1 score to measure the correctness and completeness of responses; claim recall and context precision to evaluate the effectiveness of the retriever; and faithfulness, noise sensitivity, hallucination rate, self-knowledge reliance, and context utilization to diagnose the generator's performance. Consider using these metrics to help identify errors, enhance accuracy, and reduce hallucinations in generated outputs.

    𝟮. 𝗘𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝘁𝗥𝗔𝗚: It uses a labeler and filter mechanism to identify and retain only the most relevant parts of retrieved information, reducing the need for repeated large language model calls. This iterative approach refines search queries efficiently, lowering latency and costs while maintaining high accuracy for complex, multi-hop questions.

    𝟯. 𝗚𝗿𝗮𝗽𝗵𝗥𝗔𝗚: By leveraging structured data from knowledge graphs, GraphRAG methods enhance the retrieval process, capturing complex relationships and dependencies between entities that traditional text-based retrieval methods often miss. This approach enables the generation of more precise and context-aware content, making it particularly valuable for applications in domains that require a deep understanding of interconnected data, such as scientific research, legal documentation, and complex question answering. For example, in tasks such as query-focused summarization, GraphRAG demonstrates substantial gains by effectively leveraging graph structures to capture local and global relationships within documents.

    It's encouraging to see how quickly gaps are identified and improvements are made in the GenAI world.
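Claim-level precision, recall, and F1 in the spirit of the RAGChecker metrics described above, computed on hand-made toy claim sets. A real pipeline would extract and verify the claims with an LLM; here they are given as plain strings.

```python
# Claim-level evaluation: decompose a response into atomic claims and
# score them against ground-truth claims. Toy data, illustrative only.

def claim_metrics(response_claims, gold_claims):
    """Precision: supported claims / claims made. Recall: gold claims covered."""
    supported = response_claims & gold_claims
    precision = len(supported) / len(response_claims) if response_claims else 0.0
    recall = len(supported) / len(gold_claims) if gold_claims else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {"drug X treats condition Y", "trial Z studied drug X"}
resp = {"drug X treats condition Y",
        "drug X was approved in 2019"}  # one unsupported (hallucinated) claim

p, r, f1 = claim_metrics(resp, gold)
print(p, r, round(f1, 3))  # -> 0.5 0.5 0.5
```

Low claim precision with decent recall is exactly the "subtle unsupported claims" failure mode that coarse answer-level metrics miss.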

  • AI Without Context Fails. Graphs Provide the Missing Piece.

    Large Language Models (LLMs) are powerful but not perfect. They often miss the mark when handling domain-specific data. 𝗚𝗿𝗮𝗽𝗵𝗥𝗔𝗚 (𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹 𝗔𝘂𝗴𝗺𝗲𝗻𝘁𝗲𝗱 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻 𝘄𝗶𝘁𝗵 𝗚𝗿𝗮𝗽𝗵𝘀) fixes this by merging LLMs with knowledge graphs, adding context, structure, and trust. It’s the secret to building AI that delivers meaningful, actionable insights.

    How so? Instead of relying only on text chunk searches, it uses graph queries to pull relevant, connected data. This creates smarter AI by tapping into relationships between entities. For example:
    ❓ 𝗖𝗼𝗺𝗽𝗹𝗲𝘅 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻 𝗔𝗻𝘀𝘄𝗲𝗿𝗶𝗻𝗴: GraphRAG goes beyond keyword searches. It understands relationships between products, components, and customers to offer personalized solutions, not just answers.
    📖 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗗𝗶𝘀𝗰𝗼𝘃𝗲𝗿𝘆: Uncover patterns and trends hidden in data. Competitive analysis, risk detection, and market insights become easier with graphs at the core.
    🗣️ 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗹𝗲 𝗔𝗜: Every result comes with an audit trail, fostering trust and transparency.
    👕 𝗣𝗲𝗿𝘀𝗼𝗻𝗮𝗹𝗶𝘇𝗲𝗱 𝗥𝗲𝗰𝗼𝗺𝗺𝗲𝗻𝗱𝗮𝘁𝗶𝗼𝗻𝘀: Deliver hyper-personal recommendations by linking products, customer behavior, and browsing history. This goes way beyond “similar items” in a product catalog.
    🔎 𝗖𝗼𝗻𝘁𝗲𝘅𝘁𝘂𝗮𝗹 𝗦𝗲𝗮𝗿𝗰𝗵: GraphRAG refines queries based on past actions and preferences, ensuring relevant results every time.

    Building knowledge graphs takes expertise and ongoing maintenance. There are a variety of LLM-based tools making this easier, but human oversight is still essential. Quality data is also critical. Inconsistent or messy data weakens GraphRAG’s effectiveness, just like any other AI or data science project.

    GraphRAG is reshaping how we use AI in real-world scenarios. It merges the strengths of LLMs with the structure of graphs for better accuracy, context, and transparency.

    💬 Have you explored using GraphRAG to enhance your GenAI project? Share your experience in the comments.

    ♻️ Know someone struggling with a GenAI project?
Share this post to help them out. 🔔 Follow me, Daniel Bukowski, for daily insights about building with connected data.

  • View profile for Juan Sequeda

    Principal Data Strategist & Researcher at ServiceNow (data.world acq); co-host of Catalog & Cocktails the honest, no-bs, non-salesy data podcast. 20 years working in Knowledge Graphs & Ontologies (way before it was cool)

    20,510 followers

    Lesson 17: Separate Knowledge from Data

    Knowledge is the metadata, semantics, context plane. It consists of business glossary, definitions, domain models, schemas, taxonomy, ontologies, policy, business rules. It’s relatively smaller than the data plane and is what drives governance. It makes sense to manage it as a knowledge graph (Lesson 16).

    Data are the facts. They consist of databases, data lakes, text, files, streams. The data plane is larger, distributed, heterogeneous, and usually optimized for storage/performance.

    How do they connect? Through shared identifiers (Lesson 14) and mappings (Lesson 15).

    Where is the boundary? One person’s data is another person’s metadata. You are starting to treat knowledge as a first-class citizen when you realize that reference data should be part of the knowledge plane. Maybe not all of it from the beginning (don’t boil the ocean, Lesson 6), but at least the things that should be reused (Lesson 9): units, diseases, etc.

    When you are building and designing your knowledge graph, a key principle is optionality. Keep data where it is and don’t force everything into one storage system. Move data depending on use case.

    One approach is virtualization. When the knowledge graph is accessed, the mappings are used to rewrite the semantic query into a query for the source system (i.e. SQL). That was the core of my PhD (SPARQL to SQL translation).

    Another approach is materialization. The mappings are used to do ETL (the mappings are the T), translate to a graph, and load it into a graph database. Useful for highly connected data and graph analytics use cases. In reality, most organizations will do both.

    I’m seeing several pragmatic patterns emerge (don’t be pedantic, Lesson 5):

    First, use identifiers that represent concepts, attributes, relationships in the ontology, or reference data, and connect them to tables/columns in the schema through tags. E.g. if you have a column with the label “temperature”, it also has a tag that points to “Degree Celsius” qudt:DEG_C.

    Second, inject identifiers into the data. Don’t just store the string “diabetes”. Store the identifier for that concept. Extend the table to have both the identifier and the label. E.g. store “Diabetes mellitus type 1 stage 1” and its identifier snomed:721111000124107. This eliminates ambiguity and turns strings → things.

    Third is what I call “knowledge tables”. The ontology is managed and governed centrally and drives the creation of the physical data schema. For example, each table represents a concept. Data is stored in 3NF, in other words the Bill Inmon model. Instead of a physical knowledge “graph”, you have a physical knowledge “table”. Works well for relational-first teams building data lakes (silver layer).

    This is what I mean by separating the knowledge and data. Store the data in whatever way is needed. Just make sure it’s connected with the knowledge.

    Final takeaway:
    - Separate concerns
    - Manage knowledge separately. Connect it to data flexibly
    - Meet users where they are
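The "strings → things" pattern (the second one above) can be sketched as a small enrichment step: raw rows store labels, and a governed reference map from the knowledge plane injects stable concept identifiers next to them. The diabetes code echoes the post's example; the hypertension code, table layout, and function name are my illustrative additions.

```python
# Inject concept identifiers next to label strings so downstream systems
# join on stable IDs from the knowledge plane instead of ambiguous text.

reference = {  # governed centrally in the knowledge plane, reused across datasets
    "Diabetes mellitus type 1 stage 1": "snomed:721111000124107",
    "Hypertension": "snomed:38341003",   # illustrative code, verify before use
}

rows = [{"patient": 1, "diagnosis": "Diabetes mellitus type 1 stage 1"},
        {"patient": 2, "diagnosis": "Hypertension"}]

def inject_ids(rows, column, reference):
    """Extend each row with the concept identifier for a label column."""
    out = []
    for row in rows:
        enriched = dict(row)
        enriched[column + "_id"] = reference.get(row[column])  # None if unmapped
        out.append(enriched)
    return out

for row in inject_ids(rows, "diagnosis", reference):
    print(row)
```

Unmapped labels surface as `None` rather than failing, so gaps in the reference data become visible data-quality signals instead of silent mismatches.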
