Shining a Light on Knowledge

We are still learning about the potential and limitations of LLMs, vector embeddings and their associated architectures. As we learn more about these capabilities, it is becoming clear that to build robust systems we need to add specific capabilities to an LLM architecture - specifically, capabilities such as RAG and Graph RAG, which support the optimisation and, to some extent, the precision of LLM queries. As architects we should always ask the question: "What architecture and technology options do we have to integrate LLMs into our enterprise capabilities?"

Let’s examine some of the possible options we have and see what we can learn from such an exercise.

Legacy Architecture + LLMs with Enhancements

We can summarise a selection of possible options:

1.      Retrieval Augmented Generation

Adding a specific data source to the LLM query to provide a "trusted" source for elements of the answer. This source could be local or internet hosted, and it gives the LLM query data currency and precision.
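The retrieval step can be sketched in a few lines. This is a minimal illustration only: the keyword-overlap retriever stands in for a production vector search, and the document set and question are invented examples.

```python
# Minimal RAG sketch: retrieve from a "trusted" source and prepend it to the
# prompt. The retriever below uses naive keyword overlap purely for
# illustration; a real system would use vector similarity search.

TRUSTED_DOCS = [
    "The 2024 base rate was set to 5.25% by the central bank.",
    "Product X reached end-of-life in January 2025.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank trusted documents by keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]

def augmented_prompt(question: str) -> str:
    """Prepend retrieved context so the LLM answers from the trusted source."""
    context = "\n".join(retrieve(question, TRUSTED_DOCS))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(augmented_prompt("When did Product X reach end-of-life?"))
```

The augmented prompt is then sent to the LLM as usual; the model's answer is grounded in the retrieved context rather than in its training data alone.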


 

2.      Graph RAG

Aimed at higher accuracy and more complete answers from LLMs. Here we add an additional knowledge store containing a knowledge graph built from an existing data set.

The graph's structure and semantics are critical - the graph is only a view of the underlying data. Two design questions follow:

·       How to extract the appropriate information

·       How to structure the knowledge graph to ensure the correct semantics are represented - perhaps an ontological view.
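The retrieval side of Graph RAG can be sketched as pulling a subgraph around the entities mentioned in a question and handing those facts to the LLM as context. The triples and entity names below are invented for illustration.

```python
# Toy Graph RAG sketch: a knowledge graph held as (subject, relation, object)
# triples; we extract the subgraph around a query entity and serialise it as
# textual facts for the LLM prompt. All names are illustrative.

TRIPLES = [
    ("AcmeBank", "offers", "MortgageProduct"),
    ("MortgageProduct", "regulated_by", "FCA"),
    ("AcmeBank", "headquartered_in", "London"),
]

def subgraph_for(entity: str, triples):
    """Collect every triple that mentions the entity - the 'view' handed to the LLM."""
    return [t for t in triples if entity in (t[0], t[2])]

def graph_context(question_entities: list[str]) -> str:
    """Serialise the relevant subgraph as plain-text facts for the prompt."""
    facts = []
    for e in question_entities:
        facts.extend(f"{s} {r} {o}" for s, r, o in subgraph_for(e, TRIPLES))
    return "Known facts:\n" + "\n".join(dict.fromkeys(facts))

print(graph_context(["MortgageProduct"]))
```

Note that the hard problems flagged above - what to extract and how to structure the semantics - sit upstream of this lookup; the code only shows why a well-structured graph pays off at query time.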



Agent Architectures

Agentic AI gives us one possible architecture showing how to exploit LLM capabilities.

1.      Agentic AI

Adds the ability to manage a level of process flow and proactive behaviour to an LLM architecture - typically a "Chain of Thought" based process where the LLM is the cognitive system driving the decision-making process.
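The control loop at the heart of such an architecture can be sketched as follows. The `decide()` function here is a stub standing in for the LLM call that would choose the next action; the goal and action names are invented.

```python
# Illustrative agentic loop: observe history, ask the "cognitive system" for
# the next action, execute, repeat. decide() is a stand-in for an LLM prompt
# asking "given the goal and what has been done, what is the next step?"

def decide(goal: str, history: list[str]) -> str:
    """Stub for the LLM as decision-maker; a real system would prompt a model."""
    if "looked_up_balance" not in history:
        return "looked_up_balance"
    if "checked_limit" not in history:
        return "checked_limit"
    return "done"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Chain-of-thought style process flow, bounded to avoid infinite loops."""
    history: list[str] = []
    for _ in range(max_steps):
        action = decide(goal, history)
        if action == "done":
            break
        history.append(action)
    return history

print(run_agent("approve payment"))
```

The step bound (`max_steps`) is one of the guard rails such loops need in practice, since the LLM-driven decision step is not guaranteed to terminate on its own.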


2.      Heterogeneous Agent Architectures

Agentic AI is just one possible agent archetype vis-à-vis a broader collection of possible heterogeneous agent architectures.

Individual agents are "specialists" in a given domain, operating as part of a distributed network and working together to achieve a set of overall goals across a given set of domains.

Agent-to-agent communication is typically asynchronous, and agent-specific information is held in a knowledge base and distributed through a communications hub/access layer. The communication flows require some thought, though standards are now emerging, such as Google's Agent2Agent protocol.
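The hub-based asynchronous flow can be sketched with standard-library queues. The agent names and message shapes below are invented; a real deployment would follow an emerging standard such as the Agent2Agent protocol rather than this ad-hoc routing.

```python
import asyncio

# Sketch of asynchronous agent-to-agent messaging via a central hub: the hub
# routes each message to the addressed specialist agent's inbox. Names and
# message formats are illustrative only.

async def hub(bus: asyncio.Queue, inboxes: dict[str, asyncio.Queue]):
    """Route messages from the shared bus to the addressed agent; None shuts down."""
    while True:
        msg = await bus.get()
        if msg is None:
            break
        inboxes[msg["to"]].put_nowait(msg)

async def pricing_agent(inbox: asyncio.Queue, results: list[str]):
    """A 'specialist' agent: handles one message from its own domain."""
    msg = await inbox.get()
    results.append(f"pricing handled: {msg['body']}")

async def main() -> list[str]:
    bus: asyncio.Queue = asyncio.Queue()
    inboxes = {"pricing": asyncio.Queue()}
    results: list[str] = []
    router = asyncio.create_task(hub(bus, inboxes))
    agent = asyncio.create_task(pricing_agent(inboxes["pricing"], results))
    await bus.put({"to": "pricing", "body": "quote request"})
    await agent          # specialist finishes its work
    await bus.put(None)  # shut the hub down
    await router
    return results

print(asyncio.run(main()))
```

Because each agent only sees its own inbox, new specialists can be added by registering another inbox with the hub, without touching existing agents.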


Observation

In either set of archetypes, a "local" or domain-specific knowledge base must exist, the purpose of which is typically to provide context, domain knowledge and precision.

So, what about designing and building our “local” information sources?

In his article "The Einstein of LLMs" (https://www.forbes.com/sites/johnwerner/2025/03/10/the-einstein-of-llms/), John Werner gave a great description of the potentially fundamental limitations of LLMs and the future development of AI.

This leads to the question: how can we use extended AI knowledge models to build the local knowledge models we require?

Knowledge Representation

So, the question "How best to build a knowledge model?" is not a new one, but one that has been extensively researched over the years in the AI literature. Any technique we use must lead to the development of a semantic model that provides a base for AI development, as well as supporting a route from legacy systems to a new knowledge-based structure. Let's take a brief look at one promising option:

Conceptual Spaces

(See Conceptual Spaces: The Geometry of Thought, P. Gärdenfors, 2004)

"There are currently two dominating approaches to the problem of modelling representations. The symbolic approach starts with the assumption that cognitive systems can be described as Turing machines. The second approach is associationism, where associations among different information elements carry the burden of representation. Connectionism is a special case of associationism that models associations using artificial neural networks. Both have their advantages and disadvantages, and should really be seen as complementary approaches.

Conceptual spaces introduce a geometric form that attempts to provide a more natural way of representing information for cognitive modelling. It complements the aforementioned approaches as part of a multi-layer approach that represents cognition as three levels of representation, with different scales of resolution." - P. Gärdenfors, 2004

Principles

Conceptual spaces support properties, hierarchies and concepts/domains, with a basis in cognitive modelling.

·       Presents a geometric underpinning, with the possibility of using consistency and geometric topologies as a content quality check/visualisation.

·       Similarity through vectors, as we see with LLMs.

·       Provides a bridge between legacy and AI, through modelling concepts/domains in legacy architectures or the use of industry standard models, e.g. BIAN.

Thus the knowledge repository will have multiple layers to support this "scales of resolution" principle.
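The geometric core of a conceptual space can be sketched very simply: concepts are regions around prototype points in a domain's quality dimensions, and categorisation is nearest-prototype lookup. The "taste" domain and fruit coordinates below are invented examples in the spirit of Gärdenfors.

```python
import math

# Minimal conceptual-spaces sketch: points in a domain's quality dimensions,
# with similarity as geometric distance and concepts as nearest-prototype
# (Voronoi) regions. The domain and coordinates are illustrative.

TASTE_DOMAIN = ("sweetness", "acidity")  # quality dimensions of one domain

PROTOTYPES = {                           # prototype point per concept
    "apple":  (0.6, 0.5),
    "lemon":  (0.2, 0.9),
    "banana": (0.8, 0.1),
}

def similarity_distance(a: tuple, b: tuple) -> float:
    """Similarity as Euclidean distance in the quality dimensions."""
    return math.dist(a, b)

def classify(point: tuple) -> str:
    """Nearest-prototype categorisation: each prototype's cell is a concept."""
    return min(PROTOTYPES, key=lambda name: similarity_distance(point, PROTOTYPES[name]))

print(classify((0.75, 0.2)))  # falls in the banana prototype's region
```

This geometric view is what gives the quality-check and visualisation possibilities listed above: a concept whose members scatter across several prototypes' regions is a visible modelling smell.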

             


                                                          

We can also model a world where multiple "truths" can exist (via the domain structure in conceptual spaces). Truth is not absolute: many things we believe to be true may eventually be proven false, and the model can support the presence of multiple beliefs.


Knowledge is Dynamic

Knowledge is a dynamic, continually changing entity. Typically, it is consensus that defines the truth or otherwise of concepts. This is a crucial property for any knowledge model to capture - perhaps as a measure of the certainty of concepts, or of the concept context.
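One way to carry such a certainty measure is to let each concept hold belief scores from multiple sources and treat consensus as their aggregate, which can shift as new evidence arrives. The concept, scores and aggregation rule below are all illustrative assumptions.

```python
from statistics import mean

# Sketch of consensus-driven certainty: a concept's "truth" is the mean of
# belief scores contributed by sources, so it drifts as evidence accumulates.
# The concept and the scores are invented for illustration.

beliefs = {"pluto_is_a_planet": [1.0, 1.0, 0.9]}  # early consensus

def certainty(concept: str) -> float:
    """Consensus as the mean of all recorded belief scores."""
    return mean(beliefs[concept])

def observe(concept: str, score: float) -> None:
    """New evidence updates the consensus rather than overwriting old belief."""
    beliefs[concept].append(score)

observe("pluto_is_a_planet", 0.0)  # later reclassification, say
observe("pluto_is_a_planet", 0.1)
print(round(certainty("pluto_is_a_planet"), 2))
```

A production model would likely weight sources by reliability and recency rather than using a flat mean, but the principle - certainty as a first-class, revisable property of a concept - is the same.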

 

Knowledge Repository Architecture

The architecture is based on the requirement that we have automatic transformation in and out of the view(s) from the core conceptual model. It is this principle that allows us to create a usable, maintainable repository for now and the future.

 


 

Integration Options

Thus, we could in effect access the same underlying semantic data:

·       As a vector database

·       As a graph

·       As a broader knowledge base (APIs, events etc.) in support of agent-based architectures, e.g. Agentic AI.
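The "one core, many views" idea can be sketched by deriving both a graph view and a vector view from the same records. The record shape, entities and embeddings below are invented; real view builders would target an actual graph store and vector database.

```python
# Sketch of one semantic core exposed through multiple views: the core holds
# (subject, relation, object, embedding) records, and each view is derived
# from it by a transformation. All names and vectors are illustrative.

CORE = [
    {"s": "CustomerA", "r": "holds", "o": "AccountX", "vec": (0.1, 0.9)},
    {"s": "AccountX", "r": "type", "o": "Savings", "vec": (0.7, 0.2)},
]

def as_graph(core: list[dict]) -> dict:
    """Graph view: adjacency list of labelled edges per subject."""
    graph: dict[str, list[tuple[str, str]]] = {}
    for rec in core:
        graph.setdefault(rec["s"], []).append((rec["r"], rec["o"]))
    return graph

def as_vectors(core: list[dict]) -> dict:
    """Vector view: one embedding per fact, ready for similarity search."""
    return {f'{r["s"]}-{r["r"]}-{r["o"]}': r["vec"] for r in core}

print(as_graph(CORE))
print(as_vectors(CORE))
```

Because both views are derived, not hand-maintained, updating the core keeps the vector database, the graph and any agent-facing API consistent by construction - the "automatic transformation" requirement stated above.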

 

Reasoning

Typically, with an LLM we can employ an inductive reasoning approach; adding in the symbolic model supports a level of deductive reasoning.

Or perhaps we add to the concept model the requisite support for an agent architecture such as BDI, with its means-end reasoning (plans, intentions and beliefs) for agentic systems. This is a better fit for end-user applications and customer-centric models supporting, say, one-to-one marketing capabilities.
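The BDI cycle sketched below shows the means-end shape: beliefs filter which desires become intentions, and each intention selects a plan. The banking desires, beliefs and plan steps are invented for illustration.

```python
# Toy BDI (belief-desire-intention) deliberation cycle. Beliefs gate which
# desires are adopted as intentions; each intention is then mapped to a plan
# (means-end reasoning). All domain content here is illustrative.

BELIEFS = {"customer_segment": "saver", "has_mortgage": False}

DESIRES = ["cross_sell_mortgage", "offer_savings_bonus"]

PLANS = {
    "cross_sell_mortgage": ["check_eligibility", "send_offer"],
    "offer_savings_bonus": ["check_balance", "apply_bonus"],
}

def applicable(desire: str, beliefs: dict) -> bool:
    """Belief-based filtering: only pursue desires consistent with beliefs."""
    if desire == "cross_sell_mortgage":
        return not beliefs["has_mortgage"]
    return beliefs["customer_segment"] == "saver"

def deliberate(desires: list[str], beliefs: dict) -> dict:
    """Commit to intentions, then select the plan that achieves each one."""
    intentions = [d for d in desires if applicable(d, beliefs)]
    return {i: PLANS[i] for i in intentions}

print(deliberate(DESIRES, BELIEFS))
```

Holding beliefs, desires and plans in the concept model (rather than in agent code) is what lets the same knowledge repository drive customer-centric behaviour such as one-to-one marketing.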

Summary

LLMs have shown what can be achieved using AI, but focusing on such technologies at the expense of the broader range of options we already have risks failing to exploit the full potential of AI. A correctly built model, suitable for one or more domains, will increase the precision and robustness of the AI tooling an organisation might deploy.

 

 

