KNOWLEDGE MODELLING IN MODELAI
ModelAI, learning from experience
Currently our main use case for the knowledge repository is to act as a "local" source of knowledge for Agent-based systems (supporting archetypes like RAG and Graph RAG). To do this it provides a conceptual/semantic model of aspects of an organisation's business, or of elements of the environment that surrounds it. The goal is to provide precision where that is hard to deliver in a given LLM, i.e. to reduce the impact of "hallucinations" through better knowledge quality and observability, with other benefits such as support for risk/regulatory reporting (supporting the build of Context Graphs).
Using Conceptual Spaces as a base provides a good foundation for developing the knowledge repository, which can be used in delivering ModelAI solutions and supporting multiple types of reasoning, search and so on. I have already touched on how they can provide short- and long-term memory solutions for an Agentic AI.
As an example, we have developed a Conceptual Space for a large part of the BIAN conceptual model, extended to include an OWL ontology that supports reasoning in Financial Services, and a separate Enterprise Architecture Conceptual Space based on ISO/IEC/IEEE 42010 to validate the basic architecture. We still have automated generation of downstream products from the conceptual model. To achieve aims such as observability and support for reasoning engines, we make explicit elements of an AI model that typically sit in the LLM but are not directly accessible. In fact, current research suggests that:
Conceptual spaces:
LLMs:
So, whilst LLMs may implement conceptual spaces without explicit axes, which is powerful, it is less interpretable, and one of our aims is observability. We therefore still believe that making elements of the model explicit is a valid approach, and that starting the process with a conceptual model is a perfect fit. It was in developing some of the larger models we use (BIAN, for example) that scaling Conceptual Spaces/ModelAI highlighted some interesting gaps in how to develop AI models.
To summarize what we have learned:
So, how can we develop solutions to these issues:
Abstraction
A region in a Conceptual Space is:
A well-designed system doesn't just have regions; it has layers of regions, just like an object hierarchy. Each level:
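The idea of layered regions can be sketched in code. This is a minimal, illustrative model only: it assumes a region can be approximated by a labelled prototype point plus a radius, with sub-regions nested inside coarser ones (the names and coordinates are invented for the example).

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    """A region in a conceptual space, sketched as a labelled prototype
    point plus a radius (a simple convex region)."""
    name: str
    prototype: tuple          # a point in the quality-dimension space
    radius: float
    children: list = field(default_factory=list)  # finer-grained sub-regions

    def contains(self, point):
        dist = sum((a - b) ** 2 for a, b in zip(self.prototype, point)) ** 0.5
        return dist <= self.radius

# Layers of regions: "animal" abstracts over "dog" and "cat".
dog = Region("dog", prototype=(1.0, 1.0), radius=0.5)
cat = Region("cat", prototype=(2.0, 1.0), radius=0.5)
animal = Region("animal", prototype=(1.5, 1.0), radius=1.5, children=[dog, cat])

point = (1.1, 0.9)  # an observed item
print(animal.contains(point))                                    # coarse level: True
print([c.name for c in animal.children if c.contains(point)])    # finer level: ['dog']
```

The same point can be classified at the coarse level ("animal") or the fine level ("dog"), which is exactly what an abstraction hierarchy over regions buys us.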
Context
Context is a transformation on a conceptual space that selects, reshapes, and interprets regions. More concretely:
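One common formalisation of this (following Gärdenfors) treats context as salience weights on the quality dimensions: the same points sit in the space, but context rescales the metric, so which items count as similar changes. A minimal sketch, with invented dimensions and weights:

```python
import math

def weighted_distance(p, q, weights):
    """Distance under a context: each quality dimension is rescaled by a
    salience weight before the usual Euclidean distance is taken."""
    return math.sqrt(sum(w * (a - b) ** 2 for a, b, w in zip(p, q, weights)))

# Dimensions: (size, domesticity). Points are illustrative only.
dog, wolf = (0.4, 0.9), (0.5, 0.1)

# In a "pet" context domesticity is highly salient; in a "biology"
# context size dominates, so dog and wolf move far apart or close together.
pet_context = (0.1, 1.0)
biology_context = (1.0, 0.1)

print(weighted_distance(dog, wolf, pet_context))      # larger: different concepts
print(weighted_distance(dog, wolf, biology_context))  # smaller: near-identical
```

The transformation here is deliberately simple (a diagonal rescaling); the point is that context reshapes distances, and therefore regions, without moving any underlying data.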
So Context doesn't just choose a region; it reshapes the space so that the region makes sense. Incorporating Context and Abstraction reshapes the emerging knowledge repository architecture, which now has four core layers:
1. Vector space layer
Here, items are represented as points or distributions in a shared vector space.
This layer stores the geometric substrate.
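A minimal sketch of this substrate layer, assuming it only needs to store points and answer similarity queries, with no concept boundaries yet (item names and vectors are illustrative):

```python
import math

class VectorLayer:
    """Layer 1: the geometric substrate. Items are points in a shared
    vector space; the layer stores points and answers nearest-neighbour
    queries, nothing more."""
    def __init__(self):
        self.points = {}  # item id -> vector

    def add(self, item_id, vector):
        self.points[item_id] = vector

    def nearest(self, query, k=1):
        ranked = sorted(self.points.items(),
                        key=lambda kv: math.dist(query, kv[1]))
        return [item_id for item_id, _ in ranked[:k]]

layer = VectorLayer()
layer.add("dog", (1.0, 1.0))
layer.add("cat", (2.0, 1.0))
layer.add("hammer", (9.0, 0.0))
print(layer.nearest((1.2, 0.8)))  # -> ['dog']
```

In a production repository this would be an embedding store or vector database; the sketch only shows the contract the higher layers rely on.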
2. Region formation layer
This builds regions over the raw space.
Mechanisms could include:
Concepts like “dog,” “tool,” or “danger” emerge as usable groupings. We build this from "classic" Conceptual Modelling.
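One concrete region-formation mechanism, sketched below under simple assumptions: collapse labelled examples into prototypes, then assign any point to the region of its nearest prototype. This Voronoi-style tessellation is the construction Gärdenfors uses to obtain convex regions; the labels and coordinates are invented.

```python
def prototype(examples):
    """Collapse labelled examples into a prototype (their centroid)."""
    dims = len(examples[0])
    return tuple(sum(e[i] for e in examples) / len(examples) for i in range(dims))

def classify(point, prototypes):
    """Voronoi-style region membership: a point belongs to the region of
    its nearest prototype, which yields convex regions."""
    return min(prototypes,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(point, prototypes[name])))

protos = {
    "dog":  prototype([(1.0, 1.0), (1.2, 0.8)]),
    "tool": prototype([(8.0, 0.2), (9.0, 0.0)]),
}
print(classify((1.5, 0.9), protos))  # -> 'dog'
```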
3. Abstraction hierarchy
This organizes regions into multiple levels, still representing Concepts.
For example:
This layer manages complexity by allowing the system to reason at:
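The hierarchy itself can be as simple as parent links between regions, letting the system "lift" a concept to a coarser level on demand. A toy sketch, with invented concept names:

```python
# Parent links encode the abstraction hierarchy over regions.
PARENT = {
    "dog": "animal", "cat": "animal",
    "hammer": "tool", "saw": "tool",
    "animal": "entity", "tool": "entity",
}

def lift(concept, levels=1):
    """Move a concept up the abstraction hierarchy by `levels` steps."""
    for _ in range(levels):
        concept = PARENT.get(concept, concept)
    return concept

print(lift("dog"))            # -> 'animal'
print(lift("dog", levels=2))  # -> 'entity'
```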
4. Context and control layer
This decides which abstraction level to use right now.
For instance:
This is crucial, because fixed abstraction is too rigid.
It solves the main complexity problems:
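A toy sketch of the control decision, assuming tasks map to preferred abstraction levels (the task names and the fixed policy table are purely illustrative; a real control layer would choose dynamically):

```python
# The control layer picks an abstraction level for the task at hand,
# then answers at that level, instead of fixing one level globally.
PARENT = {"dog": "animal", "animal": "entity"}

def abstraction_level_for(task):
    """A toy policy: each task prefers a different abstraction level."""
    return {"species_lookup": 0, "household_planning": 1, "inventory": 2}.get(task, 0)

def describe(concept, task):
    level = abstraction_level_for(task)
    for _ in range(level):
        concept = PARENT.get(concept, concept)
    return concept

print(describe("dog", "species_lookup"))      # -> 'dog'
print(describe("dog", "household_planning"))  # -> 'animal'
print(describe("dog", "inventory"))           # -> 'entity'
```

The same underlying knowledge yields three different answers depending on what the current task needs, which is exactly why a fixed abstraction level is too rigid.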
We are currently extending/building this architecture for several prototype Conceptual Space implementations, including our BIAN model and an EA Ontology. This will allow further validation of the emerging architecture. But AI developments arrive quickly:
New research directions, where the thinking is taking us.
1. Categories
A category is a structured region in a conceptual space, whose shape, boundaries, and interpretation depend on similarity, context, and task. It would be good to have a more formal definition of any transforms/interpretations we can apply to such a region.
More explicitly:
Our question: does using Category Theory help in developing formal reasoning models that exploit Categories?
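To make the question concrete, here is a toy reading of the category-theoretic machinery being asked about: concept regions as objects, maps between them as morphisms, and morphism composition tying them together. This is only an illustration of the machinery, not a claim that it pays off; the concept names are invented.

```python
# Objects: concept labels standing in for regions.
# Morphisms: maps between them. Composition must be associative.
def compose(f, g):
    """Morphism composition: (f . g)(x) = f(g(x))."""
    return lambda x: f(g(x))

# Two illustrative morphisms between concept labels.
to_animal = {"dog": "animal", "cat": "animal"}.get      # Dog -> Animal
to_entity = {"animal": "entity", "tool": "entity"}.get  # Animal -> Entity

dog_to_entity = compose(to_entity, to_animal)
print(dog_to_entity("dog"))  # -> 'entity'
```

Whether formalising region transforms as morphisms like this actually buys new reasoning power is exactly the open question above.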
2. Expectations
Incorporating the new work on reasoning and Expectations (see the Gärdenfors/Osta-Vélez paper, "Reasoning with Concepts").
Outline: Context is not just about active concepts. It is:
Example: if context = “home + pet situation”, then expectations look like:
High probability:
- Dog → Pet
- Pet → HumanBond
Low probability:
- Dog → Predator
- Dog → Insult
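The expectation table above can be encoded directly as context-conditional strengths and queried at inference time. A minimal sketch; the probability values and threshold are illustrative assumptions:

```python
# Context-dependent expectations: (source concept, expected concept) -> strength.
EXPECTATIONS = {
    "home + pet situation": {
        ("Dog", "Pet"): 0.9,
        ("Pet", "HumanBond"): 0.8,
        ("Dog", "Predator"): 0.05,
        ("Dog", "Insult"): 0.02,
    },
}

def expect(context, source, threshold=0.5):
    """Return the concepts a source concept leads us to expect in a context."""
    table = EXPECTATIONS.get(context, {})
    return sorted(t for (s, t), p in table.items() if s == source and p >= threshold)

print(expect("home + pet situation", "Dog"))  # -> ['Pet']
```

In the "home + pet" context, Dog strongly raises the expectation of Pet, while the Predator and Insult readings fall below threshold, matching the high/low-probability lists above.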
Expectations also offer a way of developing new concepts from existing concepts, perhaps as a way of building new learning capabilities. Once we understand a bit more about how these work in our knowledge repository, we will integrate the code into the current Knowledge Repository codebase.