From the course: Agentic AI Human-Agent Collaboration Design Patterns


Knowledge limit reporting

Depending on how they are used and where they are deployed, agents will, sooner or later, need to deal with situations that fall outside their training or capabilities. In these moments, the agent's LLM might try to be helpful by guessing. However, as we also know, that can lead to hallucinations, which is not something we want. Let's look at an example. Say an agent is designed to scan vendor contracts to ensure they align with a company's standard legal requirements, like for indemnity or privacy. The agent relies on a database of approved legal language and current regulatory requirements. Now the agent is reviewing a contract that contains a brand new type of sovereign data clause, one that resulted from a new law that was just recently passed. Because this legal language is entirely new, it doesn't exist in the agent's RAG data or its internal training. In other words, the agent is encountering an unfamiliar situation. On its own, the…
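To make the pattern concrete, here is a minimal sketch of knowledge limit reporting in Python. All names, the stub retriever, and the similarity threshold are hypothetical, not from the course: the idea is simply that when retrieval against the approved legal language comes back with low confidence, the agent reports its limit instead of guessing.

```python
# Hypothetical sketch: when no retrieved match is close enough, the agent
# reports a knowledge limit rather than generating an answer (a guess).
from dataclasses import dataclass


@dataclass
class RetrievalResult:
    text: str
    score: float  # similarity to approved legal language, 0.0-1.0


def review_clause(clause: str, retrieve, threshold: float = 0.75) -> dict:
    """Answer only when grounded in retrieval; otherwise report the limit."""
    matches: list[RetrievalResult] = retrieve(clause)
    best = max((m.score for m in matches), default=0.0)
    if best < threshold:
        # Unfamiliar territory: escalate to a human instead of hallucinating.
        return {
            "status": "knowledge_limit",
            "message": "Clause falls outside approved legal language; "
                       "escalating to a human reviewer.",
            "confidence": best,
        }
    return {
        "status": "reviewed",
        "confidence": best,
        "evidence": [m.text for m in matches if m.score >= threshold],
    }


# Stub retriever standing in for the agent's RAG lookup: it finds only a
# weak match for the novel sovereign data clause.
def stub_retrieve(clause: str) -> list[RetrievalResult]:
    return [RetrievalResult("standard indemnity clause", 0.31)]


result = review_clause("novel sovereign data clause", stub_retrieve)
print(result["status"])  # → knowledge_limit
```

The design choice here is that the agent's output is structured: a `status` field tells the calling workflow whether the result is a grounded review or a knowledge limit report, so a human handoff can be triggered automatically.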