From the course: Agentic AI Human-Agent Collaboration Design Patterns
Agent role classification
A common issue when designing agentic AI systems is creating agents that overreach. What we mean by this is that the agent is capable of doing something that we don't want it to do, or that we don't want it to do under certain circumstances. Let's say a stock trading firm uses a trading agent built to buy and sell stocks based on how it reads the market. One day it notices a specific stock's price rapidly declining. The agent reacts quickly, rapidly selling off all the shares that one of the firm's clients holds in that stock. However, the decline was only temporary. It was what's called a flash crash, where a technical glitch causes a short-lived price fluctuation. In this case, the stock price rebounded only minutes later. A human analyst later notices this and determines that the agent made a big mistake: it acted on reflexive logic instead of reflecting on the decision…
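One way to keep an agent from overreaching like this is to classify each proposed action by impact and route high-impact actions to a human instead of executing them reflexively. The sketch below is a minimal illustration of that idea; all names, fields, and thresholds (`TradeAction`, `portfolio_share`, the 25% cutoff) are hypothetical, not part of any real trading system.

```python
from dataclasses import dataclass

@dataclass
class TradeAction:
    symbol: str
    side: str               # "buy" or "sell"
    quantity: int
    portfolio_share: float  # fraction of the client's position affected

def classify_action(action: TradeAction, share_threshold: float = 0.25) -> str:
    """Return 'autonomous' for low-impact trades, 'needs_approval' otherwise."""
    # Selling a large fraction of a position is high impact: a human should
    # confirm it isn't a reflexive reaction to something like a flash crash.
    if action.side == "sell" and action.portfolio_share >= share_threshold:
        return "needs_approval"
    return "autonomous"

def handle(action: TradeAction) -> str:
    # Low-impact actions execute autonomously; high-impact ones are queued
    # for a human analyst to review before anything irreversible happens.
    if classify_action(action) == "needs_approval":
        return f"queued for human review: {action.side} {action.quantity} {action.symbol}"
    return f"executed: {action.side} {action.quantity} {action.symbol}"

small = TradeAction("ACME", "sell", 100, portfolio_share=0.05)
large = TradeAction("ACME", "sell", 10_000, portfolio_share=1.0)
print(handle(small))
print(handle(large))
```

With a guardrail like this, the flash-crash sell-off in the example above would have been held for review, and the analyst could have let the temporary dip pass instead of unwinding the client's entire position.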