Agents & Environments

An agent is anything that perceives its environment through sensors, processes the information, and takes actions through actuators (or effectors) to influence the environment. 

A software agent receives input through file contents, network traffic, or user interactions (such as keyboard, mouse, touchscreen, or voice commands). It responds by writing files, sending network messages, or displaying information on the screen or through audio output.

Here is an AI agent architecture with sensors (user input, APIs) and actuators (database, local files, output actions):

[Image: AI agent with sensors and actuators]

Agent Function

An agent function is an abstract mathematical description that maps any sequence of percepts (inputs) received by an agent to a specific action. It defines what action the agent should take in response to its environment. You can think of the agent function as a large lookup table that lists the appropriate action for every possible input sequence. For example, in the case of an intelligent vacuum cleaner:

If the agent perceives “dust detected,” the function maps this input to the action “start cleaning.”

[Image: Agent functions]
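The lookup-table view of an agent function can be sketched in a few lines of Python. This is a minimal illustration, not a real library: the percept and action names are invented for the vacuum-cleaner example.

```python
# A sketch of an agent function as a lookup table: every possible
# percept sequence is mapped to an action. The percept and action
# names here are illustrative assumptions.
AGENT_FUNCTION = {
    ("dust detected",): "start cleaning",
    ("clean",): "move to next square",
    ("clean", "dust detected"): "start cleaning",
}

def agent_function(percept_sequence):
    """Map a sequence of percepts to an action."""
    return AGENT_FUNCTION.get(tuple(percept_sequence), "do nothing")

print(agent_function(["dust detected"]))  # start cleaning
```

In practice such a table would be astronomically large, which is why real agents implement the mapping as a program instead (see below).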

Agent Program

An agent program is the actual implementation of the agent function. It is the software that runs on the agent's hardware and determines what action the agent should take based on the sequence of percepts. While the agent function is an abstract mapping of percepts to actions, the agent program is the executable code that realises this behaviour, whether through conditions, rules, algorithms or learned models.
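The distinction can be made concrete: instead of an explicit lookup table, an agent program realises the same mapping with executable rules. The two-square vacuum world and its percept format below are illustrative assumptions.

```python
# A sketch of an agent program for a two-square vacuum world.
# A percept is a (location, status) pair, e.g. ("A", "dirty").
# The rules below realise the agent function as running code.
def vacuum_agent_program(percept):
    location, status = percept
    if status == "dirty":
        return "suck"
    elif location == "A":
        return "move right"
    else:
        return "move left"

print(vacuum_agent_program(("A", "dirty")))  # suck
```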

Performance Measure

A performance measure is a numeric or evaluative standard that defines how successful an agent is in a given environment. For example, a vacuum-cleaner agent might be evaluated by how much of the floor it keeps clean over time.
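One possible performance measure for the vacuum world, sketched below, awards one point for every clean square at every time step. The history format is an assumption made for illustration.

```python
# A sketch of a performance measure: one point per clean square
# per time step. `history` is a list of environment snapshots,
# each mapping square name -> "clean" or "dirty" (an assumed format).
def performance(history):
    return sum(
        1
        for state in history
        for status in state.values()
        if status == "clean"
    )

history = [
    {"A": "dirty", "B": "clean"},  # step 1: one clean square
    {"A": "clean", "B": "clean"},  # step 2: two clean squares
]
print(performance(history))  # 3
```

Different measures encourage different behaviour: rewarding clean squares over time encourages keeping the floor clean, whereas rewarding dirt sucked per step could encourage an agent to dump dirt and re-clean it.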

Single Agent Environment

A single-agent environment consists of a single intelligent agent solving a problem, without the help of any other agent and without opposition from other agents. An example is an intelligent agent solving a crossword puzzle: the agent takes input from the environment (the crossword grid) and solves it based on its percepts; no other agent is involved.

Multi-Agent Environment

A multi-agent environment is one where more than one agent operates, either in collaboration or competition, to achieve individual or shared goals. These agents can be human or artificial and act intelligently within the same environment, for example, two AI agents playing chess or two self-driving cars navigating traffic.

Even if an AI agent is playing against a human in chess, it’s still a multi-agent environment, because the human is making intelligent decisions.

Whether something is treated as an agent or just an object depends on its behaviour:

  • If entity B (like another car) acts intelligently, adapts to its surroundings, and makes decisions, agent A must treat it as an agent.
  • But if entity B behaves randomly or passively, like a driverless car just moving without intelligence (hypothetical), then A can treat it as an object, like waves or falling leaves, simply obeying physics.

[Image: Multi-agent environment]

A multi-agent environment is further divided into two categories:

  • Competitive Multiagent
  • Cooperative Multiagent

Competitive Multiagent

In chess, AI agent A is trying to maximise its performance measure to win, which, by the rules of chess, minimises agent B's (the other agent) performance measure. Thus, chess is a competitive multi-agent environment. 

[Image: Competitive multi-agent environment]

Cooperative Multiagent

On the other hand, in a taxi-driving environment, avoiding collisions maximises the performance measure of all the driving agents, so it is a partially cooperative multi-agent environment in which the agents work together to maximise their performance measures. It is also partially competitive because, for example, only one car can occupy a given parking space.

[Image: Cooperative multi-agent environment]

Communication as Rational Behaviour

In a multi-agent environment, communication itself can be rational behaviour. For example, imagine two self-driving cars facing each other on a narrow road, and one signals the other to give way. In such situations, communication becomes essential.

Randomness as Rational Behaviour

In a multi-agent environment, an agent sometimes behaves randomly in order to maximise its performance measure. For example, in rock, paper, scissors, an agent that chooses its moves at random cannot be predicted, which maximises its chances against an opponent trying to exploit patterns in its play.
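The randomised strategy described above is a one-liner, sketched here for illustration: choosing uniformly among the three moves leaves the opponent nothing to predict.

```python
import random

# A sketch of the randomised rock-paper-scissors strategy:
# each move is chosen uniformly at random, so no opponent
# can exploit a pattern in the agent's play.
MOVES = ("rock", "paper", "scissors")

def random_rps_agent():
    return random.choice(MOVES)

move = random_rps_agent()
```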

Deterministic Environment

If the next state of the environment is completely determined by the current state and the action executed by the agent, then the environment is deterministic. An example is two agents playing chess: the next state of the board is fully determined by the current position and the agent's move, because the rules of chess are fixed.
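A deterministic environment can be sketched as a plain transition function: the next state depends only on the current state and the action. The tiny grid world below is a made-up illustration, not taken from the text.

```python
# A sketch of a deterministic transition function for a toy
# three-square world (an illustrative assumption): the same
# state and action always yield the same next state.
TRANSITIONS = {
    ("square_1", "move_right"): "square_2",
    ("square_2", "move_right"): "square_3",
    ("square_2", "move_left"): "square_1",
}

def next_state(state, action):
    return TRANSITIONS[(state, action)]

# Repeating the same action from the same state always gives
# the same result: the defining property of determinism.
assert next_state("square_1", "move_right") == next_state("square_1", "move_right")
```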

Non-Deterministic Environment

If the next state of the environment cannot be predicted even given the current state and the agent's action, the environment is non-deterministic. For example, when a driving agent applies the brakes, the outcome is non-deterministic: a tyre might burst, or another car might appear unexpectedly while the agent is stopping at a signal.

[Image: Non-deterministic environment]

Stochastic Environment

A stochastic environment is similar to a non-deterministic environment, with one key difference: in a stochastic environment, each possible next state occurs with a known probability. For example, an AI agent broadcasting the weather will report that there is a 25% chance of rain tomorrow.

[Image: Deterministic vs stochastic environments]
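The weather example can be sketched as a stochastic transition: the next state is drawn from a known probability distribution rather than being fixed. The state names and probabilities below match the 25%-rain example above and are otherwise illustrative.

```python
import random

# A sketch of a stochastic transition: the next state is sampled
# from a known distribution (25% rain, 75% no rain, as in the
# weather example above).
def next_weather():
    return random.choices(["rain", "no rain"], weights=[0.25, 0.75])[0]

# Sampling many times recovers the underlying probability.
samples = [next_weather() for _ in range(10_000)]
rain_fraction = samples.count("rain") / len(samples)
```

This is what separates stochastic from merely non-deterministic: the agent can plan with expected outcomes because the probabilities are known.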

Episodic Environment

In an episodic environment, the agent's experience is divided into small atomic episodes: in each episode, the agent perceives the environment and takes a single action. An action in one episode does not affect the next episode, nor is the current episode affected by previous ones. An example is an AI agent on an assembly line testing machine parts for defects: the test consists of a sequence of episodes, one per part, and the result of one episode is independent of the others.

[Image: Episodic testing by an AI agent]

Sequential Environment

A sequential environment is one in which the agent's current action also affects future states and results. The agent needs to make decisions more intelligently and carefully to maximise its performance measure. Examples include an AI agent playing chess, where each move shapes the positions that follow, and an agent driving a car.

Static Environment

An environment that does not change while the agent is deliberating is considered static. An example is a crossword puzzle board, which does not change until the agent solves it.

Dynamic Environment

An environment that changes continuously while the agent is deliberating is considered dynamic. An example is an autonomous AI car driving on the road.

[Image: AI agent driving a car]

Semi-dynamic Environment

In a semi-dynamic environment, the environment itself does not change with the passage of time, but the passage of time affects the agent's score or performance measure. An example is adding a clock to a chess game: the board does not change while the agent thinks, but the time spent thinking affects its performance.

[Image: AI playing chess with a timer]

Discrete vs Continuous Environment

An AI environment can further be classified as discrete or continuous, and that classification depends on three factors:

  • The state of the environment.
  • How time flows.
  • The percepts and actions of the agent.

The state of the environment is its condition at a given time, and it is the information on which the agent decides what action to take. A discrete environment has a finite number of states, time steps and actions; everything is countable and proceeds step by step, and the agent moves to the next step only after completing the current one. An example is chess: each position on the chessboard is fixed, each move happens step by step, and the agent's possible actions are countable and limited.

A continuous environment has states that vary continuously in real time. In a continuous environment, actions and inputs take values from smooth ranges (instead of a fixed set of steps). An example is an AI self-driving car, where speed, location and steering angle illustrate the three factors above.
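The contrast can be sketched with two state representations. The field names below are illustrative assumptions, not from the article: a chess square takes one of finitely many values, while a car's state is real-valued.

```python
from dataclasses import dataclass

# Discrete state: finitely many possible values, countable step by step.
@dataclass
class ChessState:
    file: str   # one of "a".."h"
    rank: int   # one of 1..8

# Continuous state: values drawn from smooth real-valued ranges.
@dataclass
class CarState:
    speed_kmh: float
    steering_angle_deg: float
    position_m: float

discrete = ChessState(file="e", rank=4)
continuous = CarState(speed_kmh=62.3, steering_angle_deg=-4.75, position_m=1523.8)
```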

Known vs unknown Environment

An environment is considered known if the agent, or the agent's designer, fully understands it: the laws that govern it and the result of each action, whether deterministic or non-deterministic. If the agent is unaware of the outcomes of its actions and of the environment in which it is placed, then the agent must learn about that new environment, and the environment is categorised as unknown.

Conclusion

Through our exploration of intelligent agents, we’ve seen how their behaviour is shaped by the nature of their environment—whether it’s fully observable or partially hidden, deterministic or unpredictable, single-agent or multi-agent. Designing effective AI agents requires understanding how they perceive, decide, and act within these conditions. Real-world environments like taxi driving or chess highlight the importance of adaptability, collaboration, and learning. As AI continues to evolve, building agents that can navigate complex, dynamic settings intelligently is key to creating impactful and responsible solutions.
