Multi-agent systems consist of agents and the environments in which they operate. Agent environments can be classified according to numerous properties, but the most frequently cited classification is probably the one presented by Russell and Norvig. They arrange environments according to the following properties:
accessible vs. inaccessible – if it is possible to obtain complete and up-to-date information about the state of the environment at any moment, the environment is accessible. Usually only virtual environments can be fully accessible, because in reality every sensor provides input that is noisy and incomplete to some extent.

deterministic vs. non-deterministic – if an action performed in the environment is guaranteed to cause a specific effect, the environment is deterministic.
A guaranteed effect means that the agent's actions lead to the intended and anticipated results, with no room for uncertainty. Of course, if the environment is inaccessible to the agent, it will likely be non-deterministic, at least from the agent's viewpoint. Turn-based games are an example of a typical deterministic environment, whereas a room with a thermostat (where the thermostat is the agent) is a good example of a non-deterministic environment, since the thermostat's action does not necessarily lead to a change in temperature (if, for example, a window is open).
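The thermostat example can be sketched in a few lines of Python. This is a hypothetical toy model (the class name, the 30% window probability, and the temperature deltas are all assumptions for illustration), showing how a hidden factor outside the agent's control makes the same action produce different outcomes:

```python
import random

class ThermostatEnvironment:
    """Toy non-deterministic environment: the 'heat' action only raises the
    temperature if a window (which the agent cannot observe) is closed."""

    def __init__(self, temperature=18.0):
        self.temperature = temperature

    def step(self, action):
        window_open = random.random() < 0.3   # hidden state, outside the agent's control
        if action == "heat" and not window_open:
            self.temperature += 1.0           # intended effect of the action
        elif window_open:
            self.temperature -= 0.5           # heat escapes through the open window
        return self.temperature

# The same action from the same starting state can yield different results:
outcomes = {ThermostatEnvironment(18.0).step("heat") for _ in range(100)}
```

From the thermostat's viewpoint the environment is non-deterministic: `outcomes` will almost certainly contain both possible temperatures, even though the agent issued the identical `"heat"` action every time.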
static vs. dynamic – the environment is static when the agent is the only thing that changes it. If the environment changes during the agent's deliberation (i.e., its state depends on time), it is dynamic. Again, real environments are often dynamic (e.g., traffic in a city), while some artificial environments are static (consider turn-based games such as chess again).
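A minimal sketch of this distinction, using two hypothetical classes (the names and numbers are invented for illustration): a dynamic environment advances on its own while the agent deliberates, whereas a static one changes only through the agent's actions:

```python
class TrafficEnvironment:
    """Dynamic: the state advances with time, regardless of the agent."""

    def __init__(self):
        self.clock = 0
        self.cars_passed = 0

    def tick(self):
        # The world moves on its own, even if the agent does nothing.
        self.clock += 1
        self.cars_passed += 2

class ChessEnvironment:
    """Static: the board changes only when the agent makes a move."""

    def __init__(self):
        self.moves = []

    def play(self, move):
        self.moves.append(move)

traffic = TrafficEnvironment()
for _ in range(3):          # time passes while the agent is still thinking
    traffic.tick()

chess = ChessEnvironment()  # nothing happens here until the agent acts
```

After three ticks the traffic environment has changed by itself, while the chess environment is exactly as the agent left it.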
discrete vs. continuous – this is determined by whether the set of possible actions in the environment is finite or infinite. If the agent has only a specific, finite set of possible actions at any given moment, the environment is discrete. Otherwise, when the agent has an infinite number of alternatives, the environment is continuous.
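The distinction can be sketched with two hypothetical action models (the action names and the bid rule are assumptions for illustration): a finite, enumerable action set versus a choice from an unbounded numeric range:

```python
# Discrete environment: the agent chooses from a finite, enumerable set.
CHESS_LIKE_ACTIONS = {"move_pawn", "move_knight", "castle", "resign"}

def is_legal_discrete(action: str) -> bool:
    return action in CHESS_LIKE_ACTIONS

# Continuous environment: the agent picks a value from an infinite range,
# e.g. any positive bid amount.
def is_legal_continuous(bid: float) -> bool:
    return bid > 0.0

assert is_legal_discrete("castle")
assert not is_legal_discrete("file_lawsuit")  # not in the finite set
assert is_legal_continuous(0.0001) and is_legal_continuous(1_000_000.5)
```

In the discrete case every possible action can be listed in advance; in the continuous case there are infinitely many legal values, so no such list exists.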
A betting game is a discrete environment: the gambler can place a bet only on a specific, limited number of betting fields. The legal system, on the other hand, is a continuous environment: people have an unlimited number of ways to, for instance, close deals or defend themselves in court.

episodic vs. non-episodic – an episodic environment is one in which the agent operates in separate sections (episodes) that are independent of one another.
The agent's state in one episode has no effect on its state in another. Human life takes place in a non-episodic environment, because our past experiences affect our future behavior. An operating system, on the other hand, is an episodic environment, since we can reinstall it: programs (agents) can then be set up on a "clean system" without any interference from the same programs installed on the old system.
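A small sketch of the episodic case, where `reset` plays the role of reinstalling the operating system (the class and method names are invented for illustration): each episode starts from a fresh state, so nothing carries over:

```python
class EpisodicEnvironment:
    """Each episode starts from the same clean state; history does not carry over."""

    def reset(self):
        self.state = {"installed": []}  # the 'clean system'
        return self.state

    def step(self, program):
        self.state["installed"].append(program)
        return self.state

env = EpisodicEnvironment()
env.reset()
env.step("browser")
env.step("editor")
first_episode = list(env.state["installed"])

env.reset()  # new episode: everything from the previous one is gone
second_episode = list(env.state["installed"])
```

After the second `reset`, the programs installed in the first episode have no effect on the new one, which is exactly the independence that defines an episodic environment.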
We can also differentiate environments based on their spatial characteristics, which is particularly useful in the case of agent-based models:

dimensional vs. dimensionless – if spatial characteristics are important aspects of the environment and the agent considers distance in its decision making, the environment is dimensional.
If agents do not take space into consideration, the environment is dimensionless. Real environments are usually dimensional, as we naturally perceive and account for the spatial qualities of our surroundings. Stock markets, by contrast, are nearly fully electronic today, so it does not matter where a person is physically present: they can buy or sell stocks from anywhere. In this kind of environment, spatial characteristics have no influence on the agents' decision making, and it is therefore dimensionless.
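The contrast can be sketched with two hypothetical decision rules (the store and quote data are invented for illustration): a dimensional agent weights its options by distance, while a dimensionless agent ignores location entirely:

```python
import math

def nearest_store(position, stores):
    """Dimensional: a shopper-agent picks the option closest to its location."""
    return min(stores, key=lambda s: math.dist(position, s["location"]))

def best_quote(quotes):
    """Dimensionless: an electronic trader-agent ignores where sellers are
    and compares prices only."""
    return min(quotes, key=lambda q: q["price"])

stores = [{"name": "A", "location": (0, 1)},
          {"name": "B", "location": (5, 5)}]
quotes = [{"seller": "X", "price": 101.5},
          {"seller": "Y", "price": 99.8}]

assert nearest_store((0, 0), stores)["name"] == "A"  # distance matters
assert best_quote(quotes)["seller"] == "Y"           # location never enters the decision
```

Dropping the `location` field entirely would not change the trader's decision at all, which is what makes its environment dimensionless.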