Rational performance of an agent - artificial intelligence:
To summarize, an agent takes input from its environment and acts upon that environment. The rational performance of an agent must be assessed in terms of the task it was meant to perform, its awareness and knowledge of the environment, and the actions it is actually able to take. This performance should be measured objectively, independently of any internal measures used by the agent. In the English language, the word 'autonomy' means the ability to govern one's own actions independently. Here, we need to specify the extent to which an agent's behaviour is shaped by its surroundings, and we can say that:
- The autonomy of an agent is measured by the extent to which its behaviour is determined by its own experience.
At one extreme, an agent might pay no attention at all to input from its environment, in which case its actions are determined entirely by its built-in knowledge. At the other extreme, an agent that begins with no built-in knowledge to act on has no choice but to act randomly, which is undesirable. Hence, it is desirable to strike a balance between complete autonomy and no autonomy.

Thinking of human agents, we are born with certain reflexes which govern our actions to begin with. However, through our ability to learn from our environment, we begin to act more autonomously as a result of our experiences in the world. Imagine a baby learning to crawl: it must use built-in knowledge to enable it to correctly employ its arms, legs and fingers; otherwise it would just thrash around. Moreover, as it moves and bumps into things, it learns to avoid objects in the environment. By the time we leave home, we are expected to be fully autonomous agents ourselves. We expect something similar of the agents we build for AI tasks: their autonomy increases in line with their experience of the environment.
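The balance described above can be sketched in code. The following is a minimal illustrative sketch, not a standard implementation: the class name, the percept/action strings, and the reflex-table design are all hypothetical choices made for this example. The agent starts with built-in knowledge (a fixed percept-to-action table, its "reflexes") and, like the crawling baby, lets remembered bad experiences override those reflexes.

```python
class LearningAgent:
    """Hypothetical agent combining built-in reflexes with learned avoidance."""

    def __init__(self, reflexes):
        # Built-in knowledge: a fixed mapping from percepts to actions.
        self.reflexes = dict(reflexes)
        # Learned knowledge: percepts remembered as harmful, to be avoided.
        self.bad_percepts = set()

    def act(self, percept):
        # Experience takes priority over the built-in reflex table.
        if percept in self.bad_percepts:
            return "avoid"
        # Otherwise fall back on built-in knowledge, or explore by default.
        return self.reflexes.get(percept, "explore")

    def learn(self, percept, outcome):
        # After "bumping into" something, remember it for next time.
        if outcome == "bump":
            self.bad_percepts.add(percept)


agent = LearningAgent({"open_space": "move_forward"})
print(agent.act("wall"))      # no reflex, no experience yet: "explore"
agent.learn("wall", "bump")   # experience: bumping into the wall hurt
print(agent.act("wall"))      # behaviour now shaped by experience: "avoid"
```

With an empty `bad_percepts` set, the agent's behaviour is determined entirely by its built-in table (no autonomy); as experience accumulates, more of its behaviour is determined by what it has learned, matching the definition of autonomy given above.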