Autonomous Rational Agents:
In many cases, it is inaccurate to talk about a single program or a single robot, as the multi-purpose, multi-tasking combination of hardware and software in some intelligent systems is considerably more complicated. Instead, we'll follow the lead of Russell and Norvig and describe "AI" through the paradigm of autonomous rational agents. We're going to use the definitions from chapter 2 of Russell and Norvig's textbook, starting with these two:
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.
A rational agent is one that does the right thing.
We see that the word 'agent' covers humans (where the sensors are the senses and the effectors are the physical body parts) as well as robots (where the sensors are devices like cameras and touch pads and the effectors are various motors) and personal computers (where the sensors are the keyboard and mouse and the effectors are the monitor and speakers).
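The definition above can be made concrete in code: an agent is anything that maps what it has perceived so far to an action. A minimal sketch follows, using Russell and Norvig's two-square vacuum world as the example; the class and method names (`Agent`, `ReflexVacuumAgent`, `perceive`, `act`) are illustrative choices, not taken from any official codebase.

```python
class Agent:
    """An agent perceives its environment through sensors (here,
    the `perceive` method) and acts upon it through effectors
    (here, the action returned by `act`)."""

    def __init__(self):
        self.percepts = []  # history of everything sensed so far

    def perceive(self, percept):
        self.percepts.append(percept)

    def act(self):
        raise NotImplementedError


class ReflexVacuumAgent(Agent):
    """Two-square vacuum world: each percept is a (location, status)
    pair, and the agent sucks up dirt or moves to the other square."""

    def act(self):
        location, status = self.percepts[-1]
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"
```

For example, after `agent.perceive(("A", "Dirty"))`, calling `agent.act()` returns `"Suck"`. The point of the abstraction is that the same `perceive`/`act` interface fits a human, a robot, or a personal computer; only the sensors and effectors behind it differ.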
To verify whether an agent has acted rationally, we require an objective measure of how successful it has been, and we need to decide when to make an evaluation using this measure. When designing an agent, it is important to think hard about how to evaluate its performance, and this evaluation should be independent of any internal measures the agent uses (for example as part of a heuristic search - see the next lecture). The performance should be measured in terms of how rationally the agent acted, which depends not only on how well it did at a particular task, but also on what the agent perceived from its environment, what the agent knew about its environment, and what actions the agent could actually perform.
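The separation between the agent's internal workings and the external performance measure can be sketched as follows. Here the environment, not the agent, keeps the score; the two-square vacuum world and the scoring rule (one point per clean square per time step) are illustrative assumptions, not the textbook's official measure.

```python
def reflex_agent(percept):
    """A simple vacuum-world agent; it never sees its own score."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"


def evaluate(agent, steps=10):
    """Run the agent in a two-square world and apply an objective
    performance measure: +1 per clean square per time step. The
    measure lives in the environment, independent of the agent."""
    world = {"A": "Dirty", "B": "Dirty"}
    location = "A"
    score = 0
    for _ in range(steps):
        action = agent((location, world[location]))
        if action == "Suck":
            world[location] = "Clean"
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
        # The environment awards points; the agent has no access to this.
        score += sum(1 for status in world.values() if status == "Clean")
    return score
```

Note the design choice: `evaluate` could be swapped for a harsher measure (say, one that also penalises each movement) without touching the agent at all, which is exactly why the evaluation should be kept independent of the agent's internals.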