Example of the horizon problem:
It is also worth bearing in mind the horizon problem, which arises because a game-playing agent cannot see sufficiently far into the search space. An example of the horizon problem given in Russell and Norvig is the promotion of a pawn to a queen in chess. In certain board states, this promotion is inevitable, but it can be forestalled for a certain number of moves. With a cutoff search at a fixed depth, the inevitability may not be noticed until it is too late, and the agent wasting moves trying to forestall the promotion would have been better off doing something else with the moves it had available.
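To make the cutoff idea concrete, here is a minimal sketch of depth-limited minimax on a toy game tree (nested lists whose leaves are payoffs). The tree, the averaging heuristic and all function names are hypothetical illustrations, not from the text above; the point is only that a consequence hidden below the depth limit is invisible to the evaluation function at the cutoff.

```python
def is_terminal(state):
    # Leaves of the toy tree are plain numbers (final payoffs).
    return not isinstance(state, list)

def evaluate(state):
    # Hypothetical heuristic used at the cutoff: average the payoffs
    # visible one ply down, ignoring anything deeper (the "horizon").
    leaves = [s for s in state if not isinstance(s, list)]
    return sum(leaves) / len(leaves) if leaves else 0

def minimax_cutoff(state, depth, maximizing):
    """Depth-limited minimax: true utilities at terminals, a heuristic
    estimate at the depth cutoff."""
    if is_terminal(state):
        return state                      # exact utility
    if depth == 0:
        return evaluate(state)            # cutoff: estimate, not truth
    values = [minimax_cutoff(s, depth - 1, not maximizing) for s in state]
    return max(values) if maximizing else min(values)

# The right branch looks attractive one ply down (a visible 20),
# but hides a disastrous subtree beyond the cutoff.
tree = [[3, 12], [20, [-50, -40]]]
print(minimax_cutoff(tree, 1, True))   # shallow search: 20
print(minimax_cutoff(tree, 6, True))   # deep search sees the trap: 3
```

With a depth-1 cutoff the heuristic prefers the branch containing the visible 20, exactly the kind of mistake the horizon problem describes; searching deeper reveals the trap and switches the choice.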
In the card game example above, game states are collections of cards. One possible evaluation function would be to add up the card values and take that total as the score if it is an even number, but score it as zero if the sum is odd. This evaluation function matches exactly the authentic scores in goal states, yet it is still not a good idea. Imagine the cards dealt were: 10, 3, 7 and 9. If player one were forced to cut off the search after only the first card choice, then the cards would score: 10, 0, 0 and 0 respectively. Hence player one would choose card 10, which would be disastrous, as this inevitably leads to player one losing the game by at least twelve points. If we scale the game up to choosing cards from 40 rather than 4, we can see that a more sophisticated heuristic involving the cards left unchosen would be a better idea.
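The even/odd evaluation function above, and the misleading scores it produces at a one-card cutoff, can be sketched as follows (a minimal illustration; the function name is an assumption, not part of the original text):

```python
def evaluate(cards):
    """Evaluation from the text: the sum of a hand of cards if the sum
    is even, otherwise zero. Exact in goal states, misleading earlier."""
    total = sum(cards)
    return total if total % 2 == 0 else 0

# Cutting off search after player one's first pick from the deal
# 10, 3, 7, 9 scores the choices 10, 0, 0 and 0 respectively,
# so card 10 looks best -- the disastrous move described above.
deal = [10, 3, 7, 9]
scores = [evaluate([card]) for card in deal]
print(scores)                            # [10, 0, 0, 0]
print(deal[scores.index(max(scores))])   # 10
```

Only the single-card hands are visible at this cutoff, so the odd cards all collapse to zero and the heuristic cannot tell that taking 10 dooms player one to an odd-summed hand.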