
IDA* Search - Artificial Intelligence:

A* search is a sophisticated and successful search strategy. However, a problem with A* search is that it must keep every state it generates in memory, so memory is frequently a much bigger constraint than time when designing agents to undertake A* searches. We solved the same problem for breadth-first search by using iterative deepening search (IDS), and we can do something similar with A*.

Like IDS, an IDA* search is a series of depth-first searches where the depth limit is increased after every iteration. However, the depth is not measured in terms of the path length, as it is in IDS, but rather in terms of the A* combined evaluation function f(n) described above. To do this, we define contours as regions of the search space containing states whose f-value is below some limit, as shown pictorially here:

 

[Figure: contours of states with increasing f(n) limits in an IDA* search]

We see that we could discard M2 straight away, saving ourselves three nodes in the search space. We could reject M3 after we came across the nine, and in the end M4 turns out to be better than M1 for player one. In conclusion, using alpha-beta pruning we avoided looking at 5 of the 16 end nodes - around 30%. If the calculation to assess the scores at end-game states (or to estimate them with an evaluation function) is computationally costly, then this saving could enable a much larger search. In addition, this kind of pruning can occur anywhere on the tree. The general principles are that:

1. Given a node N which may be chosen by player one, if there is another node, X, along any path, such that (a) X may be chosen by player two, (b) X is on a higher level than N, and (c) X has been shown to guarantee a worse score for player one than N, then all the nodes with the same parent as N may be pruned.

2. Given a node N which may be chosen by player two, if there is another node, X, along any path, such that (a) X may be chosen by player one, (b) X is on a higher level than N, and (c) X has been shown to guarantee a better score for player one than N, then all the nodes with the same parent as N may be pruned.

As a question, which of these principles did we use in the M1 - M4 pruning example above? (To make it easier, the relevant N's and X's have been written on the diagram.)

Because we can prune using the alpha-beta method, it makes sense to perform a depth-first search using the minimax principle. Compared to a breadth-first search, a depth-first search reaches goal states earlier, and this information can be used to determine the scores guaranteed for a player at particular board states, which in turn is used to perform alpha-beta pruning. If a game-playing agent used a breadth-first search instead, then only right at the end of the search would it reach the goal states and begin to perform minimax calculations. Hence the agent would miss many opportunities for pruning.
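As a concrete illustration of this idea, here is a minimal sketch of depth-first minimax search with alpha-beta pruning. The game interface assumed here (successors, is_terminal, evaluate, and the alphabeta function name itself) is invented for illustration and does not come from these notes; scores are taken from player one's point of view.

def alphabeta(state, depth, alpha, beta, maximising, game):
    # Return the minimax value of `state`, skipping branches that
    # cannot affect the final decision (alpha-beta pruning).
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)          # score from player one's point of view
    if maximising:                           # player one to move
        value = float("-inf")
        for child in game.successors(state):
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:                # player two already has a better option higher up
                break                        # prune the remaining siblings
        return value
    else:                                    # player two to move
        value = float("inf")
        for child in game.successors(state):
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, game))
            beta = min(beta, value)
            if beta <= alpha:                # player one already has a better option higher up
                break                        # prune the remaining siblings
        return value

A search of this kind would typically be started with alphabeta(start_state, cutoff_depth, float("-inf"), float("inf"), True, game).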

Using a depth-first search with alpha-beta pruning is sensitive to the order in which we try the operators in our search. In the example above, if we had chosen to look at move M4 first, then we would have been able to do more pruning, because of the high minimum value (11) of that branch. It is often worth spending some time working out how best to order a set of operators, as this can greatly increase the amount of pruning that can occur; one simple ordering scheme is sketched below.
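One simple way to exploit this, sketched under the same assumed game interface as the example above, is to sort the successors by a cheap static evaluation before recursing on them, so that the most promising moves are tried first:

def ordered_successors(state, game, maximising):
    # Assumed helper, not part of the notes: order moves so the player to move
    # tries the strongest-looking ones first, which tends to increase pruning.
    children = list(game.successors(state))
    children.sort(key=game.evaluate, reverse=maximising)
    return children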

It is clear that a depth-first minimax search with alpha-beta pruning dominates minimax search alone. In fact, if the effective branching rate of a normal minimax search is b, then using alpha-beta pruning reduces this rate to roughly √b. In chess, this means that the effective branching rate reduces from around thirty to around six (√30 ≈ 5.5), which means that an alpha-beta search can look further moves ahead than a normal minimax search with the same cutoff.
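The reason this helps so much is that searching to depth d with plain minimax examines on the order of b^d positions, whereas alpha-beta with good move ordering examines on the order of b^(d/2). The numbers below are purely illustrative and are not measurements from these notes:

# Illustrative node-count comparison (example numbers only).
b, d = 30, 4
plain_minimax = b ** d            # about 810,000 positions at depth 4
alpha_beta = round(b ** (d / 2))  # about 900 positions with good move ordering
print(plain_minimax, alpha_beta)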

Each node in a contour scores less than a particular value, and IDA* search agents are told how much to increase the contour limit by on each iteration. This defines the depth for successive searches. When using contours, it is useful for the function f(n) to be monotonic: f is monotonic if, whenever an operator takes a state s1 to a state s2, then f(s2) >= f(s1). In other words, if the value of f never decreases along a path, then f is monotonic. As an exercise, why do we want monotonicity to ensure optimality in IDA* search?
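To make the algorithm concrete, here is a minimal sketch of IDA* in Python. It treats the contour limit as a threshold on f(n) = g(n) + h(n) and, on each iteration, raises the threshold to the smallest f-value that exceeded it on the previous pass (one common choice; the notes above only say the agent is told how much to increase the limit). The problem interface assumed here (successors yielding (child, step_cost) pairs, is_goal, and the heuristic h) is invented for illustration.

import math

def ida_star(start, successors, is_goal, h):
    threshold = h(start)                      # first contour limit: f(start)
    path = [start]

    def search(g, limit):
        node = path[-1]
        f = g + h(node)
        if f > limit:
            return f                          # node lies outside the current contour
        if is_goal(node):
            return "FOUND"
        smallest_excess = math.inf            # smallest f-value beyond the contour
        for child, cost in successors(node):
            if child in path:                 # avoid cycles along the current path
                continue
            path.append(child)
            result = search(g + cost, limit)
            if result == "FOUND":
                return "FOUND"
            smallest_excess = min(smallest_excess, result)
            path.pop()
        return smallest_excess

    while True:                               # one depth-first search per contour
        result = search(0, threshold)
        if result == "FOUND":
            return path                       # path from the start state to a goal
        if result == math.inf:
            return None                       # no goal is reachable
        threshold = result                    # grow the contour to the next f-value

With a monotonic (and hence admissible) f, the threshold never overshoots the cost of the optimal solution, so the first goal state found within a contour is an optimal one - which is the point of the exercise question above.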
