
IDA* Search - Artificial Intelligence:

A* search is a sophisticated and successful search strategy. However, a problem with A* search is that it must keep every state it generates in memory, so memory is frequently a much bigger constraint than time when designing agents to undertake A* searches. We solved the same problem for breadth-first search by using iterative deepening search (IDS), and we can do something similar for A*.

Like IDS, an IDA* search is a series of depth-first searches where the depth bound is increased after every iteration. However, the depth is not measured in terms of path length, as it is in IDS, but in terms of the A* cost function f(n) described above. To do this, we define contours as regions of the search space containing states whose f-value is below some limit, as shown pictorially here:

 

[Figure: contours in an IDA* search space]
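The iterated depth-first searches with a growing f(n) bound can be sketched as follows. This is a minimal illustration, not a definitive implementation; the hooks `successors`, `step_cost`, `heuristic`, and `is_goal` are hypothetical stand-ins for a concrete problem definition.

```python
def ida_star(start, successors, step_cost, heuristic, is_goal):
    """Iterative-deepening A*: repeated depth-first searches bounded by f = g + h."""
    bound = heuristic(start)   # first contour limit: the f-cost of the start state
    path = [start]

    def search(g):
        state = path[-1]
        f = g + heuristic(state)
        if f > bound:
            return f           # state lies outside the current contour
        if is_goal(state):
            return True
        minimum = float("inf") # smallest f-value that exceeded the bound
        for child in successors(state):
            if child in path:  # avoid cycles on the current path
                continue
            path.append(child)
            result = search(g + step_cost(state, child))
            if result is True:
                return True
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = search(0)
        if result is True:
            return path                # goal found within the current contour
        if result == float("inf"):
            return None                # search space exhausted: no solution
        bound = result                 # grow the contour to the next f-value
```

For example, on a small graph with zero heuristic this behaves like iterative deepening over path cost and returns the cheapest path to the goal.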

We can see that we could discard M2 straight away, saving ourselves three nodes in the search space. We could reject M3 after we came across the nine, and in the end M4 turned out to be better than M1 for player one. In conclusion, using alpha-beta pruning, we avoided looking at 5 of the 16 end nodes - around 30%. If the calculation to assess the scores at end-game states (or to estimate them with an evaluation function) is computationally costly, then this saving could enable a much larger search. Moreover, this kind of pruning can occur anywhere on the tree. The general principles are that:

1. Given a node N which may be chosen by player one, if there is another node, X, along any path, such that (a) X may be chosen by player two, (b) X is higher up the tree than N, and (c) X has been shown to guarantee a worse score for player one than N does, then all the nodes with the same parent as N may be pruned.

2. Given a node N which may be chosen by player two, if there is a node X along any path such that (a) player one may choose X, (b) X is higher up the tree than N, and (c) X has been shown to guarantee a better score for player one than N does, then all the nodes with the same parent as N can be pruned.

As a question: which of these principles did we use in the M1 - M4 pruning example above? (To make it easy, I've marked the N's and X's on the diagram.)

Because we can prune using the alpha-beta method, it makes sense to perform a depth-first search using the minimax principle. Compared to a breadth-first search, a depth-first search will reach goal states earlier, and this information can be used to determine the scores guaranteed for a player at particular board states, which in turn is used to perform alpha-beta pruning. If a game-playing agent used a breadth-first search instead, then only at the very end of the search would it reach the goal states and start to perform minimax calculations. Hence, the agent would miss much potential to perform pruning.
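The depth-first minimax search with alpha-beta pruning can be sketched as follows. This is an illustrative outline, with `children(state)` and `score(state)` as hypothetical hooks for a real game's move generator and evaluation function.

```python
def alphabeta(state, children, score, maximizing,
              alpha=float("-inf"), beta=float("inf")):
    """Depth-first minimax with alpha-beta pruning on an abstract game tree."""
    moves = children(state)
    if not moves:                    # leaf: return the evaluated score
        return score(state)
    if maximizing:
        best = float("-inf")
        for child in moves:
            best = max(best, alphabeta(child, children, score, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:        # player two already has a better option: prune
                break
        return best
    else:
        best = float("inf")
        for child in moves:
            best = min(best, alphabeta(child, children, score, True, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:        # player one already has a better option: prune
                break
        return best
```

On a two-ply tree where the first branch guarantees 3 and the second branch opens with a 2, the second branch is cut off after its first leaf, exactly as in the discussion above.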

Using a depth-first search with alpha-beta pruning is sensitive to the order in which we try operators in our search. In the example above, if we had chosen to look at move M4 first, then we would have been able to do even more pruning, due to the high minimum value (11) from that branch. It is often worth spending some time working out how best to order a set of operators, as this can greatly increase the amount of pruning that occurs.
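The effect of move ordering can be seen with a small experiment: search the same two-level tree twice, once with the strong branch examined last and once first, and count how many leaves get evaluated. The tree and scores here are made up for illustration.

```python
def alphabeta_count(node, tree, scores, maximizing, alpha, beta, counter):
    """Alpha-beta search that counts leaf evaluations in counter[0]."""
    if node not in tree:             # leaf: evaluate and count it
        counter[0] += 1
        return scores[node]
    best = float("-inf") if maximizing else float("inf")
    for child in tree[node]:
        value = alphabeta_count(child, tree, scores,
                                not maximizing, alpha, beta, counter)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if alpha >= beta:            # cutoff
            break
    return best

scores = {'a': 3, 'b': 5, 'c': 11, 'd': 12}

weak_first = [0]   # weak branch m1 (min 3) examined before strong branch m2 (min 11)
alphabeta_count('root', {'root': ['m1', 'm2'], 'm1': ['a', 'b'], 'm2': ['c', 'd']},
                scores, True, float('-inf'), float('inf'), weak_first)

strong_first = [0]  # strong branch m2 examined first, giving a high alpha early
alphabeta_count('root', {'root': ['m2', 'm1'], 'm1': ['a', 'b'], 'm2': ['c', 'd']},
                scores, True, float('-inf'), float('inf'), strong_first)
```

Examining the strong branch first evaluates fewer leaves (3 rather than 4 here), because the high guaranteed value lets the search prune the weak branch after its first leaf.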

It is clear that a depth-first minimax search with alpha-beta pruning dominates minimax search alone. In fact, if the effective branching rate of a typical minimax search is b, then in the best case alpha-beta pruning will reduce this rate to the square root of b. In chess, this means that the effective branching rate drops from around 30 to around 6, so an alpha-beta search can look roughly twice as many moves ahead as a normal minimax search with cutoff.
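The chess figure above is just this square-root relationship worked through:

```python
import math

# Best-case effect of alpha-beta pruning on the effective branching rate:
# a rate of b drops to roughly sqrt(b), so a search of the same total cost
# reaches about twice the depth, since b**d == math.sqrt(b)**(2 * d).
b = 30                           # rough branching rate quoted for chess
reduced = math.sqrt(b)
print(f"{b} -> {reduced:.1f}")   # prints "30 -> 5.5", i.e. around 6
```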

Each node in a contour scores less than a particular value, and IDA* search agents are told how much to increase the contour limit by on each iteration. This defines the depth for successive searches. When using contours, it is important for the function f(n) to be monotonic. That is, f is monotonic if, whenever an operator takes a state s1 to a state s2, then f(s2) >= f(s1). In other words, if the value of f never decreases along a path, then f is monotonic. As an exercise, why do we need monotonicity to ensure optimality in IDA* search?
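The monotonicity condition is easy to check mechanically, assuming f(n) = g(n) + h(n); the path costs and heuristic values below are made up for illustration.

```python
def is_monotonic(path_g, path_h):
    """True if f = g + h never decreases along the path.

    path_g: cumulative path costs g(n) along a path, in order.
    path_h: heuristic estimates h(n) at the same states.
    """
    f_values = [g + h for g, h in zip(path_g, path_h)]
    return all(f2 >= f1 for f1, f2 in zip(f_values, f_values[1:]))

# f rises 4, 4, 5 along this path, so it is monotonic
assert is_monotonic([0, 1, 3], [4, 3, 2])

# here f goes 4, 2, 3: it dips, so f is not monotonic
assert not is_monotonic([0, 1, 3], [4, 1, 0])
```

A non-monotonic f is exactly the situation where a state can fall outside the current contour even though a cheaper solution lies through it, which is why the exercise above matters for optimality.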

Posted Date: 10/2/2012 2:47:28 AM | Location : United States
