Weight Training Calculations - Artificial Intelligence:

Because we have more weights in our network than in perceptrons, we first have to introduce some notation: wij denotes the weight between unit i and unit j. As with perceptrons, we will calculate a value Δij to add on to each weight in the network after an example has been presented. To calculate the weight changes for a particular example, E, we first start with the information about how the network should perform on E. That is, we write down the target values ti(E) that each output unit Oi should produce for E. Note that, for categorisation problems, ti(E) will be 0 for all the output units except one, which is the unit associated with the correct categorisation for E. For that unit, ti(E) will be 1.

[Figure: Weight Training Calculations]
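For example, in a three-way categorisation problem where E belongs to the second category, the targets would simply be t1(E) = 0, t2(E) = 1 and t3(E) = 0 (this particular three-class setup is just an illustration, not taken from the text above).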

Next, example E is propagated through the network so that we can record all the observed values oi(E) for the output nodes Oi. At the same time, we record all the calculated values hi(E) for the hidden nodes. Then, for each output unit Ok, we calculate its error term as follows:

δOk = ok(E)(1 - ok(E))(tk(E) - ok(E))
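As an illustration (with numbers chosen arbitrarily rather than taken from the text), if an output unit produced ok(E) = 0.8 when the target was tk(E) = 1, its error term would be δOk = 0.8 * (1 - 0.8) * (1 - 0.8) = 0.032.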

The error terms from the output units are used to calculate error terms for the hidden units. In fact, this method gets its name because we propagate this information backwards through the network. For each hidden unit Hk, we calculate the error term in the following manner:

δHk = hk(E)(1 - hk(E)) * ΣO (wkO * δO)

where the sum is taken over every output unit O.

In English, this means that we take the error term for each output unit and multiply it by the weight from hidden unit Hk to that output unit. We then add all of these products together and multiply the sum by hk(E)*(1 - hk(E)).
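Continuing the illustrative numbers above, if a hidden unit had hk(E) = 0.6 and were connected to that single output unit by a weight of 0.5, its error term would be δHk = 0.6 * (1 - 0.6) * (0.5 * 0.032) = 0.00384 (again, these values are made up purely for illustration).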

Having calculated all the error values associated with each unit (hidden and output), we can now translate this information into the weight changes Δij between units i and j. The calculation is as follows: for weights wij between input unit Ii and hidden unit Hj, we add on:

Δij = η * δHj * xi

[Remembering that xi is the value presented to the i-th input node for example E; that η is a small value known as the learning rate; and that δHj is the error value we calculated for hidden node Hj using the formula above.]
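With the same illustrative numbers, taking a learning rate of η = 0.1 and an input of xi = 1.0, the weight from Ii to that hidden unit would change by Δij = 0.1 * 0.00384 * 1.0 = 0.000384.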

For weights wij between hidden unit Hi and output unit Oj, we add on:

Δij = η * δOj * hi(E)

[Remembering that hi(E) is the output from hidden node Hi when example E is propagated through the network, and that δOj is the error value we calculated for output node Oj using the formula above.]
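Likewise, with η = 0.1, hi(E) = 0.6 and δOj = 0.032 from the illustration above, the weight from Hi to Oj would change by Δij = 0.1 * 0.032 * 0.6 = 0.00192.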

Each change Δij is added on to the corresponding weight, and this concludes the calculation for example E. The next example is then used to tweak the weights further. As with perceptrons, the learning rate is used to ensure that the weights are only moved a small distance for each particular example, so that the training from earlier examples is not lost. Note that the mathematical derivation of the above calculations is based on the derivative of the sigmoid function σ discussed above. For a full description of this, see Chapter 4 of Tom Mitchell's book "Machine Learning".
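To make the whole procedure concrete, here is a minimal sketch in Python of one weight update for a single example, assuming sigmoid units in both the hidden and output layers and using the formulas above. The network sizes, the learning rate and the names (sigmoid, backprop_update, W_ih, W_ho) are illustrative choices rather than anything specified in the text.

import numpy as np

def sigmoid(z):
    # The sigmoid activation sigma(z) = 1 / (1 + e^(-z)).
    return 1.0 / (1.0 + np.exp(-z))

def backprop_update(x, t, W_ih, W_ho, eta=0.1):
    # One weight update for a single example E, where x holds the inputs xi
    # and t holds the targets ti(E).  W_ih[i, j] is the weight from input
    # unit Ii to hidden unit Hj; W_ho[i, j] is the weight from hidden unit
    # Hi to output unit Oj.

    # Propagate E through the network, recording hidden and output values.
    h = sigmoid(x @ W_ih)              # hi(E) for each hidden node
    o = sigmoid(h @ W_ho)              # oi(E) for each output node

    # Error terms for the output units: ok(1 - ok)(tk - ok).
    delta_o = o * (1.0 - o) * (t - o)

    # Error terms for the hidden units, propagated backwards:
    # hk(1 - hk) * sum over output units of (weight to that unit * its error term).
    delta_h = h * (1.0 - h) * (W_ho @ delta_o)

    # Weight changes: eta * (error term of the downstream unit) * (value fed into the weight).
    W_ih = W_ih + eta * np.outer(x, delta_h)
    W_ho = W_ho + eta * np.outer(h, delta_o)
    return W_ih, W_ho

# Illustrative 2-input, 2-hidden, 2-output network and a single example.
rng = np.random.default_rng(0)
W_ih = rng.uniform(-0.5, 0.5, size=(2, 2))
W_ho = rng.uniform(-0.5, 0.5, size=(2, 2))
x = np.array([1.0, 0.0])               # inputs for example E
t = np.array([0.0, 1.0])               # unit 2 is the correct categorisation
W_ih, W_ho = backprop_update(x, t, W_ih, W_ho)

Calling backprop_update once per training example, and cycling through the examples repeatedly, gives the per-example (online) version of the training procedure described above.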
