# Adaptive Process Assignment Help

### Neural Networks - Adaptive Process

**Adaptive Process**

The final process is the synaptic adaptive process in the self-organized formation of a feature map. For the network to be self-organizing, the synaptic weight vector W_{j} of neuron j in the network must change in relation to the input vector x.

The question is how to make this change. According to Hebb's postulate of learning, a synaptic weight is increased when pre-synaptic and post-synaptic activities occur simultaneously.

Such a rule is well suited to associative learning. For the type of unsupervised learning considered here, however, the Hebbian hypothesis in its basic form is unsatisfactory, for the following reason:

Changes in connectivity occur in one direction only, which eventually drives all the synaptic weights into saturation. To overcome this problem, the Hebbian hypothesis is modified by including a forgetting term -g(y_{j})W_{j}, where W_{j} is the synaptic weight vector of neuron j and g(y_{j}) is some positive scalar function of the response y_{j}. The only requirement imposed on the function g(y_{j}) is that the constant term in the Taylor series expansion of g(y_{j}) be zero, so that:

**g(y**_{j}) = 0 for y_{j} = 0................Eqn(18)

The importance of this requirement will become apparent momentarily. Given such a function, the change to the weight vector of neuron j in the lattice can be expressed as:

**ΔW**_{j} = ηy_{j}x - g(y_{j})W_{j}.................Eqn(19)

where η is the learning-rate parameter of the algorithm.

The first term on the right-hand side of Eq. (19) is the Hebbian term and the second term is the forgetting term. To satisfy the requirement of Eq. (18), a linear function for g(y_{j}) is selected, as shown by:

**g(y**_{j}) = ηy_{j}...................................Eqn(20)

Eq. (19) is now simplified by setting:

**y**_{j} = h_{j,i(x)}..................................Eqn(21)

Using Eqs. (19), (20), and (21), we obtain:

**ΔW**_{j} = ηh_{j,i(x)}(x - W_{j})....................Eqn(22)
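As a quick numerical check (a sketch with illustrative values, not part of the original text), the simplified update of Eq. (22) agrees term by term with the Hebbian-plus-forgetting form of Eq. (19) once g(y_{j}) = ηy_{j} from Eq. (20) and y_{j} = h_{j,i(x)} from Eq. (21) are substituted:

```python
import numpy as np

# Hypothetical values chosen only for illustration.
eta = 0.1                  # learning-rate parameter
h = 0.8                    # neighborhood value h_{j,i(x)}, substituted for y_j per Eq. (21)
x = np.array([1.0, 2.0])   # input vector x
w = np.array([0.5, 0.5])   # synaptic weight vector W_j

# Eq. (19) with g(y_j) = eta * y_j (Eq. 20) and y_j = h (Eq. 21)
delta_eq19 = eta * h * x - (eta * h) * w

# Eq. (22): the simplified form
delta_eq22 = eta * h * (x - w)

print(np.allclose(delta_eq19, delta_eq22))  # → True
```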

Finally, using a discrete-time formalism, given the synaptic weight vector W_{j}(n) of neuron j at time n, the updated weight vector W_{j}(n + 1) at time n + 1 is defined by:

**W**_{j}(n + 1) = W_{j}(n) + η(n)h_{j,i(x)}(n)(x - W_{j}(n))...............Eqn(23)

which is applied to every neuron in the lattice that lies inside the topological neighborhood of the winning neuron i. Eq. (23) has the effect of moving the synaptic weight vector W_{i} of the winning neuron i toward the input vector x. The algorithm therefore leads to a topological ordering of the feature map in the input space, in the sense that neurons that are adjacent in the lattice tend to have similar synaptic weight vectors. Eq. (23) is the desired formula for computing the synaptic weights of the feature map.
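One adaptive step of Eq. (23) can be sketched in Python as follows. This is a minimal illustration, assuming a Gaussian neighborhood function h_{j,i(x)} and Euclidean winner selection, neither of which is specified in the text above; the names `som_update`, `grid`, and `sigma` are illustrative.

```python
import numpy as np

def som_update(W, x, eta, sigma, grid):
    """One adaptive step of Eq. (23) over the whole lattice.

    W     : (num_neurons, dim) matrix; row j is the weight vector W_j(n)
    x     : (dim,) input vector
    eta   : learning-rate parameter eta(n)
    sigma : width of the (assumed) Gaussian neighborhood h_{j,i(x)}(n)
    grid  : (num_neurons, 2) lattice coordinates of each neuron
    """
    # Winning neuron i(x): the neuron whose weight vector is closest to x.
    i_win = np.argmin(np.linalg.norm(W - x, axis=1))
    # Topological neighborhood around the winner, shrinking with lattice distance.
    d2 = np.sum((grid - grid[i_win]) ** 2, axis=1)
    h = np.exp(-d2 / (2.0 * sigma ** 2))
    # Eq. (23): W_j(n+1) = W_j(n) + eta * h_j * (x - W_j(n))
    return W + eta * h[:, None] * (x - W)

# Tiny 2x2 lattice example with random initial weights.
grid = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
rng = np.random.default_rng(0)
W = rng.random((4, 2))
W_new = som_update(W, np.array([0.9, 0.1]), eta=0.5, sigma=1.0, grid=grid)
```

Because 0 < η·h_{j,i(x)} < 1 here, each updated weight vector is a convex combination of its old value and x, so every neuron in the neighborhood moves toward the input, with the winner moving the most.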

Expertsmind’s world class education services

We at www.expertsmind.com offer email-based assignment help, homework help and project assistance from the K-12 academic level to the college and university level, including management and engineering studies. Our experts help students in their studies, offering instant tutoring assistance based on their best practiced knowledge and spreading world-class education services through our e-Learning program.

- Quality assignment help assistance 24x7 hrs

- Best qualified tutors' network

- On-time delivery

- Quality assurance before delivery

- 100% original and fresh work
