Parallel tangent learning algorithm for training artificial neural networks
A modified backpropagation training algorithm using a deflecting gradient technique is proposed. The parallel tangent (Partan) gradient is used as the deflecting method to accelerate convergence. This method can also be thought of as a particular implementation of the conjugate gradient method. The Partan gradient consists of two phases, namely climbing along the gradient and accelerating along the parallel tangent. By deflecting the gradient in the acceleration phase, Partan overcomes the inefficient zigzagging of the conventional backpropagation learning algorithm. The effectiveness of the proposed method in accelerating convergence is investigated by applying it to four learning problems with different error surfaces. Simulation shows that, regardless of the complexity of the problem, the Partan backpropagation algorithm converges faster to the solution. In particular, for the exclusive-or problem it converges approximately five times faster than standard backpropagation, while about twice the rate of convergence is obtained for the Encoder/Decoder, Binary-to-local, and Sonar problems.
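The two-phase structure described above can be sketched as a simple fixed-step Partan iteration on a generic differentiable objective. This is an illustrative sketch only, not the paper's implementation: the step sizes `alpha` and `beta`, the fixed-step (rather than line-search) acceleration, and the test function are all assumptions made here for demonstration.

```python
import numpy as np

def partan_minimize(grad, x0, alpha=0.05, beta=0.5, iters=200):
    """Minimal Partan sketch (illustrative, not the paper's exact method).

    Alternates two phases:
      1. a climbing step along the negative gradient, and
      2. an acceleration step along the "parallel tangent", i.e. the
         line from the previous iterate through the climbed point.
    alpha and beta are fixed illustrative step sizes; a practical
    implementation would typically use a line search.
    """
    x_prev = np.asarray(x0, dtype=float)
    # initial plain gradient step to obtain a second point
    x = x_prev - alpha * grad(x_prev)
    for _ in range(iters):
        # phase 1: climb along the negative gradient
        y = x - alpha * grad(x)
        # phase 2: accelerate along the direction (y - x_prev),
        # which deflects the raw gradient and damps zigzagging
        x_next = y + beta * (y - x_prev)
        x_prev, x = x, x_next
    return x

# Usage on an ill-conditioned quadratic, where plain gradient
# descent exhibits the zigzagging that Partan is meant to suppress.
A = np.diag([1.0, 10.0])          # hypothetical test problem
grad = lambda x: A @ x            # gradient of f(x) = 0.5 * x^T A x
x_star = partan_minimize(grad, x0=[5.0, 5.0])
```

In a backpropagation setting, `grad` would be the gradient of the network's error surface with respect to the weights, computed by the usual backward pass; the Partan deflection then replaces the plain weight-update step.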