Parallel tangent learning algorithm for training artificial neural networks

dc.contributor.authorGhorbani, Ali, A.
dc.contributor.authorBhavsar, Virendra, C.
dc.date.accessioned2023-03-01T18:27:53Z
dc.date.available2023-03-01T18:27:53Z
dc.date.issued1993
dc.description.abstractA modified backpropagation training algorithm using a deflecting-gradient technique is proposed. The parallel tangent (Partan) gradient is used as the deflecting method to accelerate convergence; it can also be viewed as a particular implementation of the conjugate gradient method. The Partan gradient consists of two phases: climbing through the gradient and accelerating through the parallel tangent. Partan overcomes the inefficient zigzagging of the conventional backpropagation learning algorithm by deflecting the gradient during the acceleration phase. The effectiveness of the proposed method in accelerating convergence is investigated by applying it to four learning problems with different error surfaces. Simulations show that, regardless of the complexity of the problem, the Partan backpropagation algorithm converges faster to the solution. In particular, for the exclusive-or problem its convergence is approximately five times faster than that of standard backpropagation, while roughly twofold speedups are obtained for the Encoder/Decoder, Binary-to-local, and Sonar problems.
dc.description.copyrightCopyright @ Ali A. Ghorbani and Virendra C. Bhavsar, 1993.
dc.identifier.urihttps://unbscholar.lib.unb.ca/handle/1882/14794
dc.rightshttp://purl.org/coar/access_right/c_abf2
dc.subject.disciplineComputer Science
dc.titleParallel tangent learning algorithm for training artificial neural networks
dc.typetechnical report
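The two-phase Partan scheme described in the abstract, alternating a gradient (climbing) step with an acceleration step along the parallel tangent direction, can be sketched for a generic differentiable objective. This is a minimal illustration of the technique, not the report's exact weight-update rules; the function name `partan_minimize` and the parameters `lr` (learning rate), `beta` (acceleration coefficient), and `steps` are assumptions for the sketch:

```python
import numpy as np

def partan_minimize(grad, x0, lr=0.1, beta=0.5, steps=100):
    """Parallel tangent (Partan) gradient descent sketch.

    Alternates a plain gradient step (climbing phase) with an
    acceleration step along the direction from the point two
    iterations back (parallel tangent phase), deflecting the
    gradient to reduce zigzagging on ill-conditioned surfaces.
    """
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev - lr * grad(x_prev)           # initial gradient step
    for _ in range(steps):
        y = x - lr * grad(x)                 # climbing phase: gradient step
        x_next = y + beta * (y - x_prev)     # acceleration phase: parallel tangent
        x_prev, x = x, x_next
    return x
```

On an ill-conditioned quadratic, where plain gradient descent zigzags, the acceleration term carries progress along the valley floor, which is the behavior the report exploits to speed up backpropagation.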
