Incremental communication for artificial neural networks


Date

1993

Abstract

A learning procedure based on the backpropagation algorithm using incremental communication is presented. In the incremental communication method, instead of communicating the whole value of a variable, only the increment or decrement relative to its previous value is sent on a communication link. The incremental value may be either a fixed-point or a floating-point value. The method is applied to four different error backpropagation networks, and the effect of the precision of the incremental values of activations, weights, and error signals on the convergence behavior is examined. Simulations show that at least 7-bit precision in fixed-point and 2-digit precision in floating-point representations is required for the networks to generalize. With 12-bit fixed-point or 4-digit floating-point precision, almost the same results are obtained as with conventional communication using 32-bit precision. The proposed method of communication can lead to enormous savings in communication cost for implementations of artificial neural networks on parallel computers as well as in direct hardware realizations. The method is applicable to many other types of artificial neural systems and can be incorporated along with other limited-precision strategies for the representation of variables suggested in the literature.
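
As a rough illustration of the idea (a minimal sketch, not the implementation described in the thesis), the code below shows a sender that transmits only the quantized change in a variable and a receiver that reconstructs the variable by accumulating those increments. The class names, the rounding scheme, and the 7-bit fractional setting are illustrative assumptions only.

```python
def quantize_fixed_point(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits (assumed scheme)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale


class IncrementalSender:
    """Transmits only the quantized increment of a variable since the last send."""

    def __init__(self, frac_bits):
        self.frac_bits = frac_bits
        self.last_sent = 0.0  # value as reconstructed on the receiver side

    def send(self, value):
        # Communicate the quantized increment, not the full 32-bit value.
        delta = quantize_fixed_point(value - self.last_sent, self.frac_bits)
        self.last_sent += delta
        return delta


class IncrementalReceiver:
    """Reconstructs the variable by accumulating received increments."""

    def __init__(self):
        self.value = 0.0

    def receive(self, delta):
        self.value += delta
        return self.value


if __name__ == "__main__":
    # 7 fractional bits, matching the lower bound reported in the abstract.
    sender = IncrementalSender(frac_bits=7)
    receiver = IncrementalReceiver()
    for activation in [0.12, 0.37, 0.35, 0.41]:  # hypothetical activation values
        delta = sender.send(activation)
        print(f"sent delta={delta:+.5f}, receiver holds {receiver.receive(delta):.5f}")
```

In a parallel or hardware realization, the same accumulate-on-receive pattern would apply to activations, weights, and error signals exchanged during backpropagation; only the narrow increment needs to cross the communication link.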
