Incremental communication for artificial neural networks
dc.contributor.author | Ghorbani, Ali A.
dc.contributor.author | Bhavsar, Virendra C.
dc.date.accessioned | 2023-03-01T18:26:50Z
dc.date.available | 2023-03-01T18:26:50Z
dc.date.issued | 1993
dc.description.abstract | A learning procedure based on the backpropagation algorithm using incremental communication is presented. In the incremental communication method, instead of communicating the whole value of a variable, only the increment or decrement to its previous value is sent on a communication link. The incremental value may be either a fixed-point or a floating-point value. The method is applied to four different error backpropagation networks, and the effect of the precision of the incremental values of activations, weights, and error signals on convergence behavior is examined. Simulations show that at least 7-bit precision in fixed-point and 2-digit precision in floating-point representations is required for the networks to generalize. With 12-bit fixed-point or 4-digit floating-point precision, almost the same results are obtained as with conventional communication using 32-bit precision. The proposed method of communication can lead to enormous savings in communication cost for implementations of artificial neural networks on parallel computers as well as in direct hardware realizations. The method is applicable to many other types of artificial neural systems and can be combined with other limited-precision strategies for the representation of variables suggested in the literature.
dc.description.copyright | Copyright © Ali A. Ghorbani and Virendra C. Bhavsar, 1993.
dc.identifier.uri | https://unbscholar.lib.unb.ca/handle/1882/14638
dc.rights | http://purl.org/coar/access_right/c_abf2
dc.subject.discipline | Computer Science
dc.title | Incremental communication for artificial neural networks
dc.type | technical report
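
The scheme summarized in the abstract above can be sketched in a few lines: each end of a communication link keeps a running copy of the variable, and only a quantized increment crosses the link at each step. The sketch below is an illustrative reconstruction under stated assumptions, not the authors' implementation; the names `IncrementalLink` and `quantize_increment` are hypothetical, and the sign-plus-fractional-bits fixed-point format is an assumption (the report itself finds roughly 7-bit fixed-point precision necessary for generalization and 12 bits comparable to 32-bit communication).

```python
def quantize_increment(delta, total_bits=12, frac_bits=8):
    """Round an increment to a signed fixed-point grid and saturate it.

    Assumed format: one sign bit, (total_bits - 1 - frac_bits) integer
    bits, frac_bits fractional bits.
    """
    scale = 2.0 ** frac_bits
    max_mag = (2 ** (total_bits - 1) - 1) / scale
    q = round(delta * scale) / scale
    return max(-max_mag, min(max_mag, q))

class IncrementalLink:
    """One link between two processors: only the quantized
    increment to the previous value is communicated."""

    def __init__(self, total_bits=12, frac_bits=8):
        self.total_bits = total_bits
        self.frac_bits = frac_bits
        self.sent = 0.0      # sender's record of what the receiver holds
        self.received = 0.0  # receiver's reconstruction of the variable

    def communicate(self, value):
        delta = value - self.sent
        q = quantize_increment(delta, self.total_bits, self.frac_bits)
        # Both ends advance by the *quantized* increment, so the
        # residual rounding error carries into the next delta instead
        # of accumulating as drift between sender and receiver.
        self.sent += q
        self.received += q
        return self.received

# Usage: a weight drifting by small updates stays tracked to within
# half a quantization step, while each message needs only total_bits.
link = IncrementalLink(total_bits=12, frac_bits=8)
w = 0.0
for step in range(100):
    w += 0.013                     # stand-in for a weight update
    approx = link.communicate(w)
print(abs(w - approx))             # bounded by 2 ** -(frac_bits + 1)
```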