Browsing by Author "Bhavsar, Virendrakumar, C."
Item: 3-D visualization of message passing in distributed programs (1996)
Gobrecht, Cyril; Ware, Colin; Bhavsar, Virendrakumar, C.

This paper describes PVMtrace, a software system for understanding and debugging message passing in distributed programs. In this system, each process is represented as a node in a graph, and the arcs represent the potential communication channels. The transfer of a packet of information from one process to another is shown as a bead-like object moving along an arc; a queue is a string of beads lined up waiting to be processed. Flexible control over time proved essential to the system's usefulness: the animation can be played forwards and backwards in time, both by direct manipulation of the time line and by an animation rate controller. In addition, an automatic time control method moves the animation rapidly through intervals of low activity and slows it down during periods of high activity. The system was initially developed as a PVM debugging tool, but it has also proved useful in other areas; in particular, it has been used to visualize communications in a cellular phone system.

Item: Incremental communication for multilayer neural networks: error analysis (1996)
Ghorbani, Ali, A.; Bhavsar, Virendrakumar, C.

Artificial neural networks (ANNs) involve a large amount of inter-node communication. To reduce the communication cost, as well as the time of the learning process in ANNs, we have earlier proposed an incremental inter-node communication method. In the incremental communication method, instead of communicating the full magnitude of the output value of a node, only the increment or decrement to its previous value is sent on a communication link. In this paper, the effects of the limited-precision incremental communication method on the convergence behavior and performance of multilayer neural networks are investigated.
The nonlinear aspects of representing the incremental values with reduced (limited) precision in the commonly used error backpropagation training algorithm are analyzed. It is shown that the nonlinear effect of a small perturbation in the input(s)/output of a node does not cause instability. The analysis is supported by simulation studies of two problems.
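The incremental communication idea in the second abstract can be sketched in a few lines. The code below is a hypothetical illustration, not the authors' implementation: names such as `IncrementalLink` and the fixed-point quantizer are assumptions, and the quantization scheme (rounding to a grid with a fixed number of fractional bits) stands in for whatever limited-precision representation the paper analyzes. The key points it demonstrates are that only the quantized increment travels over the link, and that the sender tracks the receiver's reconstructed value so quantization error does not accumulate.

```python
def quantize(x, frac_bits):
    # Round x to a fixed-point grid with `frac_bits` fractional bits.
    scale = 1 << frac_bits
    return round(x * scale) / scale

class IncrementalLink:
    """Hypothetical sketch of incremental inter-node communication:
    send only a limited-precision increment of a node's output, and
    let the receiver reconstruct the value by accumulation."""

    def __init__(self, frac_bits=8):
        self.frac_bits = frac_bits
        self.sent = 0.0      # sender's record of the receiver's state
        self.received = 0.0  # receiver's reconstructed output value

    def send(self, output):
        # Communicate only the quantized change, not the full magnitude.
        delta = quantize(output - self.sent, self.frac_bits)
        self.sent += delta      # track receiver state to avoid drift
        self.received += delta  # the "transmitted" small increment
        return delta

link = IncrementalLink(frac_bits=8)
for out in [0.50, 0.52, 0.51, 0.90]:
    link.send(out)

# The reconstructed value tracks the true output to within one
# quantization step of the 8-fractional-bit grid.
assert abs(link.received - 0.90) <= 2 ** -8
```

Because the sender accumulates the same quantized deltas it transmits, the residual error stays bounded by the quantization step rather than growing with the number of updates; this is the kind of bounded-perturbation behavior the paper's error analysis addresses.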