While the past few years have witnessed unprecedented advances in parallel computing hardware, software has not kept pace with this development. Our effort has focused on developing efficient algorithms and software for high-speed parallel scientific computing to meet this demand.

This thesis presents the theory and design of a new distributed computing system, the Self-Synchronizing Concurrent Computing System (SESYCCS), for the efficient solution of a large class of compute-bound scientific problems. It establishes that separating synchronization from computation has a number of merits, notably boosting the efficiency of implementation and reducing memory requirements.

We propose two new models for distributed computation: Static Computation Graphs (SCGs) and Dynamic Computation Graphs (DCGs). A robust theory is developed for understanding their behavior, and a new algorithm is proposed for efficient self-synchronization of SCGs that optimizes the allocation of computing resources.

We present new self-synchronization algorithms for DCGs and derive concrete quantitative results for the efficiency of their implementation. We study in some detail the tradeoff between finite memory and speed of computation.

Application of the algorithms to the simulation of discrete-event systems is described, and a new algorithm, Wolf, is proposed and analyzed that promises high processor utilization along with a significant speedup in computation.
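A defining property of a static computation graph is that the task structure is fixed before execution begins, so a valid execution order can be derived once, up front. The following minimal sketch illustrates that idea with a topological scheduling pass over a dependency map; the function name and the dictionary representation are assumptions made for illustration, not the data structures used in the thesis.

```python
# Minimal sketch of scheduling a static computation graph (SCG).
# Because the graph is fixed before execution, a complete schedule
# can be derived once, up front. All names here are illustrative
# assumptions, not the thesis's actual API.
from collections import deque

def topological_schedule(deps):
    """Return an execution order for tasks given {task: [prerequisites]}."""
    indegree = {t: len(p) for t, p in deps.items()}
    dependents = {t: [] for t in deps}
    for task, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(task)
    # Tasks with no prerequisites are immediately runnable.
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for d in dependents[t]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(deps):
        raise ValueError("cycle detected: not a valid computation graph")
    return order

# Example: c depends on a and b; d depends on c.
deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
print(topological_schedule(deps))  # → ['a', 'b', 'c', 'd']
```

A dynamic computation graph, by contrast, would not admit such a one-shot schedule, since tasks and dependencies appear during execution; this is what motivates the separate self-synchronization algorithms for DCGs discussed in the abstract.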