BUS-BASED SHARED-MEMORY MULTIPROCESSOR
Abstract
Cache Only Memory Access (COMA) multiprocessors support scalable coherent
shared memory with a uniform memory access programming model. The cache-based
organization of memory results in long memory access latencies [2]. Latency hiding
mechanisms can reduce effective memory latency by making data present in a processor’s
local memory by the time the data is needed. In this paper, we study the effectiveness of
latency hiding mechanisms on the KSR2 multiprocessor in improving the performance of three
programs. The communication patterns of each program are analyzed and mechanisms for
latency hiding are applied.

DICE is a shared-bus multiprocessor based on a distributed shared-memory architecture known as cache-only memory architecture (COMA) [3]. Unlike previous COMA proposals for large-scale multiprocessing, DICE utilizes COMA to effectively reduce the speed gap between modern high-performance microprocessors and the bus. DICE tries to optimize COMA for a shared-bus medium, in particular to reduce the detrimental effects of cache coherence and of the ‘last memory block’ problem on replacement. In this paper, we present a global bus design of DICE based on the IEEE Futurebus+ backplane bus and the Texas Instruments chip set [5].
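
For illustration only (this sketch is not taken from either paper), the following C fragment shows one common latency-hiding mechanism, software prefetching: data is requested a fixed distance ahead of its use so that it is already present in the processor's local memory by the time it is needed. The loop, the PREFETCH_AHEAD distance, and the use of the GCC/Clang __builtin_prefetch intrinsic are assumptions made for the sketch; on a machine such as the KSR2 the corresponding role is played by its own machine-specific primitives (e.g., prefetch and poststore).

    #include <stddef.h>

    /* Illustrative sketch only: hide memory latency by prefetching
       array elements a fixed distance ahead of their use.
       PREFETCH_AHEAD is a hypothetical tuning parameter. */
    #define PREFETCH_AHEAD 16

    double sum_with_prefetch(const double *a, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (i + PREFETCH_AHEAD < n)
                /* Non-binding hint (GCC/Clang intrinsic): start moving
                   a[i + PREFETCH_AHEAD] toward local memory now, so its
                   access latency overlaps with the additions below. */
                __builtin_prefetch(&a[i + PREFETCH_AHEAD], 0, 1);
            sum += a[i];
        }
        return sum;
    }

The prefetch distance trades off issuing the request early enough to cover the memory latency against occupying local memory too soon; choosing it well requires knowing the program's access and communication pattern, which is the kind of analysis the latency-hiding study above performs.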