Frontiers'96, the Sixth Symposium on the Frontiers of Massively Parallel Computation : October 27-31, 1996, Annapolis, Maryland : Proceedings
Papers from the October 1996 symposium combine perspectives on architecture, applications, and systems, with special focus on future systems concepts, especially petaflops computing. Includes sections on scheduling and routing, applications and algorithms, petaflops computing and point design studies, SIMD, I/O techniques, memory management, synchronization, networks, and performance analysis. Specific subjects include a quasi-barrier technique to improve performance of an irregular application, hardware-controlled prefetching in directory-based cache coherent systems, and point designs for 100 TF computers using PIM technologies. No index. Annotation copyrighted by Book News, Inc., Portland, OR.
Integrating Polling, Interrupts, and Thread Management
Analysis of Deadlock-Free Path-Based Wormhole Multicasting
Turn Grouping for Efficient Multicast in Wormhole Mesh Networks
26 other sections not shown